Compare commits


674 Commits

Author SHA1 Message Date
Soulter
e9be8cf69f chore: bump version to 4.7.4 2025-12-01 18:42:07 +08:00
Soulter
31d53edb9d refactor: standardize provider test method implementation
- Updated the `test` method in all provider classes to remove return values and raise exceptions for failure cases, enhancing clarity and consistency.
- Adjusted related logic in the dashboard and command routes to align with the new `test` method behavior, simplifying error handling.
2025-12-01 18:37:08 +08:00
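The refactor above changes the `test` contract from returning a value to raising on failure. A minimal sketch of what such a contract can look like; the class and exception names here are illustrative assumptions, not AstrBot's actual provider API:

```python
import asyncio


class ProviderTestError(Exception):
    """Raised when a provider availability test fails (illustrative)."""


class ExampleProvider:
    def __init__(self, api_key: str | None) -> None:
        self.api_key = api_key

    async def test(self) -> None:
        # No return value: success is silent, failure raises with a reason.
        if not self.api_key:
            raise ProviderTestError("API key is not configured")
        # A real implementation would issue a lightweight request here and
        # wrap transport errors in ProviderTestError.


async def main() -> None:
    for provider in (ExampleProvider("sk-demo"), ExampleProvider(None)):
        try:
            await provider.test()
            print("provider reachable")
        except ProviderTestError as exc:
            print(f"provider unavailable: {exc}")


asyncio.run(main())
```

Callers such as the dashboard then only need a try/except instead of interpreting return values, which is the simplification the related commits describe.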
Soulter
2ba0460f19 feat: introduce file extract capability (#3870)
* feat: introduce file extract capability

powered by MoonshotAI

* fix: correct indentation in default configuration file

* fix: add error handling for file extract application in InternalAgentSubStage

* fix: update file name handling in InternalAgentSubStage to correctly associate file names with extracted content

* feat: add condition settings for local agent runner in default configuration

* fix: enhance file naming logic in File component and update prompt handling in InternalAgentSubStage
2025-12-01 18:12:39 +08:00
雪語
0e034f0fbd fix: aiocqhttp adapter returns an empty file name with NapCat (#3853)
* aiocqhttp adapter returns an empty file name with NapCat

Fixes File.name of file messages being empty when using NapCat. The original code hard-coded name="", so downstream plugins could not obtain the file name or extension.

* Enhance file name retrieval from message data

Updated file name extraction logic to check multiple fields for better accuracy.
2025-12-01 13:36:19 +08:00
Soulter
2a7d03f9e1 fix: fit language and log AI responses more clearly (#3864)
* fix: fit language and log AI responses more clearly

* chore: ruff format
2025-12-01 13:24:52 +08:00
Soulter
72fac4b9f1 feat: implement unified provider availability testing across components (#3865)
- Added a `test` method to each provider class to standardize availability checks.
- Updated the dashboard and command routes to utilize the new `test` method for provider reachability verification, simplifying the logic and improving maintainability.
- Removed redundant reachability check logic from the command handler.
2025-12-01 13:17:20 +08:00
Soulter
38281ba2cf refactor: restore reachability check configuration in default settings and localization files 2025-12-01 00:38:30 +08:00
Soulter
21aa3174f4 fix: disable reachability check in default configuration 2025-12-01 00:16:11 +08:00
邹永赫
dcda871fc0 feat: provider availability reachability improvements (#3708) 2025-12-01 01:06:10 +09:00
Soulter
c13c51f499 fix: assistant message validation error when tool_call exists but content not exists (#3862)
* fix: assistant message validation error when tool_call exists but content not exists

* fix: enhance content validation in Message model to allow None for assistant role with tool_calls
2025-11-30 23:42:37 +08:00
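The fix above relaxes content validation for assistant messages that carry tool calls. A minimal pydantic sketch of that rule, using illustrative field and class names rather than AstrBot's actual `Message` model:

```python
from pydantic import BaseModel, model_validator


class ToolCall(BaseModel):
    name: str
    arguments: dict = {}


class ChatMessage(BaseModel):
    role: str
    content: str | None = None
    tool_calls: list[ToolCall] | None = None

    @model_validator(mode="after")
    def check_content(self) -> "ChatMessage":
        # Assistant messages may omit content when they only carry tool calls.
        if self.content is None:
            if self.role == "assistant" and self.tool_calls:
                return self
            raise ValueError("content is required unless an assistant message has tool_calls")
        return self


# Valid: an assistant message with tool_calls and no content passes validation.
print(ChatMessage(role="assistant", tool_calls=[ToolCall(name="get_weather")]))
```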
Dt8333
a130db5cf4 fix: change the graceful shutdown exception to KeyboardInterrupt (#3855) 2025-11-30 20:31:17 +08:00
邹永赫
7faeb5cea8 Merge pull request #3850 from zouyonghe/feature/plugin-upgrade-all
Add a button to upgrade all plugins
2025-11-30 15:12:36 +09:00
ZouYonghe
8d3ff61e0d Format plugin route with ruff 2025-11-30 11:56:24 +08:00
ZouYonghe
4c03e82570 Fix plugin update JSON parsing and concurrency handling 2025-11-30 11:50:46 +08:00
ZouYonghe
e7e8664ab4 chore: tweak update all label 2025-11-30 11:18:30 +08:00
ZouYonghe
1dd1623e7d feat: batch update plugins via new api 2025-11-30 11:11:36 +08:00
ZouYonghe
80d8161d58 feat: add update all plugins action 2025-11-30 10:40:46 +08:00
Soulter
fc80d7d681 chore: bump version to 4.7.3 2025-11-30 00:42:49 +08:00
Soulter
c2f036b27c chore: bump version to 4.7.2 2025-11-30 00:33:07 +08:00
Soulter
4087bbb512 perf: set content attribute optional to AssistantMessageSegment for enhanced message handling
fixes: #3843
2025-11-30 00:32:00 +08:00
Soulter
e1c728582d chore: bump version to 4.7.2 2025-11-30 00:18:23 +08:00
Oscar Shaw
93c69a639a feat: add a dedicated image caption model configuration for group chat mode (#3822)
* feat: add image caption provider configuration for group chat

- Introduced `image_caption_provider_id` to allow separate configuration for group chat image understanding.
- Updated metadata and hints in English and Chinese for clarity on new settings.
- Adjusted logic in long term memory to utilize the new provider ID for image captioning.

* fix: format

* Fix logic for image caption and active reply settings

* Fix indentation and formatting in long_term_memory.py

* chore: ruff format

---------

Co-authored-by: Soulter <37870767+Soulter@users.noreply.github.com>
Co-authored-by: Soulter <905617992@qq.com>
2025-11-29 23:53:32 +08:00
Soulter
a7fdc98b29 fix: third party agent runner cannot run properly when using non-default config file
fix: #3815
2025-11-29 23:45:12 +08:00
Soulter
85b7f104df fix: remove unnecessary provider check (#3846)
fixes: #3815
2025-11-29 23:15:19 +08:00
Oscar Shaw
d76d1bd7fe perf: adjust padding for PlatformPage and ProviderPage log sections (#3825)
- Added bottom margin to log card for better spacing.
2025-11-29 19:15:35 +08:00
Soulter
df4412aa80 style: adjust bot-embedded-image max-width and remove hover effect for improved layout 2025-11-29 01:31:25 +08:00
Soulter
ab2c94e19a chore: comment out error logging in provider sources to reduce verbosity 2025-11-28 19:59:33 +08:00
Oscar Shaw
37cc4e2121 perf: console tag UI improve (#3816)
- Added yarn.lock to .gitignore to prevent tracking of Yarn lock files.
- Updated ConsoleDisplayer.vue to improve chip styling
2025-11-28 17:17:11 +08:00
Soulter
60dfdd0a66 chore: update astrbot cli version 2025-11-28 16:53:20 +08:00
Soulter
bb8b2cb194 chore: bump version to 4.7.1 2025-11-28 15:13:35 +08:00
Soulter
4e29684aa3 fix: add plugin set and knowledge bases selection in custom rules page (#3813)
fixes: #3806
2025-11-28 13:29:50 +08:00
Soulter
0e17e3553d chore: bump version to 4.7.0 2025-11-27 23:50:05 +08:00
Soulter
0a55060e89 fix: session controller in webchat 2025-11-27 22:32:35 +08:00
Soulter
77859c7daa feat: enhance provider status display in ProviderPage
- Added a tooltip to show detailed provider status, including availability and error messages.
- Refactored item details template to include status chips for better visual representation.
- Removed unused status section to streamline the UI.
2025-11-27 16:39:51 +08:00
Soulter
ba39c393a0 perf: enhance provider management with reload locking and logging (#3793)
- Introduced a reload lock to prevent concurrent reloads of providers.
- Added logging to indicate when a provider is disabled and when providers are being synchronized with the configuration.
- Refactored the reload method to improve clarity and maintainability.


Co-authored-by: anka <1350989414@qq.com>
2025-11-27 16:25:31 +08:00
Soulter
6a50d316d9 fix: mcp server cannot reload successfully after updating mcp server config (#3797)
fixes: #3780
2025-11-27 16:22:26 +08:00
Soulter
88c1d77f0b perf: add at message to group chat history (#3796)
* feat: enhance long-term memory message formatting

- Added support for 'At' message components in long-term memory, allowing for better representation of mentions in messages.

* chore: ruff check
2025-11-27 15:59:07 +08:00
Dt8333
758ce40cc1 chore: fix test (#3787) 2025-11-27 14:02:42 +08:00
Soulter
3e7bb80492 chore: ruff format 2025-11-27 14:01:25 +08:00
Soulter
75e95aa9ca fix: update session management icon in sidebar
- Changed the icon for the session management sidebar item from 'mdi-account-group' to 'mdi-pencil-ruler' for better representation.
2025-11-27 14:00:05 +08:00
Soulter
a389842e25 feat: update session management UI with information button and layout adjustments
- Added an information button linking to custom rules documentation.
- Adjusted layout for improved spacing and readability in the session management page.
- Minor refactoring of the data table component for better alignment.
2025-11-27 13:58:37 +08:00
Soulter
0f6a3c3f5a refactor: session management custom rules (#3792)
* refactor: umo custom rules

* feat(i18n): update session management translations and improve provider configuration handling

- Updated English and Chinese translations for session management, including "Unified Message Origin" and "Follow Config".
- Enhanced provider configuration options to include "Follow Config" as a selectable item.
- Removed unused clear buttons and refactored provider configuration saving logic to handle updates and deletions more efficiently.
2025-11-27 13:30:43 +08:00
Soulter
133f27422d feat: implement i18n of astrbot config (#3772)
* feat: implement i18n of astrbot config

* feat(config): update configuration metadata with i18n details and future deprecation notes
2025-11-26 16:40:58 +08:00
RC-CHN
abc6deb244 feat: add plugin logo placeholder (#3784) 2025-11-26 16:22:11 +08:00
teapot1de
06869b4597 docs: clarify segmented_reply words_count_threshold hint (#3779)
Update the configuration hint for `words_count_threshold` to explicitly state that it acts as a maximum limit for segmentation, preventing user confusion about it being a minimum trigger.
2025-11-26 16:15:09 +08:00
dependabot[bot]
d32cea9870 chore(deps): bump actions/checkout in the github-actions group (#3775)
Bumps the github-actions group with 1 update: [actions/checkout](https://github.com/actions/checkout).


Updates `actions/checkout` from 5 to 6
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v5...v6)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: github-actions
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-26 16:13:42 +08:00
Soulter
4b68100f16 feat(chat): add standalone chat component and integrate with config page for testing configurations (#3767)
* feat(chat): add standalone chat component and integrate with config page for testing configurations

* feat(chat): add error handling for message sending and session creation
2025-11-24 22:06:02 +08:00
Soulter
5c5515d462 fix: segmented reply regex error handling (#3771)
* fix: segmented reply regex error handling

closes: #3761

* fix: improve regex handling for segmented replies to support multiline input

* fix: update regex handling in ResultDecorateStage to use findall for segmented replies

* fix: update error logging message for segmented reply regex handling
2025-11-24 22:00:59 +08:00
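The follow-up commits above move segmentation to `re.findall` and harden it against bad patterns and multiline input. A small, self-contained sketch of that approach; the pattern and fallback behavior are illustrative, not the exact code in ResultDecorateStage:

```python
import re


def split_segments(text: str, pattern: str = r"[^。!?!?\n]+[。!?!?\n]?") -> list[str]:
    """Split a reply into segments with findall; fall back to one segment on regex errors."""
    try:
        segments = re.findall(pattern, text)
    except re.error as exc:
        print(f"invalid segmentation regex, sending the reply unsplit: {exc}")
        return [text]
    return [seg.strip() for seg in segments if seg.strip()]


print(split_segments("First sentence! Second line\nspanning input?"))
```

Because `findall` scans the whole string, newline-separated input is handled without anchoring tricks, which is the multiline issue the second commit refers to.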
Soulter
3932b8f982 Merge pull request #3760 from AstrBotDevs/feat/agent-runner
refactor: transfer dify, coze and alibaba dashscope from chat provider to agent runner
2025-11-24 15:33:20 +08:00
Soulter
82488ca900 feat(api): enhance file upload method to support mime type and file name 2025-11-24 15:30:49 +08:00
Soulter
29d9b9b2d6 feat(config): add condition for display_reasoning_text based on agent_runner_type 2025-11-24 15:10:17 +08:00
Soulter
02215e9b7b feat(config): update hint for agent_runner execution method to clarify third-party integration 2025-11-24 15:07:33 +08:00
Soulter
7160b7a18b fix: dify workflow streaming mode 2025-11-24 15:04:15 +08:00
Soulter
ea8dac837a feat(config): enhance hint for agent_runner execution method in configuration 2025-11-24 14:42:36 +08:00
Soulter
e2a7a028bd feat(migration): enhance migration process with error handling and agent runner config updates 2025-11-24 14:37:25 +08:00
Soulter
70db8d264b fix(config): disable auto_save_history option in configuration 2025-11-24 14:25:14 +08:00
Soulter
0518e6d487 feat(config): add hint for agent_runner execution method in configuration 2025-11-24 14:23:53 +08:00
Soulter
39eb367866 perf: improve file structure
- Implemented CozeAPIClient for file upload, image download, chat messaging, and context management.
- Developed DashscopeAgentRunner for handling requests to the Dashscope API with streaming support.
- Created DifyAgentRunner to manage interactions with the Dify API, including file uploads and workflow execution.
- Introduced DifyAPIClient for making asynchronous requests to the Dify API.
- Updated third-party agent imports to reflect new module structure.
2025-11-24 14:00:16 +08:00
Soulter
f1d51a22ad feat(dashscope_agent_runner): refactor request payload construction and enhance streaming response handling 2025-11-24 13:21:34 +08:00
Soulter
77fb554e8f feat(dashscope_agent_runner): implement streaming response handling and request payload construction 2025-11-24 13:09:57 +08:00
Soulter
91f8a0ae09 fix(provider_manager): use get method for provider_type check in load_provider 2025-11-24 10:57:13 +08:00
Soulter
370cda7cf0 feat(dify_api_client): add docstring for file_upload method 2025-11-24 10:53:50 +08:00
Soulter
66b3eed273 fix: correct typo in agent state transition log message 2025-11-24 00:03:22 +08:00
Soulter
99b061a143 fix: make session properties required in Session interface 2025-11-23 23:25:29 +08:00
Soulter
5f3c7ed673 feat(conversation): update agent runner type configuration path to provider_settings 2025-11-23 23:05:36 +08:00
Soulter
a6dc458212 feat(third-party-agent): implement streaming response handling and enhance agent execution flow 2025-11-23 23:03:56 +08:00
Soulter
520f521887 feat(provider): enhance agent runner provider selection with subtype filtering 2025-11-23 22:23:23 +08:00
Soulter
01427d9969 feat(config): add hint for non-built-in agent execution model configuration 2025-11-23 22:13:52 +08:00
Soulter
34c03ce983 Merge remote-tracking branch 'origin/master' into feat/agent-runner 2025-11-23 22:06:52 +08:00
Soulter
95e9da42d6 fix(webchat): webchat session cannot be deleted (#3759) 2025-11-23 22:03:07 +08:00
Soulter
1338cab61b feat: add configuration selector for session management and enhance session handling in chat components 2025-11-23 21:53:56 +08:00
Soulter
7ba98c1e91 feat: enhance provider display with grouped categorization and improved filtering 2025-11-23 21:06:16 +08:00
Soulter
9a5f507cbe feat: enable agent runner providers in configuration 2025-11-23 20:58:18 +08:00
Soulter
d560671d1f feat: agent runner config migration 2025-11-23 20:54:19 +08:00
Soulter
82c9cf4db6 chore: remove legacy coze and dashscope provider 2025-11-23 20:18:51 +08:00
Soulter
910ec6c695 feat: implement third party agent sub stage and refactor provider management
- Added `ThirdPartyAgentSubStage` to handle interactions with third-party agent runners (Dify, Coze, Dashscope).
- Refactored `star_request.py` to ensure consistent return types in the `process` method.
- Updated `stage.py` to initialize and utilize the new `AgentRequestSubStage`.
- Modified `ProviderManager` to skip loading agent runner providers.
- Removed `Dify` source implementation as it is now handled by the new agent runner structure.
- Enhanced `DifyAPIClient` to support file uploads via both file path and file data.
- Cleaned up shared preferences handling to simplify session preference retrieval.
- Updated dashboard configuration to reflect changes in agent runner provider selection.
- Refactored conversation commands to accommodate the new agent runner structure and remove direct dependencies on Dify.
- Adjusted main application logic to ensure compatibility with the new conversation management approach.
2025-11-23 20:18:51 +08:00
Soulter
766d6f2bec fix(conversation): update session configuration retrieval to use unified message origin 2025-11-23 20:18:51 +08:00
Soulter
9f39140987 fix(conversation): update session configuration retrieval to use unified message origin 2025-11-23 19:59:21 +08:00
Soulter
89716ef4da Merge remote-tracking branch 'origin/master' into feat/agent-runner 2025-11-23 14:48:08 +08:00
Soulter
3c4ea5a339 chore: bump version to 4.6.1 2025-11-23 13:58:53 +08:00
Soulter
601846a8c1 docs: refine readme 2025-11-22 18:57:08 +08:00
Soulter
85d66c1056 fix(migration): update migration_done key for webchat session tracking (#3746) 2025-11-22 18:51:00 +08:00
Dt8333
b89d3f663c fix(core.db): fix webchat not being migrated correctly after upgrade (#3745)
Not everyone is named Astrbot

#3722
2025-11-22 18:37:39 +08:00
Soulter
0260d430d1 Merge pull request #3706 from piexian/master 2025-11-22 01:11:35 +08:00
piexian
2e608cdc09 refactor(bailian_rerank): fix accidental deletion and optimize top_n parameter handling
- Remove the unreasonable knowledge base config reading logic
- Add the os module import (for reading environment variables)
- Extract helper functions: _build_payload(), _parse_results(), _log_usage()
- Add custom exception classes: BailianRerankError, BailianAPIError, BailianNetworkError
- Use .get() to safely access API response fields and avoid KeyError
- Use raise ... from e to preserve the exception chain
2025-11-21 05:34:18 +08:00
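Two of the points above, `.get()` access to avoid `KeyError` and `raise ... from e` to keep the exception chain, can be sketched generically as follows; the names are illustrative and not the actual bailian_rerank module:

```python
class RerankAPIError(Exception):
    """Illustrative stand-in for a provider-specific API error."""


def parse_results(response: dict) -> list[dict]:
    # .get() with defaults avoids KeyError when the API omits fields.
    output = response.get("output", {})
    return output.get("results", [])


def rerank(response: dict) -> list[dict]:
    try:
        return parse_results(response)
    except Exception as e:
        # "raise ... from e" preserves the original traceback as __cause__.
        raise RerankAPIError("failed to parse rerank response") from e


print(rerank({"output": {"results": [{"index": 0, "relevance_score": 0.91}]}}))
print(rerank({}))  # missing fields degrade to an empty list instead of raising KeyError
```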
piexian
234ce93dc1 refactor(bailian_rerank): improve code quality and error handling
- Remove the unused os import
- Simplify the API key validation logic
- Optimize top_n parameter handling, preferring the passed-in value
- Improve error handling, using RuntimeError instead of the generic Exception
- Add exception chaining to preserve the original error context
2025-11-21 04:07:45 +08:00
Soulter
4e2154feb7 fix(ci): repository name must be lowercase 2025-11-20 23:46:34 +08:00
Soulter
604958898c chore: bump version to 4.6.0 2025-11-20 23:41:20 +08:00
Soulter
a093f5ad0a fix(dependencies): specify upper version limit for google-genai 2025-11-20 23:32:05 +08:00
Soulter
a7e9a7f30c fix(gemini): ensure extra_content is not empty before processing 2025-11-20 23:30:19 +08:00
Soulter
5d1e9de096 Merge pull request #3678 from AstrBotDevs/refactor/webchat-session
refactor: Implement WebChat session management and migration
2025-11-20 17:23:10 +08:00
Soulter
89da4eb747 Merge branch 'master' into refactor/webchat-session 2025-11-20 17:21:48 +08:00
Soulter
8899a1dee1 feat(chat): refactor chat component structure and add new features (#3701)
- Introduced `ConversationSidebar.vue` for improved conversation management and sidebar functionality.
- Enhanced `MessageList.vue` to handle loading states and improved message rendering.
- Created new composables: `useConversations`, `useMessages`, `useMediaHandling`, `useRecording` for better code organization and reusability.
- Added loading indicators and improved user experience during message processing.
- Ensured backward compatibility and maintained existing functionalities.
2025-11-20 17:19:45 +08:00
Soulter
384a687ec3 delete: remove useConversations composable 2025-11-20 17:15:47 +08:00
Soulter
70cfdd2f8b feat(chat): refactor chat component structure and add new features (#3701)
- Introduced `ConversationSidebar.vue` for improved conversation management and sidebar functionality.
- Enhanced `MessageList.vue` to handle loading states and improved message rendering.
- Created new composables: `useConversations`, `useMessages`, `useMediaHandling`, `useRecording` for better code organization and reusability.
- Added loading indicators and improved user experience during message processing.
- Ensured backward compatibility and maintained existing functionalities.
2025-11-20 17:15:04 +08:00
Soulter
bdbd2f009a delete: useConversations 2025-11-20 17:11:01 +08:00
Soulter
164e0d26e0 feat(chat): refactor chat component structure and add new features (#3701)
- Introduced `ConversationSidebar.vue` for improved conversation management and sidebar functionality.
- Enhanced `MessageList.vue` to handle loading states and improved message rendering.
- Created new composables: `useConversations`, `useMessages`, `useMediaHandling`, `useRecording` for better code organization and reusability.
- Added loading indicators and improved user experience during message processing.
- Ensured backward compatibility and maintained existing functionalities.
2025-11-20 17:10:36 +08:00
Soulter
cb087b5ff9 refactor: update timestamp handling in session management and chat components 2025-11-20 17:02:01 +08:00
Soulter
1d3928d145 refactor(sqlite): remove auto-generation of session_id in insert method 2025-11-20 16:33:57 +08:00
Soulter
6dc3d161e7 feat(chat): refactor chat component structure and add new features (#3701)
- Introduced `ConversationSidebar.vue` for improved conversation management and sidebar functionality.
- Enhanced `MessageList.vue` to handle loading states and improved message rendering.
- Created new composables: `useConversations`, `useMessages`, `useMediaHandling`, `useRecording` for better code organization and reusability.
- Added loading indicators and improved user experience during message processing.
- Ensured backward compatibility and maintained existing functionalities.
2025-11-20 16:30:05 +08:00
Soulter
e9805ba205 fix: anyio.ClosedResourceError when calling mcp tools (#3700)
* fix: anyio.ClosedResourceError when calling mcp tools

added reconnect mechanism

fixes: 3676

* fix(mcp_client): implement thread-safe reconnection using asyncio.Lock
2025-11-20 16:24:02 +08:00
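The second bullet above guards reconnection with an `asyncio.Lock` so concurrent tool calls do not reconnect the same client twice. A minimal sketch of that pattern; the class and method names are illustrative, not the real mcp_client:

```python
import asyncio


class ReconnectingClient:
    def __init__(self) -> None:
        self.connected = False
        self._reconnect_lock = asyncio.Lock()

    async def _connect(self) -> None:
        await asyncio.sleep(0.01)  # stand-in for the real handshake
        self.connected = True
        print("connected")

    async def ensure_connected(self) -> None:
        if self.connected:
            return
        async with self._reconnect_lock:
            # Re-check under the lock: another task may have reconnected already.
            if not self.connected:
                await self._connect()


async def main() -> None:
    client = ReconnectingClient()
    # Many concurrent callers, but only one reconnect happens.
    await asyncio.gather(*(client.ensure_connected() for _ in range(5)))


asyncio.run(main())
```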
Dt8333
d5280dcd88 fix(core.platform): fix message mix-ups when multiple WeCom AI bot adapters are enabled (#3693)
* fix(core.platform): fix message mix-ups when multiple WeCom AI bot adapters are enabled

Removed the global message queue; each adapter now handles its own queue. Related methods were updated to accommodate this change.

#3673

* chore: apply suggestions from code review

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>

---------

Co-authored-by: Soulter <37870767+Soulter@users.noreply.github.com>
Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>
2025-11-20 16:24:02 +08:00
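The fix above replaces a shared global queue with one queue per adapter so messages from different WeCom AI bot instances cannot interleave. A rough sketch of that ownership change, with illustrative names only:

```python
import asyncio


class AdapterInstance:
    """Each adapter owns its queue instead of sharing a module-level one."""

    def __init__(self, name: str) -> None:
        self.name = name
        self.queue: asyncio.Queue[str] = asyncio.Queue()

    async def put(self, message: str) -> None:
        await self.queue.put(message)

    async def consume_one(self) -> str:
        message = await self.queue.get()
        return f"{self.name} handled: {message}"


async def main() -> None:
    bot_a, bot_b = AdapterInstance("bot_a"), AdapterInstance("bot_b")
    await bot_a.put("hello from tenant A")
    await bot_b.put("hello from tenant B")
    # Each adapter only ever sees its own messages.
    print(await bot_a.consume_one())
    print(await bot_b.consume_one())


asyncio.run(main())
```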
Dt8333
67a9663eff fix(dashboard.i18n): complete the missing i18n keys(#3699)
#3679
2025-11-20 16:24:02 +08:00
Soulter
77dd89b8eb feat: add supports for gemini-3 series thought signature (#3698)
* feat: add supports for gemini-3 series thought signature

* feat: refactor tools_call_extra_content to use a dictionary for better structure
2025-11-20 16:24:02 +08:00
Soulter
8e511bf14b fix: build docker ci failed 2025-11-20 16:24:02 +08:00
Soulter
164a4226ea feat(chat): refactor chat component structure and add new features (#3701)
- Introduced `ConversationSidebar.vue` for improved conversation management and sidebar functionality.
- Enhanced `MessageList.vue` to handle loading states and improved message rendering.
- Created new composables: `useConversations`, `useMessages`, `useMediaHandling`, `useRecording` for better code organization and reusability.
- Added loading indicators and improved user experience during message processing.
- Ensured backward compatibility and maintained existing functionalities.
2025-11-20 16:07:09 +08:00
Soulter
6d6fefc435 fix: anyio.ClosedResourceError when calling mcp tools (#3700)
* fix: anyio.ClosedResourceError when calling mcp tools

added reconnect mechanism

fixes: 3676

* fix(mcp_client): implement thread-safe reconnection using asyncio.Lock
2025-11-20 16:01:22 +08:00
Soulter
aa59532287 refactor: implement migration for WebChat sessions by creating PlatformSession records from platform_message_history 2025-11-20 15:58:27 +08:00
piexian
2ada1deb9a Fix document response reading issue 2025-11-20 08:31:50 +08:00
piexian
788ceb9721 Add Alibaba Bailian rerank model 2025-11-20 08:05:42 +08:00
Dt8333
8488c9aeab fix(core.platform): fix message mix-ups when multiple WeCom AI bot adapters are enabled (#3693)
* fix(core.platform): fix message mix-ups when multiple WeCom AI bot adapters are enabled

Removed the global message queue; each adapter now handles its own queue. Related methods were updated to accommodate this change.

#3673

* chore: apply suggestions from code review

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>

---------

Co-authored-by: Soulter <37870767+Soulter@users.noreply.github.com>
Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>
2025-11-19 21:44:38 +08:00
Dt8333
676f9fd4ff fix(dashboard.i18n): complete the missing i18n keys(#3699)
#3679
2025-11-19 21:36:34 +08:00
Soulter
1935ce4700 refactor: update session handling by replacing conversation_id with session_id in chat routes and components 2025-11-19 19:54:29 +08:00
Soulter
e760956353 refactor: enhance PlatformSession migration by adding display_name from Conversations and improve session item styling 2025-11-19 19:41:57 +08:00
Soulter
be3e5f3f8b refactor: update message history deletion logic to remove newer records based on offset 2025-11-19 19:41:25 +08:00
Soulter
cdf617feac refactor: optimize WebChat session migration by batch inserting records 2025-11-19 19:16:15 +08:00
Soulter
afb56cf707 feat: add supports for gemini-3 series thought signature (#3698)
* feat: add supports for gemini-3 series thought signature

* feat: refactor tools_call_extra_content to use a dictionary for better structure
2025-11-19 18:54:56 +08:00
Soulter
cd2556ab94 fix: build docker ci failed 2025-11-19 15:40:41 +08:00
Soulter
cf4a5d9ea4 refactor: change to platform session 2025-11-18 22:37:55 +08:00
Soulter
0747099cac fix: restore migration check for version 4.7 2025-11-18 22:07:43 +08:00
Soulter
323ec29b02 refactor: Implement WebChat session management and migration from version 4.6 to 4.7
- Added WebChatSession model for managing user sessions.
- Introduced methods for creating, retrieving, updating, and deleting WebChat sessions in the database.
- Updated core lifecycle to include migration from version 4.6 to 4.7, creating WebChat sessions from existing platform message history.
- Refactored chat routes to support new session-based architecture, replacing conversation-related endpoints with session endpoints.
- Updated frontend components to handle sessions instead of conversations, including session creation and management.
2025-11-18 22:04:26 +08:00
magisk317
ae81d70685 ci(docker-build): build nightly image everyday (#3120)
* ci: build test image on master pushes

* ci: split workflows for master test and release builds

* test ci

* test ci

* Update docker-image.yml

* test ci

Updated README to enhance deployment instructions.

* Make GHCR publishing optional in Docker workflow

* chore: Update DockerHub password secret in workflow

* Update .github/workflows/docker-image.yml

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* chore: rename job to build nightly image in workflow

* feat: schedule the nightly build at 0:00 am everyday, if have new commits

* fix: update build-nightly-image job to trigger only on schedule events

* Update fetch-depth and enable fetch-tag in workflows

---------

Co-authored-by: Soulter <37870767+Soulter@users.noreply.github.com>
Co-authored-by: LIghtJUNction <lightjunction.me@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Soulter <905617992@qq.com>
2025-11-18 10:47:58 +08:00
RC-CHN
270c89c12f feat: Add URL document parser for knowledge base (#3622)
* feat: add support for uploading documents from a URL, with progress callbacks and error handling

* feat: add the frontend for uploading documents from a URL

* chore: add a warning for the URL upload feature to ensure users configure it correctly

* feat: add content cleaning, with cleaning settings and provider selection when uploading documents from a URL

* feat: update the content-cleaning system prompt to strengthen information extraction rules; add a beta label to the URL upload feature

* style: format code

* perf: optimize upload settings, improving the disable logic for URL uploads and the cleaning provider validation

* refactor: use the built-in chunking module

* refactor: extract the prompt into a separate file

* feat: add a Tavily API Key configuration dialog to improve the web search configuration experience

* fix: update URL hint and warning messages for clarity in knowledge base upload settings

* fix: fix the hot-reload issue when setting tavily_key

---------

Co-authored-by: Soulter <905617992@qq.com>
2025-11-17 19:05:14 +08:00
Soulter
c7a58252fe feat: supports knowledge base agentic search (#3667)
* feat: supports knowledge base agentic search

* fix: correct formatting of system prompt in knowledge base results
2025-11-17 17:29:18 +08:00
Soulter
47ad8c86e5 docs: update translations of README 2025-11-17 12:50:01 +08:00
Soulter
937e879e5e chore: revise the issue template
Updated the bug report template to include English translations for all fields and improved clarity.
2025-11-17 11:35:24 +08:00
Soulter
1ecf26eead chore: revise PR template
Removed unnecessary comments and streamlined the pull request template.
2025-11-17 11:27:48 +08:00
Soulter
adbb84530a chore: bump version to 4.5.8 2025-11-17 09:58:02 +08:00
piexian
6cf169f4f2 fix: ImageURLPart typo (#3665)
* Fix format mismatch after the new version update

Format produced by entities.py: {"type": "image_url", "image_url": {"url": "data:image/jpeg;base64,..."}}
Format expected by ImageURLPart: {"type": "image_url", "image_url": "data:image/jpeg;base64,..."}

* Revert "Fix format mismatch after the new version update"

This reverts commit 28b4791391.

* fix(core.agent): fix the ImageURLPart declaration and the pydantic validation failure.

---------

Co-authored-by: piexian <piexian@users.noreply.github.com>
Co-authored-by: Dt8333 <lb0016@foxmail.com>
2025-11-17 09:52:31 +08:00
Soulter
5ab9ea12c0 chore: bump version to 4.5.7 2025-11-16 14:01:25 +08:00
Soulter
fd9cb703db refactor: update ToolSet initialization to use Pydantic Field and clean up deprecated methods in Context 2025-11-16 12:13:11 +08:00
Soulter
388c1ab16d fix: ensure parameter properties are correctly handled in spec_to_func 2025-11-16 11:50:58 +08:00
Soulter
f867c2a271 feat: enhance parameter type handling in LLM tool registration with JSON schema support (#3655)
* feat: enhance parameter type handling in LLM tool registration with JSON schema support

* refactor: remove debug print statement from FunctionToolManager
2025-11-16 00:55:40 +08:00
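The enhancement above lets tool parameters be declared with richer JSON-schema types. A small sketch of what a schema-driven tool spec can look like and how it might be summarized; the helper and spec layout are assumptions for illustration, not AstrBot's FunctionToolManager API:

```python
import json

# A tool parameter block in JSON-schema form, including enum and bounded integer types.
WEATHER_TOOL_SPEC = {
    "name": "get_weather",
    "description": "Query the weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            "days": {"type": "integer", "minimum": 1, "maximum": 7},
        },
        "required": ["city"],
    },
}


def describe_parameters(spec: dict) -> list[str]:
    """Flatten the schema into human-readable parameter descriptions."""
    props = spec.get("parameters", {}).get("properties", {})
    required = set(spec.get("parameters", {}).get("required", []))
    return [
        f"{name}: {schema.get('type', 'any')}" + (" (required)" if name in required else "")
        for name, schema in props.items()
    ]


print(json.dumps(WEATHER_TOOL_SPEC, indent=2))
print(describe_parameters(WEATHER_TOOL_SPEC))
```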
Soulter
605bb2cb90 refactor: disable debug logging for chunk delta in OpenAI provider 2025-11-15 22:29:06 +08:00
Soulter
5ea15dde5a feat: enhance LLM handsoff tool execution with system prompt and run hooks 2025-11-15 22:26:13 +08:00
Soulter
3ca545c4c7 Merge pull request #3636 from AstrBotDevs/feat/context-llm-capability
refactor: better invoke the LLM / Agent capabilities
2025-11-15 21:41:42 +08:00
Soulter
e200835074 refactor: remove unused Message import and context_model initialization in LLMRequestSubStage 2025-11-15 21:36:54 +08:00
Soulter
3a90348353 Merge branch 'master' into feat/context-llm-capability 2025-11-15 21:34:54 +08:00
Soulter
5a11d8f0ee refactor: LLM response handling with reasoning content (#3632)
* refactor: LLM response handling with reasoning content

- Added a `show_reasoning` parameter to `run_agent` to control the display of reasoning content.
- Updated `LLMResponse` to include a `reasoning_content` field for storing reasoning text.
- Modified `WebChatMessageEvent` to handle and send reasoning content in streaming responses.
- Implemented reasoning extraction in various provider sources (e.g., OpenAI, Gemini).
- Updated the chat interface to display reasoning content in a collapsible format.
- Removed the deprecated `thinking_filter` package and its associated logic.
- Updated localization files to include new reasoning-related strings.

* feat: add Groq chat completion provider and associated configurations

* Update astrbot/core/provider/sources/gemini_source.py

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>

---------

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>
2025-11-15 21:31:03 +08:00
Soulter
824af5eeea fix: Provider.meta() error (#3647)
fixes: #3643
2025-11-15 21:30:05 +08:00
Dt8333
08ec787491 fix(core.platform): make DingTalk user-ID compliant with UMO (#3634) 2025-11-15 21:30:05 +08:00
Soulter
b062e83d54 refactor: remove redundant session lock management from message sending logic in RespondStage (#3645)
fixes: #3644

Co-authored-by: Dt8333 <lb0016@foxmail.com>
2025-11-15 21:30:05 +08:00
Soulter
17422ba9c3 feat: introduce messages field in agent RunContext 2025-11-15 21:15:20 +08:00
Soulter
6849af2bad refactor: LLM response handling with reasoning content (#3632)
* refactor: LLM response handling with reasoning content

- Added a `show_reasoning` parameter to `run_agent` to control the display of reasoning content.
- Updated `LLMResponse` to include a `reasoning_content` field for storing reasoning text.
- Modified `WebChatMessageEvent` to handle and send reasoning content in streaming responses.
- Implemented reasoning extraction in various provider sources (e.g., OpenAI, Gemini).
- Updated the chat interface to display reasoning content in a collapsible format.
- Removed the deprecated `thinking_filter` package and its associated logic.
- Updated localization files to include new reasoning-related strings.

* feat: add Groq chat completion provider and associated configurations

* Update astrbot/core/provider/sources/gemini_source.py

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>

---------

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>
2025-11-15 18:59:17 +08:00
Soulter
09c3da64f9 fix: Provider.meta() error (#3647)
fixes: #3643
2025-11-15 18:01:51 +08:00
Dt8333
2c8470e8ac fix(core.platform): make DingTalk user-ID compliant with UMO (#3634) 2025-11-15 17:31:03 +08:00
Soulter
c4ea3db73d refactor: remove redundant session lock management from message sending logic in RespondStage (#3645)
fixes: #3644

Co-authored-by: Dt8333 <lb0016@foxmail.com>
2025-11-15 16:39:49 +08:00
Soulter
89e79863f6 fix: ensure image_urls and system_prompt default to empty values in ProviderRequest 2025-11-14 22:45:55 +08:00
Soulter
d19945009f refactor: decouple the agent implementation and introduce helper context methods to call the LLM 2025-11-14 19:17:24 +08:00
Soulter
c77256ee0e feat: add id field to ProviderMetaData and update provider manager to set provider ID 2025-11-14 12:35:30 +08:00
Soulter
7d823af627 refactor: update provider metadata handling and enhance ProviderMetaData structure 2025-11-13 19:53:23 +08:00
Soulter
3957861878 refactor: streamline llm processing logic (#3607)
* refactor: streamline llm processing logic

* perf: merge-nested-ifs

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>

* fix: ruff format

* refactor: remove unnecessary debug logs in FunctionToolExecutor and LLMRequestSubStage

---------

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>
2025-11-13 10:08:57 +08:00
Dt8333
6ac43c600e perf: improve streaming fallback strategy for streaming-unsupported platform (#3547)
* feat: modify tool_loop_agent_runner and add a stream_to_general attribute.

Co-authored-by: aider (openai/gemini-2.5-flash-preview) <aider@aider.chat>

* refactor: optimize text_chat_stream to yield the complete message directly

Co-authored-by: aider (openai/gemini-2.5-flash-preview) <aider@aider.chat>

* feat(core): add the streaming_fallback option, allowing streaming requests with non-streaming output

Added the streaming_fallback config, which defaults to false. Added a field to PlatformMetadata indicating whether true streaming output is supported, and a check in LLMRequest for whether the fallback is enabled.

#3431 #2793 #3014

* refactor(core): move stream_to_general out of toolLoopAgentRunner

* refactor(core.platform): rename the attribute in metadata

* fix: update streaming provider settings descriptions and add conditions

* fix: update streaming configuration to use unsupported_streaming_strategy and adjust related logic

* fix: remove support_streaming_message flag from WecomAIBotAdapter registration

* fix: update hint for non-streaming platform handling in configuration

* fix(core.pipeline): Update astrbot/core/pipeline/process_stage/method/llm_request.py

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* fix(core.pipeline): Update astrbot/core/pipeline/process_stage/method/llm_request.py

---------

Co-authored-by: aider (openai/gemini-2.5-flash-preview) <aider@aider.chat>
Co-authored-by: Soulter <37870767+Soulter@users.noreply.github.com>
Co-authored-by: Soulter <905617992@qq.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-12 18:01:20 +08:00
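The fallback described above keeps the LLM request streaming but collapses the chunks into one message when the platform cannot render a true stream (the later commits note the option was renamed to `unsupported_streaming_strategy`). A simplified sketch of that strategy with invented names standing in for the real config and runner:

```python
import asyncio
from collections.abc import AsyncIterator


async def fake_llm_stream() -> AsyncIterator[str]:
    for chunk in ["Hel", "lo, ", "world", "!"]:
        await asyncio.sleep(0.01)
        yield chunk


async def respond(platform_supports_streaming: bool, streaming_fallback: bool = True) -> None:
    if platform_supports_streaming:
        async for chunk in fake_llm_stream():
            print("send chunk:", chunk)
    elif streaming_fallback:
        # Still issue a streaming request, but buffer it into one non-streaming reply.
        full = "".join([chunk async for chunk in fake_llm_stream()])
        print("send complete message:", full)


asyncio.run(respond(platform_supports_streaming=False))
```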
RC-CHN
27af9ebb6b feat: changelog display improvement
* feat: add a modal for older version changelogs

* style: adjust the release notes dialog styling and remove the background color
2025-11-12 14:54:03 +08:00
Soulter
b360c8446e feat: add default model selection chip in provider model selector 2025-11-10 13:04:28 +08:00
Soulter
6d00717655 feat: add streaming support with toggle in chat interface and adjust layout for mobile 2025-11-09 21:57:30 +08:00
Soulter
bb5f06498e perf: refine login page 2025-11-09 20:57:45 +08:00
Dt8333
aca5743ab6 feat: add the missing send_streaming method to some adapters (#3545)
Adds the missing method to Wechatpadpro and discord.
2025-11-09 16:00:24 +08:00
Soulter
6903032f7e fix: improve knowledge base chip display with truncation and styling (#3582)
fixes: #3546
2025-11-09 15:30:41 +08:00
nazo
1ce0ff87bd feat: supports to add custom headers for openai providers (#3581)
* feat: support adding custom request headers for OpenAI-family providers

* chore: add custom headers and extra body to config for zhipu

---------

Co-authored-by: Soulter <37870767+Soulter@users.noreply.github.com>
2025-11-09 15:12:52 +08:00
Soulter
e39d6bae0b fix: update JSON submission link in plugin publish template 2025-11-09 15:06:40 +08:00
Raven95676
8028e9e9a6 chore: bump version to 4.5.6 2025-11-07 16:20:19 +08:00
Raven95676
817f20ea01 fix: pyproject 2025-11-07 16:18:42 +08:00
Raven95676
ad5579a2f4 chore: bump version to 4.5.5 2025-11-07 15:52:58 +08:00
Raven95676
81a689a79b fix: typo 2025-11-07 15:41:14 +08:00
Raven95676
1893dd8336 fix: dockerfile 2025-11-07 15:41:03 +08:00
Soulter
021ca8175b chore: bump version to 4.5.4 2025-11-07 14:28:51 +08:00
Soulter
39d6207fe1 chore: remove dynamic version 2025-11-07 14:26:56 +08:00
Soulter
23ce687229 chore: fix dockerfile 2025-11-07 14:23:49 +08:00
鸦羽
3715312fd2 fix: update project description to English (#3516) 2025-11-07 01:13:32 +08:00
Soulter
8196922cac docs: simplify README 2025-11-06 15:22:43 +08:00
Soulter
8089ad91da perf: improve extension market ui 2025-11-06 13:57:46 +08:00
Soulter
2930cc3fd8 chore: bump version to 4.5.3 2025-11-05 21:21:14 +08:00
Soulter
0e841a8b25 fix: correct tools dictionary comprehension in get_tool_list method 2025-11-05 21:19:10 +08:00
Soulter
67fa1611cc chore: bump version to 4.5.2 2025-11-05 19:02:51 +08:00
Soulter
91136bb9f7 fix: llm tool register error (#3493) 2025-11-05 14:27:37 +08:00
Copilot
7c050d1adc feat: add customizable sidebar module ordering (#3307)
* Initial plan

* Add sidebar customization feature with drag-and-drop support

Co-authored-by: Soulter <37870767+Soulter@users.noreply.github.com>

* Add dist/ to .gitignore to exclude build artifacts

Co-authored-by: Soulter <37870767+Soulter@users.noreply.github.com>

* Fix memory leak and improve code quality per code review

Co-authored-by: Soulter <37870767+Soulter@users.noreply.github.com>

* Fix i18n key format: use dot notation instead of colon notation

Co-authored-by: Soulter <37870767+Soulter@users.noreply.github.com>

* Fix drag-and-drop to empty list issue

Co-authored-by: Soulter <37870767+Soulter@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: Soulter <37870767+Soulter@users.noreply.github.com>
2025-11-04 23:59:45 +08:00
Misaka Mikoto
a0690a6afc feat: support options to delete plugins config and data (#3280)
* - Provide a consistent delete confirmation on the plugin management page (previously only the card view had one)
- The confirmation optionally deletes the plugin's config and persisted data
- Add the corresponding i18n support

* ruff

* Remove the unused
const $confirm = inject('$confirm');
2025-11-04 11:48:48 +08:00
Dt8333
c51609b261 fix: typing error (#3267)
* fix: fix some minor errors.

Fixes missing awaits in parts of the aiocqhttp and slack logic, and the wrong exception type being caught in discord.

* fix(core.platform): fix the incorrect message_chain assignment in the discord adapter

* fix(aiocqhttp): update the return type of convert_message to AstrBotMessage | None

---------

Co-authored-by: Soulter <905617992@qq.com>
2025-11-03 23:38:52 +08:00
Soulter
72148f66eb chore: nodejs in Dockerfile 2025-11-03 13:19:51 +08:00
Copilot
a04993a2bb Replace insecure random with secrets module in cryptographic contexts (#3248)
* Initial plan

* Security fixes: Replace insecure random with secrets module and improve SSL context

Co-authored-by: LIghtJUNction <106986785+LIghtJUNction@users.noreply.github.com>

* Address code review feedback: fix POST method and add named constants

Co-authored-by: LIghtJUNction <106986785+LIghtJUNction@users.noreply.github.com>

* Improve documentation for random number generation constants

Co-authored-by: LIghtJUNction <106986785+LIghtJUNction@users.noreply.github.com>

* Update astrbot/core/utils/io.py

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update astrbot/core/platform/sources/wecom_ai_bot/WXBizJsonMsgCrypt.py

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update tests/test_security_fixes.py

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update astrbot/core/utils/io.py

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update astrbot/core/utils/io.py

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Fix: Handle path parameter in SSL fallback for download_image_by_url

Co-authored-by: LIghtJUNction <106986785+LIghtJUNction@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: LIghtJUNction <106986785+LIghtJUNction@users.noreply.github.com>
Co-authored-by: LIghtJUNction <lightjunction.me@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-03 02:43:00 +08:00
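For the security fix above, the general pattern is switching token and nonce generation from the `random` module to `secrets`, which draws from the OS CSPRNG. A brief, general-purpose example (not the exact call sites touched in the PR):

```python
import secrets
import string

# random.choices() is predictable and unsuitable for security-sensitive values;
# the secrets module is designed for tokens, nonces, and keys.
ALPHABET = string.ascii_letters + string.digits

nonce = "".join(secrets.choice(ALPHABET) for _ in range(16))
token = secrets.token_hex(16)          # 32 hex characters
url_safe = secrets.token_urlsafe(24)   # URL-safe random string

print(nonce, token, url_safe)
```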
LIghtJUNction
74f845b06d Chore: Dockerfile (#3266)
* fix: Dockerfile

Change python main.py to uv run main.py

* fix(dockerfile): reduce duplicate installation

* fix: fix some minor details

* fix(.dockerignore): the git folder is needed to obtain the astrbot version (with the git commit hash suffix)

* fix(.dockerignore): uv run performs uv sync first
2025-11-03 02:41:40 +08:00
Soulter
50144ddcae refactor: revise LLM message schema and fix the reload logic when using dataclass-based LLM Tool registration (#3234)
* refactor: llm message schema

* feat: implement MCPTool and local LLM tools with enhanced context handling

* refactor: reorganize imports and enhance docstrings for clarity

* refactor: enhance ContentPart validation and add message pair handling in ConversationManager

* chore: ruff format

* refactor: remove debug print statement from payloads in ProviderOpenAIOfficial

* Update astrbot/core/agent/tool.py

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update astrbot/core/agent/message.py

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update astrbot/core/agent/message.py

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update astrbot/core/agent/tool.py

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update astrbot/core/pipeline/process_stage/method/llm_request.py

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update astrbot/core/agent/message.py

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* refactor: enhance documentation and import mcp in tool.py; update call method return type

* fix: fix the plugin reload mechanism when registering tools as dataclasses

* refactor: change role attributes to use Literal types for message segments

* fix: add support for 'decorator_handler' method in call_local_llm_tool

* fix: handle None prompt in text_chat method and ensure context is properly formatted

---------

Co-authored-by: LIghtJUNction <lightjunction.me@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-02 18:12:20 +08:00
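One of the refactors listed above constrains message roles with `Literal` types so invalid roles are caught by the type checker. A tiny sketch of the idea with illustrative field names, not the actual message segment classes:

```python
from dataclasses import dataclass
from typing import Literal

Role = Literal["system", "user", "assistant", "tool"]


@dataclass
class MessageSegment:
    role: Role
    content: str | None = None


print(MessageSegment(role="assistant", content="hi"))
# A type checker flags MessageSegment(role="bot", ...) because "bot" is not a valid Role.
```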
Copilot
94bf3b8195 Fix incorrect type annotations and errors (#3250)
* Initial plan

* Fix type annotation errors in cmd_conf, cmd_init, and version_comparator

Co-authored-by: LIghtJUNction <106986785+LIghtJUNction@users.noreply.github.com>

* Changes before error encountered

Co-authored-by: LIghtJUNction <106986785+LIghtJUNction@users.noreply.github.com>

* Fix more type annotation errors: change `= None` to `| None = None`

Co-authored-by: LIghtJUNction <106986785+LIghtJUNction@users.noreply.github.com>

* Fix final batch of type annotation errors

Co-authored-by: LIghtJUNction <106986785+LIghtJUNction@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: LIghtJUNction <106986785+LIghtJUNction@users.noreply.github.com>
Co-authored-by: LIghtJUNction <lightjunction.me@gmail.com>
2025-11-02 17:02:56 +08:00
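The core change in this PR is replacing implicit-Optional defaults with explicit unions. A before/after snippet of the pattern (a generic example, not the actual cmd_conf/cmd_init signatures):

```python
# Before: the annotation claims `str`, but the default is None, so type checkers complain.
def load_config_bad(path: str = None):  # type: ignore[assignment]
    return path or "config.yaml"


# After: the union makes the optional default explicit and type-safe.
def load_config(path: str | None = None) -> str:
    return path or "config.yaml"


print(load_config(), load_config("custom.yaml"))
```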
Copilot
e190bbeeed Optimize string concatenation in loops: replace += with list.join() (#3246)
* Initial plan

* Fix string concatenation performance issues in loops

Co-authored-by: LIghtJUNction <106986785+LIghtJUNction@users.noreply.github.com>

* Address code review feedback: Fix plugin list logic and add comment

Co-authored-by: LIghtJUNction <106986785+LIghtJUNction@users.noreply.github.com>

* Improve comment clarity for at_parts accumulation

Co-authored-by: LIghtJUNction <106986785+LIghtJUNction@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: LIghtJUNction <106986785+LIghtJUNction@users.noreply.github.com>
2025-11-02 13:00:59 +08:00
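The optimization above swaps repeated `+=` string building for accumulating parts and joining once, which avoids re-copying the growing string on every iteration. A minimal illustration:

```python
names = [f"plugin_{i}" for i in range(5)]

# Before: each += builds a brand-new string, roughly O(n^2) over the loop.
listing = ""
for name in names:
    listing += name + ", "

# After: collect the parts and join once at the end.
listing_fast = ", ".join(names)

print(listing.rstrip(", "))
print(listing_fast)
```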
Copilot
92abc43c9d Fix mutable default arguments in constructors and methods (#3247)
* Initial plan

* Fix mutable default arguments in constructors and methods

Co-authored-by: LIghtJUNction <106986785+LIghtJUNction@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: LIghtJUNction <106986785+LIghtJUNction@users.noreply.github.com>
2025-11-02 12:57:37 +08:00
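The fix above targets the classic mutable-default pitfall: a default list or dict is created once at function definition time and shared across calls. The standard remedy, shown generically:

```python
# Buggy pattern: every call without `tags` shares the same list object.
def register_bad(name: str, tags: list[str] = []) -> list[str]:
    tags.append(name)
    return tags


# Fix: default to None and create a fresh list inside the function.
def register(name: str, tags: list[str] | None = None) -> list[str]:
    tags = [] if tags is None else tags
    tags.append(name)
    return tags


print(register_bad("a"), register_bad("b"))  # ['a', 'b'] ['a', 'b'] -- shared state leaks
print(register("a"), register("b"))          # ['a'] ['b']
```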
Copilot
c8e34ff26f [WIP] Translate mixed English comments to Chinese (#3256)
* Initial plan

* Changes before error encountered

Co-authored-by: LIghtJUNction <106986785+LIghtJUNction@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: LIghtJUNction <106986785+LIghtJUNction@users.noreply.github.com>
2025-11-02 12:52:46 +08:00
Soulter
630df3e76e refactor: reorganize ComponentType definitions and remove unused classes 2025-11-01 23:18:40 +08:00
Raven95676
bdbf382201 chore: remove astrbot.lock 2025-11-01 17:43:54 +08:00
Raven95676
00eefc82db chore(.gitignore): update ignore rule 2025-11-01 17:41:02 +08:00
LIghtJUNction
dc97080837 Update .gitignore 2025-11-01 17:37:57 +08:00
LIghtJUNction
0b7fc29ac4 style: add ruff lint module of isort and pyupgrade, and some ruff check fix (#3214)
Co-authored-by: Dt8333 <25431943+Dt8333@users.noreply.github.com>
Co-authored-by: Soulter <905617992@qq.com>
2025-11-01 13:26:19 +08:00
Soulter
ff998fdd8d chore: bump version to 4.5.1 2025-10-31 23:55:40 +08:00
LIghtJUNction
d7461ed54c fix(helper.py): fix the migration logic so it no longer misjudges (#3215)
* fix(helper.py): fix the migration logic so it no longer misjudges

* fix(helper.py): no data_v3 dir
2025-10-31 23:37:37 +08:00
Soulter
3ce577acf9 docs: enhance bug report template with clarity on details
Updated bug report template to emphasize the need for detailed logs and information.
2025-10-31 23:18:15 +08:00
Chris
50b1dccff3 feat: support xAI Grok Live Search config (#3203)
* Add xai_native_search configuration option

* Implement xAI compatibility and search injection

Add support for xAI integration with search parameters injection.

* Refactor xAI handling in openai_source.py

Removed the _is_xai method and updated xAI search injection logic.

* Fix formatting of condition in default.py

* Fix formatting in openai_source.py
2025-10-31 21:48:45 +08:00
Dt8333
c33e7e30d4 chore(requirements): Sync dependencies from pyproject to requirements.txt (#3208)
* chore(requirements): sync dependencies from pyproject to requirements.txt

* chore(requirements): add missing dependencies
2025-10-31 15:27:16 +08:00
RC-CHN
bc7f01ba36 feat: add Xinference STT provider (#3197)
* feat: add Xinference STT provider

* chore:update comment in xinference_stt_provider

* style: ruff format xinference_stt_provider

* chore: remove unused import of base64 in xinference_stt_provider

* fix: enhance model initialization check in get_text method

---------

Co-authored-by: Soulter <905617992@qq.com>
2025-10-31 01:49:35 +08:00
再吃颗电池吧
2ce653caad perf: modify the at logic in the DingTalk adapter (#3186)
* feat: initial commit

* fix: Modify the At logic in the DingTalk adapter.

* del uv.lock

* Add a check for empty at_users

* Optimize DingTalk @ handling so the bot's is_in_at_list no longer needs to be checked repeatedly

* fix: refine handling of mentioned users in group messages

---------

Co-authored-by: linyiming <linyiming@example.com>
Co-authored-by: Soulter <905617992@qq.com>
2025-10-30 14:15:01 +08:00
Soulter
0d850d7b22 fix: refine docstring for add_llm_tools method in Context class 2025-10-29 20:16:27 +08:00
Soulter
a2be155b8e feat: add method to register LLM tools in Context class 2025-10-29 20:13:15 +08:00
Soulter
68aa107689 docs: update readme 2025-10-29 13:58:58 +08:00
Soulter
23096ed3a5 perf: update extension card page style, add config and view-docs button 2025-10-29 00:38:04 +08:00
RC-CHN
90a65c35c1 feat: add Xinference rerank provider (#3162)
* feat:add Xinference rerank provider

* feat:add default rerank_api_key option for Xinference provider

* style: format code

* fix: refactor XinferenceRerankProvider initialization for better error handling

* fix: update XinferenceRerankProvider to use async client methods for initialization and reranking

* feat: add launch_model_if_not_running option to XinferenceRerankProvider for better control over model initialization

* chore: remove unused asyncio import from xinference_rerank_source.py
2025-10-28 18:23:55 +08:00
a490077
3d88827a95 fix: qq_official_webhook is_sandbox field error (#3167)
* Add a sandbox mode option to the official QQ bot so local deployments can skip IP whitelist verification

* chore: ruff format

* Fix the sandbox config check to compare as a string

* Since the config type is a string, fix the check to compare strings

* chore: ruff format

* fix: update is_sandbox configuration to use boolean type

---------

Co-authored-by: 郭鹏 <gp@pp052.top>
Co-authored-by: Soulter <905617992@qq.com>
Co-authored-by: Dt8333 <lb0016@foxmail.com>
2025-10-28 10:15:46 +08:00
Futureppo
40a0a8df5a perf: improve the success message when switching models with /model (#3161) 2025-10-28 09:05:42 +08:00
dependabot[bot]
20f7129c0b chore(deps): bump actions/upload-artifact in the github-actions group (#3178)
Bumps the github-actions group with 1 update: [actions/upload-artifact](https://github.com/actions/upload-artifact).


Updates `actions/upload-artifact` from 4 to 5
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](https://github.com/actions/upload-artifact/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: github-actions
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-28 08:53:28 +08:00
Soulter
0e962e95dd docs: update plugin information template in YAML 2025-10-27 14:26:59 +08:00
Soulter
07ba9c772c chore: bump version to 4.5.0 2025-10-26 21:40:11 +08:00
Soulter
0622d88b22 fix: revert 3106 (#3153)
* fix: revert 3106

Co-authored-by: Dt8333 <25431943+Dt8333@users.noreply.github.com>
Co-authored-by: LIghtJUNction <lightjunction.me@gmail.com>
Co-authored-by: exynos <110159911+xiaoxi68@users.noreply.github.com>

* Update astrbot/dashboard/routes/update.py

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>

* fix: remove unnecessary version file handling in download_dashboard function

* fix: revert

---------

Co-authored-by: Dt8333 <25431943+Dt8333@users.noreply.github.com>
Co-authored-by: LIghtJUNction <lightjunction.me@gmail.com>
Co-authored-by: exynos <110159911+xiaoxi68@users.noreply.github.com>
Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>
2025-10-26 21:26:48 +08:00
Soulter
594f0fed55 style: adjust padding for card text in ExtensionPage for improved layout 2025-10-26 21:19:07 +08:00
Soulter
04b0d9b88d Merge pull request #3155 from AstrBotDevs/feat/plugin-display-name-and-logo
feat: add support for plugin display name and logo, and some extension card style fix
2025-10-26 20:54:24 +08:00
Soulter
1f2af8ef94 Update dashboard/src/components/shared/ExtensionCard.vue
Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>
2025-10-26 20:52:37 +08:00
Soulter
598ea2d857 refactor: update ExtensionCard styling and improve layout for better responsiveness 2025-10-26 20:49:27 +08:00
Soulter
6dd9bbb516 feat: enhance plugin metadata with display name and logo support 2025-10-26 20:30:54 +08:00
Soulter
3cd0b47dc6 feat: add GitHub link button to ExtensionCard for extensions with a repository 2025-10-26 19:41:00 +08:00
Soulter
65c71b5f20 refactor: remove Google search engine integration from main module and dependencies (#3154) 2025-10-26 18:54:01 +08:00
exynos
1152b11202 feat(thinking_filter): adapt thinking segment filtering for third-party Gemini (#3139)
* feat(thinking_filter): adapt thinking segment filtering for third-party Gemini

* feat(thinking_filter): refactor Gemini thinking filtering, serialization fallback, and whitespace cleanup

* Format with ruff and fix blank lines between imports
2025-10-26 18:43:58 +08:00
Soulter
51246ea31b fix: apply configuration option to enable/disable WebUI in AstrBotDashboard (#3152) 2025-10-26 17:29:04 +08:00
Soulter
7e5592dd32 fix: comment out existing configuration preview section in AddNewPlatform component 2025-10-26 17:07:04 +08:00
Soulter
c6b28caebf Merge pull request #3151 from AstrBotDevs/feat/platform-abconf-interaction
feat: enhance AddNewPlatform and ConfigPage components with improved configuration management and UI interactions
2025-10-26 17:04:34 +08:00
Soulter
ca002f6fff feat: enhance AddNewPlatform dialog with scroll functionality and toggle for configuration section 2025-10-26 17:03:07 +08:00
Soulter
14ec392091 fix: update message styling in AddNewPlatform component for better visibility 2025-10-26 17:00:36 +08:00
Soulter
5e2eb91ac0 feat: enhance AddNewPlatform and ConfigPage components with improved configuration management and UI interactions 2025-10-26 16:57:01 +08:00
Soulter
c1626613ce fix: update repository references from Soulter/AstrBot to AstrBotDevs/AstrBot across documentation and codebase (#3150)
* fix: update repository references from Soulter/AstrBot to AstrBotDevs/AstrBot across documentation and codebase

- Updated README_ja.md to reflect new GitHub repository links.
- Modified AstrBotUpdator to download from the new repository.
- Changed download URLs in io.py for dashboard releases.
- Updated changelogs to point to the new issue links.
- Adjusted Docker compose file to reference the new repository.
- Updated Vue components in the dashboard to link to the new repository.
- Changed main.py to provide the correct download instructions for the new repository.

* fix: improve error handling for configId selection in AddNewPlatform component

* Update astrbot/core/utils/io.py

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>

---------

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>
2025-10-26 16:17:24 +08:00
LIghtJUNction
42042d9e73 Merge branch 'master' of https://github.com/AstrBotDevs/AstrBot 2025-10-26 15:41:36 +08:00
LIghtJUNction
22c3b53ab8 fix(io.py): path改回传入文件地址,而不是传入文件夹地址 2025-10-26 15:41:20 +08:00
Soulter
090c32c90e feat: enhance AddNewPlatform dialog with data preparation on enter and improve code formatting 2025-10-26 15:40:15 +08:00
LIghtJUNction
4f4a9b9e55 fix(io.py): download_dashboard如果发现没有dist/assets/version文件,下载完毕自动写入(以防万一) 2025-10-26 15:35:25 +08:00
Soulter
6c7d7c9015 Merge pull request #3147 from AstrBotDevs/feat/kb-markitdown
feat: refactor knowledge base parsers and add MarkitdownParser for docx, xls, xlsx support
2025-10-26 13:18:52 +08:00
Soulter
562e62a8c0 feat: add new dependencies for PDF processing, file handling, and text ranking 2025-10-26 13:02:32 +08:00
Soulter
0823f7aa48 Use a set when checking membership in a literal collection
Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>
2025-10-25 22:04:17 +08:00
Soulter
eb201c0420 feat: refactor knowledge base parsers and add MarkitdownParser for docx, xls, xlsx support 2025-10-25 22:00:54 +08:00
Soulter
6cfed9a39d Merge pull request #3143 from lxfight/feature/knowledge-base 2025-10-25 18:19:15 +08:00
Soulter
33618c4a6b feat: add dynamic embedding dimension retrieval for providers and enhance error handling 2025-10-25 16:39:11 +08:00
LI SONGSONG 🍂
ace0a7c219 docs: update link and description 2025-10-25 16:07:27 +08:00
Soulter
f7d018cf94 feat: add pre-checks for embedding and rerank providers in KnowledgeBaseRoute 2025-10-25 15:22:35 +08:00
Soulter
8ae2a556e4 feat: remove tips from knowledge base creation form and add persistent hints for field modifications 2025-10-25 15:06:07 +08:00
lxfight
4188deb386 fix: simplify the log error message format 2025-10-25 14:13:23 +08:00
lxfight
82cf4ed909 fix: format code with ruff 2025-10-25 14:10:26 +08:00
lxfight
88fc437abc feat: improve the knowledge base selection UI and add custom scrollbar styling 2025-10-25 13:59:09 +08:00
lxfight
57f868cab1 Merge branch 'feature/knowledge-base' of https://github.com/lxfight/AstrBot into feature/knowledge-base 2025-10-25 13:53:03 +08:00
lxfight
6cb5527894 feat: add API endpoints for session knowledge base configuration (get, set, and delete), and improve the knowledge base selection UI 2025-10-25 13:52:57 +08:00
Soulter
016783a1e5 feat: implement RecursiveCharacterChunker and update KnowledgeBaseManager to use it 2025-10-25 13:46:06 +08:00
lxfight
594ccff9c8 fix: add database connection checks and knowledge base termination, strengthen error handling and cleanup logic, and fix knowledge bases that could not be deleted 2025-10-25 11:56:37 +08:00
Soulter
30792f0584 Merge pull request #114 from lxfight/lwl-dev/knowledge-base
refactor: knowledge base improvements
2025-10-25 00:42:16 +08:00
Soulter
8f021eb35a feat: refactor document storage to use SQLModel and enhance database operations 2025-10-24 23:17:37 +08:00
Soulter
1969abc340 feat: add route for legacy knowledge base and update UI with banner suggestion 2025-10-24 22:01:55 +08:00
Soulter
b1b53ab983 Merge remote-tracking branch 'origin/master' into lwl-dev/knowledge-base 2025-10-24 21:48:47 +08:00
Soulter
9b5af23982 feat: remove beta label from knowledge base navigation and adjust margin in KBList component 2025-10-24 21:46:53 +08:00
Soulter
4cedc6d3c8 feat: add t-SNE visualization for FAISS index and enhance knowledge base retrieval with debug mode 2025-10-24 21:22:46 +08:00
Soulter
4e9cce76da feat: add timing logs for dense and sparse retrieval processes and adjust top K results in sparse retriever 2025-10-24 17:51:30 +08:00
Soulter
9b004f3d2f feat: update document retrieval to include limit and offset parameters 2025-10-24 17:38:22 +08:00
Soulter
9430e3090d feat: add progress callback for document upload and enhance upload progress tracking 2025-10-24 17:13:44 +08:00
Soulter
ba44f9117b feat: enhance document upload process with batch settings and improved chunk handling 2025-10-24 16:37:37 +08:00
Soulter
eb56710a72 feat: add chunk size, overlap, and top K parameters to knowledge base response 2025-10-24 15:10:47 +08:00
Soulter
38e3f27899 feat: update knowledge base retrieval configuration and UI adjustments 2025-10-24 15:06:07 +08:00
Soulter
3c58d96db5 feat: add configuration for final knowledge base retrieval count and update related components 2025-10-24 14:45:07 +08:00
Soulter
a6be0cc135 feat: refresh knowledge base and document after uploading a document 2025-10-24 14:28:27 +08:00
Soulter
a53510bc41 refactor: comment out file path handling in KBHelper and search input in DocumentDetail 2025-10-24 14:27:01 +08:00
Soulter
1fd482e899 feat: update chunk deletion to include document ID and refresh metadata 2025-10-24 14:18:32 +08:00
Soulter
2f130ba009 feat: delete chunk and delete document 2025-10-24 13:59:17 +08:00
Soulter
e6d9db9395 feat: disable embedding provider selection in settings tab 2025-10-24 12:53:59 +08:00
Soulter
e0ac743cdb perf: remove rerank functionality from settings tab and related form data 2025-10-24 12:13:51 +08:00
Soulter
b0d3fc11f0 feat: remove sessions tab and related components from knowledge base detail view 2025-10-24 00:48:17 +08:00
Soulter
7e0a50fbf2 feat: enhance knowledge base retrieval with chunk metadata and pagination support; remove unused chunk model 2025-10-24 00:44:40 +08:00
Soulter
59df244173 improve 2025-10-23 21:20:41 +08:00
Soulter
deb31a02cf docs: Update badge links in README.md 2025-10-23 09:53:54 +08:00
Soulter
e3aa1315ae stage 2025-10-23 00:31:15 +08:00
Soulter
65bc5efa19 feat: integrate the knowledge base manager, streamline knowledge base context injection, and remove redundant code 2025-10-22 21:59:00 +08:00
Dt8333
abc4bc24b4 fix(dashboard): webchat input textarea is disabled when session controller is active
Removed the disable attribute of Input in isConvRunning. Added an activeSSE counter to correctly determine the current session state and prevent new input from causing interface display errors during session_waiter execution. Set isStreaming after streaming input ends to restore the text box.

#3037 #2892
2025-10-22 20:32:40 +08:00
Soulter
5df3f06f83 fix: persona information is not appearing in the PersonaForm when editing 2025-10-22 17:09:21 +08:00
Soulter
0e1de82bd7 fix: correct indentation in pre-commit config for pyupgrade hook 2025-10-22 17:08:54 +08:00
Soulter
f31e41b3f1 docs: update readme 2025-10-22 13:10:44 +08:00
Soulter
61a68477d0 stage 2025-10-21 14:19:38 +08:00
LIghtJUNction
fe8d2718c4 Add a pyupgrade hook
Unify code style
2025-10-21 11:17:20 +08:00
Soulter
8afefada0a fix: image_caption btn 2025-10-21 11:07:39 +08:00
LIghtJUNction
745e1c37c0 Add ruff-check hook to pre-commit config
Follow the official recommendation
2025-10-21 11:07:00 +08:00
LIghtJUNction
fdb5988cec Update .pre-commit-config.yaml 2025-10-21 11:02:30 +08:00
Soulter
36ffcf3cc3 fix: typing error 2025-10-21 10:56:44 +08:00
Soulter
e74f626383 stage 2025-10-21 09:55:14 +08:00
Soulter
ef99f64291 feat(config): add an agent runner type and related configuration support 2025-10-21 00:47:04 +08:00
Soulter
a0f8f3ae32 style: ruff format 2025-10-21 00:21:42 +08:00
Soulter
130f52f315 chore(monaco-editor): bump monaco-editor version to 0.54.0 2025-10-21 00:18:29 +08:00
lxfight
a05868cc45 feat: update the knowledge base manager to support rerank model providers and adjust default configuration and hints in related components 2025-10-20 22:38:06 +08:00
lxfight
2fc77aed15 feat: add knowledge base retrieval and support listing related sessions by knowledge base ID; update related UI and i18n text 2025-10-20 22:23:35 +08:00
lxfight
c56edb4da6 feat: add knowledge base configuration, supporting knowledge base selection and settings in session management 2025-10-20 21:46:39 +08:00
Soulter
6672190760 feat: add star count display and fetch functionality in sidebar 2025-10-20 18:19:21 +08:00
exynos
f122b17097 fix(update): stop comparing the WebUI version against the core version to eliminate the false "a new WebUI version is available!" notice (#3106)
* fix(update): stop comparing the WebUI version against the core version to eliminate the false "new WebUI version" notice

No longer compares dv against the core version

* fix(update): keep the dv logic and add an installed flag to avoid false positives

Adds an installation-status boolean while preserving the "does dv exist" information

* Fix dashboard version update check logic

---------

Co-authored-by: LIghtJUNction <lightjunction.me@gmail.com>
2025-10-20 16:15:42 +08:00
Soulter
2c5f68e696 refactor: rework the platform creation flow plus some UI improvements (#3102)
* refactor: support selecting a configuration file directly on the platform

* add webchat

* feat: support previewing, creating, and editing configuration files on the spot when adding a new platform

* fix: update configuration file descriptions and visibility based on updating mode

* perf: use incremental decoder

* perf: update descriptions

* fix: UI update issues in config file dialog

* fix: update UI elements for better readability and organization

* feat: enhance sidebar navigation with group feature and dynamic resizing

Co-authored-by:  IGCrystal <3811541171@qq.com>

* refactor: persona selector

* perf: adjust some default behaviors

* fix: adjust ExtensionCard layout and improve responsiveness

* refactor: rework "configuration file binds to a message platform" into "message platform binds to a configuration file"

* style: add custom styling for v-select selection text

* fix: correct subtitle text in provider.json

* refactor: update conversation management terminology and improve session ID handling

* refactor: add Conversation ID localization and update table header reference

* Update astrbot/core/db/migration/migra_45_to_46.py

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* style: format logger warning for better readability

* refactor: comment out WebChat configuration for future reference

---------

Co-authored-by: IGCrystal <3811541171@qq.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-20 12:01:06 +08:00
MoonShadow1976
e1ca645a32 feat: strengthen tool-call argument handling (#3036)
* feat: strengthen tool-call argument handling

Adds argument filtering to tool calls so that only the parameters the function actually needs are passed
Resolves: https://github.com/AstrBotDevs/AstrBot/issues/2988

* feat: use the existing tool definition to handle unexpected arguments

Handles unexpected arguments from the existing tool definition instead of using the `inspect` library (a minimal sketch follows this entry)

* ruff format for code

Result of the merge:
The extra arguments are removed to avoid errors, and the code executor works correctly.
2025-10-20 02:51:16 +08:00
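The filtering idea above can be illustrated with a minimal, hypothetical sketch: drop any arguments that the tool's parameter schema does not declare before invoking the handler. The schema layout and the `filter_tool_args` helper are assumptions for illustration, not AstrBot's actual tool API.

```python
# Hypothetical sketch: drop arguments the tool schema does not declare,
# so the underlying handler never receives unexpected keyword arguments.

def filter_tool_args(tool_def: dict, raw_args: dict) -> dict:
    """Keep only the arguments declared in the tool's parameter schema."""
    declared = tool_def.get("parameters", {}).get("properties", {})
    return {k: v for k, v in raw_args.items() if k in declared}


if __name__ == "__main__":
    tool = {
        "name": "run_code",
        "parameters": {
            "type": "object",
            "properties": {"code": {"type": "string"}},
        },
    }
    # The LLM sometimes invents extra arguments such as "language".
    args = {"code": "print(1)", "language": "python"}
    print(filter_tool_args(tool, args))  # {'code': 'print(1)'}
```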
lxfight
333bf56ddc feat: render statistics on knowledge base cards. 2025-10-19 22:40:01 +08:00
lxfight
b240594859 feat: add the beta knowledge base manager frontend page and related i18n content. 2025-10-19 21:55:21 +08:00
lxfight
beccae933f fix: fix the KBSessionConfig import issue 2025-10-19 21:36:01 +08:00
lxfight
e6aa1d2c54 feat: remove the v2 knowledge base frontend code and related i18n files 2025-10-19 21:16:00 +08:00
magisk317
5e808bab65 fix(platform): prevent 'NoneType' object is not iterable in _outline_chain and set_result (#3103)
Guard against cases where message chain is None during pipeline execution. This change enhances error-resilience for logging and processing message chains.

- Updated AstrMessageEvent._outline_chain to return an empty string when input chain is None
- Updated AstrMessageEvent.set_result to ensure result.chain is always at least an empty list

This prevents TypeError when result.chain or chain is unexpectedly None, improving pipeline stability when handling external plugins or corner cases.

Co-authored-by: engine-labs-app[bot] <140088366+engine-labs-app[bot]@users.noreply.github.com>
Co-authored-by: cto-new[bot] <140088366+cto-new[bot]@users.noreply.github.com>
2025-10-19 20:16:14 +08:00
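A hedged sketch of the two guards this commit describes, using simplified stand-in classes rather than the real AstrMessageEvent and result types:

```python
# Simplified stand-ins for the guards described above; the real event class
# in AstrBot carries far more state than this sketch.
from dataclasses import dataclass


@dataclass
class FakeResult:
    chain: list | None = None


class FakeEvent:
    def _outline_chain(self, chain: list | None) -> str:
        # Return an empty outline instead of iterating over None.
        if not chain:
            return ""
        return " ".join(str(seg) for seg in chain)

    def set_result(self, result: FakeResult) -> FakeResult:
        # Ensure downstream stages always see at least an empty list.
        if result.chain is None:
            result.chain = []
        return result


event = FakeEvent()
print(repr(event._outline_chain(None)))      # ''
print(event.set_result(FakeResult()).chain)  # []
```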
Dt8333
361d78247b fix(core): fix duplicate injection of the persona preset dialogue (#3088)
Back up the Context so that provider adapters stripping fields from it cannot cause the preset dialogue to be written into history. Deep-copy the persona preset dialogue to prevent accidental modification at runtime (a minimal sketch follows this entry).

#3063
2025-10-19 20:13:57 +08:00
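A minimal sketch of the deep-copy idea, assuming a simple list-of-dicts message format; the function name and structure are illustrative only:

```python
# Illustrative only: deep-copy the persona preset before handing it to a
# provider adapter, so mutations by the adapter cannot leak back into the
# stored preset or be re-injected into history.
import copy

persona_preset = [
    {"role": "user", "content": "You are a helpful cat."},
    {"role": "assistant", "content": "Meow, understood."},
]


def build_request_context(history: list[dict]) -> list[dict]:
    # Work on copies; the original preset stays pristine across requests.
    return copy.deepcopy(persona_preset) + list(history)


ctx = build_request_context([{"role": "user", "content": "hi"}])
ctx[0]["content"] = "mutated by an adapter"
print(persona_preset[0]["content"])  # still "You are a helpful cat."
```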
a490077
3550103e45 feat: add a sandbox mode option to the official QQ bot so local deployments can skip IP allowlist verification (#3087)
* Add a sandbox mode option to the official QQ bot so local deployments can skip IP allowlist verification

* chore: ruff format

---------

Co-authored-by: 郭鹏 <gp@pp052.top>
Co-authored-by: Soulter <905617992@qq.com>
2025-10-19 20:09:08 +08:00
PaloMiku
8b0d4d4de4 feat: improve notification and chat message handling in the Misskey adapter and refine @user mention logic (#3075) 2025-10-19 20:05:55 +08:00
shangxue
dc71c04b67 feat(satori): add support for merged-forward messages (#3050)
* Update satori_event.py

* Update satori_event.py

* Update satori_event.py

* Update satori_adapter.py

* style: format code for better readability in satori_adapter.py and satori_event.py

---------

Co-authored-by: Soulter <905617992@qq.com>
2025-10-19 20:05:03 +08:00
lxfight
a0254ed817 refactor: tidy code formatting in the knowledge base manager and database operations 2025-10-19 19:36:26 +08:00
lxfight
2563ecf3c5 feat: implement knowledge base frontend components and routes
- Implement the Knowledge Base V2 main page and 4 sub-panel components
- Document management panel: upload, delete, and view document chunks
- Retrieval test panel: test knowledge base retrieval quality
- Global settings panel: configure the embedding model, reranking, and retrieval parameters
- Session configuration panel: manage bindings between sessions and knowledge bases
- Restructure the Alkaid route into a nested structure and add the Knowledge Base V2 route
- Register Knowledge Base V2 multi-language support in the translation system
- Redirect to the native knowledge base page by default when entering Alkaid
2025-10-19 18:43:58 +08:00
lxfight
c04738d9fe feat: implement the knowledge base frontend UI (English i18n)
- Add the complete English translation file for Knowledge Base V2
- Covers: home page, document management, retrieval test, global settings, session configuration
- Add a "Native Knowledge Base" entry to the Alkaid navigation
- Distinguish "Native Knowledge Base" from "Knowledge Base (Plugin)"
2025-10-19 18:43:35 +08:00
lxfight
1266b4d086 feat: implement the knowledge base frontend UI (Chinese i18n)
- Add the complete Chinese translation file for Knowledge Base V2
- Covers: home page, document management, retrieval test, global settings, session configuration
- Add a "Native Knowledge Base" entry to the Alkaid navigation
- Distinguish the "Native Knowledge Base" and "Knowledge Base (Plugin)" entries
2025-10-19 18:42:43 +08:00
lxfight
99cf0a1522 feat: add knowledge base Dashboard API routes
- Implement knowledge base management APIs (create, delete, list, update)
- Implement document management APIs (upload, delete, list, chunk info)
- Implement the knowledge base retrieval test API (for debugging and validation)
- Implement the session configuration API (bind/unbind knowledge bases, configure retrieval parameters)
- Implement the global configuration API (enable/disable, model selection, retrieval parameters)
- Register the knowledge base routes in the Dashboard server
2025-10-19 18:41:54 +08:00
lxfight
98a75e923d feat: integrate the knowledge base into the core lifecycle and message pipeline
- Initialize the knowledge base manager in AstrBotCoreLifecycle
- Add the knowledge base injector to the message processing context
- Add KBEnhanceStage (the knowledge base enhancement stage) to the message pipeline
- Implement cascading cleanup of knowledge base configuration when a session is deleted
- Add a callback registration mechanism to the session manager for zero-intrusion extension
2025-10-19 18:41:34 +08:00
lxfight
ad96d676e6 feat: implement the knowledge base core backend module
- Implement the full knowledge base data model (knowledge base, document, document chunk, session configuration)
- Implement SQLite-based vector database storage and retrieval
- Implement document parsers (PDF, TXT) and a fixed-size chunker
- Implement a hybrid retrieval system (dense vector retrieval + BM25 sparse retrieval + RRF fusion; see the sketch after this entry)
- Implement knowledge base lifecycle management and the message injector
- Support session-level knowledge base configuration and association
2025-10-19 18:40:55 +08:00
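A minimal sketch of Reciprocal Rank Fusion (RRF) as named in the bullet above; the document IDs and the k = 60 constant are illustrative defaults, not the project's actual retriever interface:

```python
# Merge a dense result list with a BM25 result list via RRF: each document
# scores 1 / (k + rank) per ranking, and the summed scores decide the order.

def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)


dense_hits = ["doc3", "doc1", "doc7"]   # from the vector index
sparse_hits = ["doc1", "doc9", "doc3"]  # from BM25
print(rrf_fuse([dense_hits, sparse_hits]))  # doc1 and doc3 rise to the top
```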
lxfight
79333bbc35 feat: add core knowledge base dependencies and configuration
- Add the pypdf, aiofiles, and rank-bm25 dependencies for document parsing and retrieval
- Add the full set of knowledge base configuration options in default.py
- Configuration covers the embedding model, reranking, storage path, chunking strategy, retrieval parameters, etc.
- The knowledge base is disabled by default and must be enabled by the user
2025-10-19 18:39:10 +08:00
Soulter
5c5b0f4fde fix: fix the misleading guidance shown when the knowledge base plugin is not installed 2025-10-18 10:36:11 +08:00
Dt8333
ed6cdfedbb fix: fix some dashboard build errors (#3041)
* chore(dashboard): adding missing dependency

* fix(dashboard): fix the $router type error in vertical-header
2025-10-16 10:32:08 +08:00
PaloMiku
23f13ef05f feat: the Misskey adapter supports file upload and poll-content awareness, with partial code refactoring (#2986)
* feat: fix several issues in the Misskey adapter and add support for reading poll information

* feat: enhance the Misskey platform adapter with a randomized reconnect delay and channel re-subscription

* feat: add file upload and improve the message-sending interface to support sending files and text together

* feat: enhance file upload with MIME type detection and external URL fallback

* feat: add a Misskey file-upload switch, with configuration for enabling uploads and limiting concurrency

* feat: add a target-folder setting for Misskey file uploads so files can be uploaded to a specified folder

* feat: improve the Misskey platform adapter, strengthening file upload and message sending and supporting more optional fields

* feat: improve code structure and functionality

* feat(misskey): strengthen message-sending logic and utility functions

- Refactored the `send` method in `misskey_event.py` to use the new adapter method `send_by_session`, improving message handling (including file uploads).
- Added detailed logging to make the message-sending process more traceable.
- Introduced the `FileIDExtractor` and `MessagePayloadBuilder` classes in `misskey_utils.py` to simplify file ID extraction and message payload construction.
- Implemented MIME type detection and file extension resolution in `misskey_utils.py` to support uploading various file types.
- Enhanced `resolve_component_url_or_path` to better handle uploads from different component types.
- Added retry logic to `upload_local_with_retries` to handle disallowed file types gracefully.

* feat(misskey): limit file-upload concurrency and refine message handling

* feat(misskey): ruff formatted

* feat: significantly improve the misskey file-upload logic, simplifying the upload flow and strengthening visibility resolution

* feat(misskey): remove the URL upload path and trim logging

* fix(misskey): fix URL files being mistakenly uploaded as local files and make the handling of URLs vs. local files explicit

* fix(misskey): fix session_id parsing so it matches the user_cache key format

* perf: streaming the file with a file object in FormData to reduce peak memory usage.

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* style: format debug log message for local file upload in MisskeyAPI

* refactor: remove unnecessary thread executor for reading file bytes in MisskeyAPI

---------

Co-authored-by: Soulter <37870767+Soulter@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Soulter <905617992@qq.com>
2025-10-16 10:27:04 +08:00
Soulter
f9c59d9706 docs: fix typo 2025-10-16 09:17:09 +08:00
Soulter
e1cec42227 chore: add Node.js setup step in CI workflow 2025-10-15 23:32:53 +08:00
Soulter
8d79c50d53 chore: update CI workflow to use pnpm for package management 2025-10-15 23:12:38 +08:00
Soulter
d77830b97f feat: add markdown-it type definitions as a dev dependency 2025-10-15 23:01:38 +08:00
Soulter
394540f689 docs: Update support status for various platforms 2025-10-15 18:48:25 +08:00
Soulter
7d776e0ce2 chore: bump version to 4.3.5 2025-10-15 12:19:26 +08:00
Soulter
17df1692b9 fix: fix /alter_cmd reset scene <num> xxx not working 2025-10-15 12:16:13 +08:00
Soulter
9ab652641d feat: support configuring a tool-call timeout and adapt to the ModelScope MCP Server configuration (#3039)
* feat: support configuring a tool-call timeout and adapt to the ModelScope MCP Server configuration (a minimal sketch follows this entry).

closes: #2939

* fix: Remove unnecessary blank lines in _quick_test_mcp_connection function
2025-10-15 12:06:57 +08:00
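A sketch of enforcing a configurable tool-call timeout with asyncio; `call_mcp_tool` and the timeout value are placeholders, not the project's real MCP client:

```python
# Wrap a (pretend) MCP tool call in asyncio.wait_for so a stuck server
# cannot block the agent loop indefinitely.
import asyncio


async def call_mcp_tool(name: str, args: dict) -> str:
    await asyncio.sleep(2)  # pretend this is a slow MCP server round-trip
    return f"{name} done"


async def call_with_timeout(name: str, args: dict, timeout: float = 30.0) -> str:
    try:
        return await asyncio.wait_for(call_mcp_tool(name, args), timeout=timeout)
    except asyncio.TimeoutError:
        return f"tool '{name}' timed out after {timeout}s"


print(asyncio.run(call_with_timeout("search", {}, timeout=1.0)))
```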
shangxue
9119f7166f feat: the satori adapter supports video and reply message types (#3035)
* Update satori_event.py

* style: format

---------

Co-authored-by: Soulter <905617992@qq.com>
2025-10-15 10:45:35 +08:00
Soulter
da7d9d8eb9 feat: Add tutorial link for wecom_ai_bot platform 2025-10-15 10:42:31 +08:00
Soulter
80fccc90b7 feat: support connecting to the WeCom intelligent bot platform (#3034)
* stage

* stage

* feat: support sending and receiving images

* feat: add support for wecom_ai_bot in getPlatformIcon function
2025-10-14 23:20:56 +08:00
Soulter
dcebc70f1a chore: Add new auto-assign users to configuration 2025-10-14 12:16:22 +08:00
dependabot[bot]
259e7bc322 chore(deps): bump github/codeql-action in the github-actions group (#3032)
Bumps the github-actions group with 1 update: [github/codeql-action](https://github.com/github/codeql-action).


Updates `github/codeql-action` from 3 to 4
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](https://github.com/github/codeql-action/compare/v3...v4)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-version: '4'
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: github-actions
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-14 09:35:57 +08:00
Soulter
37bdb6c6f6 feat: the built-in web search supports Baidu AI Search (#3031)
* feat: the built-in web search supports Baidu AI Search

* fix: correct a typo in the configuration file and update it to the correct key name

* Fix Baidu AI Search initialization logic
2025-10-14 09:35:34 +08:00
Soulter
dc71afdd3f docs: Revise README for clarity and updated support info
Updated README.md to improve clarity and fix formatting issues. Removed outdated developer group information and added support details for new platforms and services.
2025-10-14 09:13:54 +08:00
Soulter
44638108d0 docs: readme 2025-10-14 08:53:23 +08:00
RC-CHN
93fcac498c feat: add and refine standalone testing for service providers (#3024)
* feat: add and refine standalone testing for service providers

* feat: add small size to action buttons in ItemCard and ProviderPage for better UI consistency

---------

Co-authored-by: Soulter <905617992@qq.com>
2025-10-13 13:03:20 +08:00
Soulter
79e2743aac chore: bump version to 4.3.3 2025-10-12 11:42:18 +08:00
anka
5e9c7cdd91 fix: set the api key to an empty string when none is provided (#2834)
* fix: fix an empty key preventing the Provider object from being created

* style: format code

* Update astrbot/core/provider/provider.py

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>

---------

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>
Co-authored-by: Soulter <905617992@qq.com>
2025-10-12 10:50:01 +08:00
Dt8333
6f73e5087d feat(core): reuse the previous conversation's persona setting in a new conversation (#3005)
* feat(core): reuse persona conf in new conversation

#2985

* refactor(core): simplify persona retrieval logic

* style: code format

---------

Co-authored-by: Soulter <905617992@qq.com>
2025-10-12 10:42:35 +08:00
Yaron
8c120b020e fix: fix generation errors with CosyVoice V2 and Qwen TTS on the Aliyun Bailian TTS platform (#2964)
* fix: fixed generation errors with CosyVoice V2 and Qwen TTS. Fixed compatibility problems with CosyVoice V2 and Qwen TTS.

* fix: replace the synchronous urlopen request with an asynchronous aiohttp request for downloading audio

* fix: display CosyVoice errors properly

* fix: add a hint about obtaining the Aliyun Bailian TTS API Key

---------

Co-authored-by: Soulter <905617992@qq.com>
2025-10-12 01:03:06 +08:00
Dt8333
12fc6f9d38 fix(LTM): fix LTM not removed when removing conversation (#3002)
#2983
2025-10-12 00:16:42 +08:00
Dt8333
a6e8483b4c fix: fix the persona being incorrectly shown as the default persona in session-management (#3000)
* fix: fix the persona being incorrectly shown as the default persona in session-management

#2985

* refactor: simplify assignment and conditions with a named expression

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>

* style: format edited code with ruff

format code edited by sourcery-ai

---------

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>
2025-10-12 00:12:04 +08:00
Soulter
7191d28ada fix: At and Reply to the sender had no effect when TTS was enabled but no TTS model was configured
fixes: #2996
2025-10-10 12:11:03 +08:00
Soulter
e6b5e3d282 feat: tokenpony provider 2025-10-09 16:00:31 +08:00
ctrlkk
1413d6b5fe fix: break out of the loop when an event hook is paused instead of continuing execution (#2989) 2025-10-09 15:01:45 +08:00
ctrlkk
dcd8a1094c feat: tune SQLite parameters and add input debouncing to conversation and session management (#2969)
* feat: tune the SQLite database initialization settings and enhance session search; add input debouncing to session management (a sketch of the PRAGMA tuning follows this entry)

* fix: adjust SQLite cache and mmap size

---------

Co-authored-by: Soulter <905617992@qq.com>
2025-10-06 17:13:53 +08:00
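A hedged sketch of the kind of SQLite initialization tuning this commit refers to; the exact PRAGMA values here are illustrative, not the settings AstrBot ships:

```python
# Common SQLite PRAGMAs for a read-heavy bot database: WAL for concurrency,
# a larger page cache, and memory-mapped I/O for faster reads.
import sqlite3

conn = sqlite3.connect("demo.db")
conn.execute("PRAGMA journal_mode=WAL;")      # better read/write concurrency
conn.execute("PRAGMA synchronous=NORMAL;")    # fewer fsyncs while using WAL
conn.execute("PRAGMA cache_size=-20000;")     # ~20 MB page cache (negative = KiB)
conn.execute("PRAGMA mmap_size=134217728;")   # 128 MB memory-mapped I/O
print(conn.execute("PRAGMA journal_mode;").fetchone())
conn.close()
```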
Futureppo
e64b31b9ba fix: Correct default modalities for DeepSeek provider (#2963)
* Update package.json

* Update ExtensionPage.vue

* fix(provider): Correct default modalities for DeepSeek provider
2025-10-06 16:30:05 +08:00
Dt8333
080f347511 feat: clean browser cache after update (#2958)
* feat: clean browser cache after update

* fix: move const to module

* fix: remove self prefix (a stupid mistake)
2025-10-06 16:29:18 +08:00
Dt8333
eaaff4298d fix(Python-Interpreter): fix incorrect file read method (#2970)
fix getting file by property(Sync) in an async handler

#2960
2025-10-06 16:12:05 +08:00
Soulter
dd5a02e8ef chore: bump version to 4.3.2 2025-10-05 01:01:13 +08:00
Soulter
3211ec57ee fix: handle Google search initialization and errors gracefully 2025-10-05 00:55:47 +08:00
Soulter
6796afdaee fix: googlesearch 2025-10-05 00:54:24 +08:00
Soulter
cc6fe57773 fix: on_tool_end无法获得工具返回的结果 (#2956)
fixes: #2940
2025-10-05 00:37:51 +08:00
Soulter
1dfc831938 fix: fix reset not clearing group-chat context-awareness data (#2954) 2025-10-05 00:05:42 +08:00
Futureppo
cafeda4abf feat: add pinyin and initial-letter search to the plugin marketplace (#2936)
* Update package.json

* Update ExtensionPage.vue
2025-10-03 09:42:57 +08:00
Soulter
d951b99718 fix: remove message segments with empty Plain content during the send stage 2025-10-03 00:45:07 +08:00
Soulter
0ad87209e5 chore: bump version to 4.3.1 2025-10-02 17:25:09 +08:00
Soulter
1b50c5404d fix: enhance knowledge base plugin status check to handle empty data response 2025-10-02 17:25:00 +08:00
Soulter
3007f67cab fix: update Dockerfile to remove npm installation and streamline package setup
closes: #2284
2025-10-02 16:59:11 +08:00
Soulter
ee08659f01 chore: bump version to 4.3.0 2025-10-02 16:37:54 +08:00
Soulter
baf5ad0fab fix: fix an infinite tool-call loop after connecting the Zhipu provider and drop support for glm-4v-flash (#2931)
fixes: #2912
2025-10-02 16:03:24 +08:00
kterna
8bdd748aec feat: support registering logos for message platform adapters (#2109)
* feat: add platform adapter logo support

* Improve the platform logo registration logic, adding caching and parallel processing

* Remove the absolute-path check

---------

Co-authored-by: Soulter <905617992@qq.com>
2025-10-02 14:36:15 +08:00
Soulter
cef0c22f52 feat: update prompt prefix handling to support placeholder replacement 2025-10-02 14:20:52 +08:00
Soulter
13d3fc5cfe fix: fix type checking error and op, deop, wl, dwl command 2025-10-02 00:18:12 +08:00
Soulter
b91141e2be fix: add plugin activation check and corresponding messages in Knowledge Base 2025-10-01 22:14:03 +08:00
Soulter
f8a4b54165 fix: fix an exception when a plugin command annotation is a union type (#2925)
* fix: fix an exception when a plugin command annotation is a union type

* fix: fix parameter type checking to support typing.Union

* Update astrbot/core/star/filter/command.py

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update astrbot/core/star/filter/command.py

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* fix: fix the parameter type checking logic to support typing.Union (a minimal sketch follows this entry)

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-01 21:46:49 +08:00
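A hedged sketch of one way to accept Union-typed command parameters, trying each member type in order; it only illustrates `typing.get_origin` / `typing.get_args` usage and is not the project's actual CommandFilter logic:

```python
# Convert a raw string argument according to a possibly Union-typed
# annotation: try each member type until one conversion succeeds.
import typing


def convert_arg(raw: str, annotation) -> object:
    if typing.get_origin(annotation) is typing.Union:
        for member in typing.get_args(annotation):
            if member is type(None):
                continue  # skip NoneType in Optional[...] annotations
            try:
                return member(raw)
            except (TypeError, ValueError):
                continue
        return raw
    return annotation(raw)


print(convert_arg("42", typing.Union[int, str]))   # 42 (int)
print(convert_arg("abc", typing.Union[int, str]))  # 'abc' (falls back to str)
```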
Soulter
afe007ca0b refactor: restructure the packages/astrbot built-in plugin code for better maintainability and readability (#2924)
* refactor: code structure for improved readability and maintainability

* style: ruff format

* Update packages/astrbot/commands/provider.py

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>

* Update packages/astrbot/commands/persona.py

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>

* Update packages/astrbot/commands/llm.py

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>

* Update packages/astrbot/commands/conversation.py

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>

* fix: improve error handling message formatting in key switching

* fix: update LLM command to use safe get for provider settings

* feat: implement ProcessLLMRequest class for handling LLM requests and persona injection

---------

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>
2025-10-01 21:29:15 +08:00
Soulter
8a9a044f95 fix: fix Pyright type-checking warnings when registering command-group commands (#2923) 2025-10-01 20:03:04 +08:00
u0_ani-nya.com
5eaf03e227 perf: for Telegram group chats, treat replies to the bot as waking the bot (#2926)
* reply as at for tg

Add handling for bot replies in group messages.

* style: type checking and ruff format

---------

Co-authored-by: Soulter <905617992@qq.com>
2025-10-01 19:04:37 +08:00
Seayon
a8437d9331 feat: support pre-reacting before requesting the LLM on Telegram and Feishu (#2737)
*  feat(platform): add message emoji reactions for Telegram and Feishu

Adds an automatic emoji reaction when a command is received to improve the interaction experience
Adds platform-specific configuration options to customize enablement and the emoji list

* Update astrbot/core/platform/astr_message_event.py

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>

* style: ruff format

* fix: refine the pre-reaction emoji handling in platform-specific configuration

---------

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>
Co-authored-by: Soulter <905617992@qq.com>
2025-09-30 17:29:34 +08:00
晴空
e0392fa98b fix: replace the broken googlesearch-python library with mi-googlesearch-python (#2909)
* googlesearch-python no longer works; replace it with mi-googlesearch-python to restore Google search

* Update googlesearch-python dependency version
2025-09-29 12:54:16 +08:00
ctrlkk
68ff8951de feat: add pagination and search for fetching the session list and improve frontend-backend data exchange (#2906)
* feat: add pagination and search for fetching the session list and improve frontend-backend data exchange

* fix: fix the session count display by using the total item count instead of the session array length

* fix: match parameter types and names to the implementation.

* perf: convert for loop into list comprehension

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>

* fix: type checking error

* fix: improve the persona_id retrieval logic

---------

Co-authored-by: Soulter <37870767+Soulter@users.noreply.github.com>
Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>
Co-authored-by: Soulter <905617992@qq.com>
2025-09-28 23:25:30 +08:00
KroMiose
9c6b31e71c Update README.md (#2904) 2025-09-28 14:50:02 +08:00
Soulter
50f74f5ba2 fix: fix the "output both speech and text when TTS is enabled" feature not working (#2900)
fixes: #2844
2025-09-28 10:48:57 +08:00
Soulter
b9de2aef60 chore: bump version to 4.2.1 2025-09-27 23:36:25 +08:00
Soulter
7a47598538 fix: fix commands not working
fixes: #2897
2025-09-27 23:35:35 +08:00
Soulter
3c8c28ebd5 chore: bump version to 4.2.0 2025-09-27 20:45:50 +08:00
Soulter
524285f767 feat: add cancel button with localized text to AddNewPlatform and update close button in AddNewProvider
fixes: #2889
2025-09-27 20:41:45 +08:00
Soulter
c2a34475f1 feat: support deleting a specified session plus some session management improvements (#2895)
* feat: add toast notification system with snackbar component

* feat: add session deletion functionality

* feat: support batch operations for updating session persona, provider, LLM, and TTS statuses

fix: #2263

* feat: fix conversations becoming unrecoverable when the conversation state is closed and the conversation management store is deleted

fixes: #2309
2025-09-27 20:36:30 +08:00
Soulter
a69195a02b fix: webchat streaming queue interrupted after user closing tab (#2892)
* feat: add toast notification system with snackbar component

* feat: enhance chat functionality with conversation running state and notifications

* fix: update bot message avatar rendering during streaming

* feat: implement conversation tracking context manager for webchat

* fix: update conversation tracking to remove conversation ID on exit
2025-09-27 17:57:12 +08:00
RC-CHN
19d7438499 fix: unit tests (#2760)
* fix: fix some unit tests for main and plugin_manager

* fix: fix some dashboard tests

* remove: delete the currently unused configuration test script

* perf: split plugin create/read/update/delete into independent unit tests

* refactor: rework the plugin manager tests to isolate test instances in a temporary environment

* test: add unit tests for the dashboard file check covering different cases

* style: format code

* remove: delete unused import statements

* delete: remove unused test file for pipeline

---------

Co-authored-by: Soulter <905617992@qq.com>
2025-09-27 14:43:04 +08:00
anka
ccb380ce06 feat: support connecting to Coze (#2858)
* feat: adapt the coze provider
1. Support file upload
2. Support multimodal input
3. Support streaming
4. Support saving conversation history in the API-side context
5. Support a dify-like forget interface

* style: format code

* fix: type checking error

* fix: fixes:
1. When using the coze API-side context, the context is no longer passed redundantly
2. When using AstrBot's context, image information in it is handled correctly
3. When uploading images, a non-persistent cache avoids duplicate uploads (without it, parsing the context and converting files into file_id for the coze api wastes a lot of network resources)
4. Fix commands such as reset not correctly resetting the context

* fix: remove redundant dify-specific assertions in some places for Coze compatibility

* style: adjust how configuration items are displayed and how the webchat platform handles unexpected types

* fix: put conversation_id in the correct place in the request

* refactor: extract coze api client

* refactor: improve image processing logic in ProviderCoze

* chore: remove file ext guessing

---------

Co-authored-by: Soulter <905617992@qq.com>
2025-09-27 14:23:29 +08:00
Ding Jiatong
a35c439bbd fix: use an incremental decoder to fix occasional decode errors in Dify streaming responses (#2888)
* fix: fix the utf-8 decode error on linux (a minimal sketch follows this entry)

* feat: use incremental decoder

* fix: add type hint for response parameter in _stream_sse and refactor file upload method

---------

Co-authored-by: Soulter <905617992@qq.com>
2025-09-26 23:04:58 +08:00
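A minimal sketch of the incremental-decoder approach: UTF-8 characters split across stream chunks decode cleanly instead of raising, which is the failure mode the commit describes:

```python
# codecs.getincrementaldecoder buffers incomplete byte sequences between
# chunks, so a multi-byte character split across two SSE chunks still decodes.
import codecs

decoder = codecs.getincrementaldecoder("utf-8")()

chunks = ["你好".encode("utf-8")[:2], "你好".encode("utf-8")[2:], b" world"]
text = "".join(decoder.decode(chunk) for chunk in chunks)
text += decoder.decode(b"", final=True)  # flush any buffered bytes
print(text)  # 你好 world
```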
Soulter
09d1f96603 fix: fix /alter_cmd being unable to control command groups, sub-command groups, and sub-commands under sub-command groups (#2873)
* fix: revert changes in command_group.py at 782c036 to fix command group permission check

* fix: do not pass the GroupCommand handler

* perf: alter_cmd supports configuring sub-commands and command groups

* chore: remove test commands and subcommands from test_group

* chore: add cache for complete command names list in CommandFilter and CommandGroupFilter

---------

Co-authored-by: Dt8333 <25431943+Dt8333@users.noreply.github.com>
Co-authored-by: Soulter <905617992@qq.com>
2025-09-26 14:16:50 +08:00
鸦羽
26aa18d980 Merge pull request #2881 from Raven95676/fix/2879
fix: add missing id field
2025-09-26 11:31:28 +08:00
Raven95676
d10b542797 chore: format 2025-09-26 11:05:32 +08:00
Raven95676
ce4e4fb8dd fix: add missing id field 2025-09-26 10:59:11 +08:00
Soulter
8f4a31cf8c chore: bump version to 4.1.7 2025-09-23 22:16:36 +08:00
Soulter
23549f13d6 Feature: support batch deletion of conversation history (#2859)
* feat: support batch deletion of conversations

closes: #2784

* feat: add a loading-state disable to improve the interaction experience
2025-09-23 22:10:56 +08:00
Soulter
869d11f9a6 perf: improve configuration validation performance and remove implicit config type conversion
fixes: #2646
2025-09-23 21:04:14 +08:00
Soulter
02e73b82ee fix: fix the update dialog failing to open 2025-09-23 20:29:10 +08:00
Soulter
f85f87f545 feat: WebChat supports manually entering a model name
closes: #2830
2025-09-23 15:32:54 +08:00
Soulter
1fff5713f3 refactor: decouple some components of PlatformPage and ProviderPage 2025-09-23 15:32:54 +08:00
Soulter
8453ec36f0 docs: Revise links for documentation and blog in README
Updated links in the README for documentation and blog.
2025-09-23 14:12:05 +08:00
Soulter
d5b3ce8424 fix: update download_dashboard to log specific dashboard release URLs 2025-09-23 13:10:33 +08:00
Soulter
80cbbfa5ca chore: bump version to 4.1.6 2025-09-23 13:02:06 +08:00
Soulter
9177bb660f fix: improve error handling in run_agent for streaming responses 2025-09-23 10:34:24 +08:00
Soulter
a3df39a01a perf: unified button styles
closes: #2748
2025-09-23 10:27:52 +08:00
Soulter
25dce05cbb refactor: improve webchat UI (#2853) 2025-09-23 10:19:26 +08:00
Soulter
1542ea3e03 fix: context.get_provider_by_id issue 2025-09-22 17:22:50 +08:00
Soulter
6084abbcfe feat: add user_id search capability in get_filtered_conversations 2025-09-21 22:45:55 +08:00
Soulter
ed19b63914 chore: bump version to v4.1.5 2025-09-21 21:47:14 +08:00
Soulter
4efeb85296 chore: remove uv.lock file 2025-09-21 21:47:06 +08:00
shangxue
fc76665615 feat: Satori adapter fails to correctly recognize quoted messages (#2686)
* Update PlatformPage.vue

* Update PlatformPage.vue

* Update PlatformPage.vue

* Update satori_adapter.py

* Update satori_event.py

* Update default.py

* Update satori_adapter.py

* Update satori_adapter.py

* style: format code

---------

Co-authored-by: Soulter <905617992@qq.com>
2025-09-21 21:45:35 +08:00
Soulter
3a044bb71a fix: fix the first streamed output being overwritten during streaming on Telegram (#2838)
fixes: #2481
2025-09-21 21:24:47 +08:00
Soulter
cddd606562 perf: improve ExtensionPage 2025-09-21 21:10:03 +08:00
Soulter
7a5bc51c11 fix: prefer the default image caption provider when recognizing images in quoted messages (#2836)
* fix: prefer the default image caption provider when recognizing images in quoted messages

closes: #2821

* fix: add logging for the case where no image caption provider is found

* style: format code
2025-09-21 20:55:32 +08:00
Soulter
9f939b4b6f fix: fix the broken keyword search on the conversation management page and polish some UI styles (#2837)
* fix: fix the broken keyword search on the conversation management page and polish some UI styles

fixes: #2782

* style: format code

* fix: remove debug print statements from conversation retrieval methods
2025-09-21 20:55:15 +08:00
Soulter
80a86f5b1b fix: fix type checking errors in astrbot.core.star and other packages (#2787)
* fix: fix type checking errors in astrbot.core.star and other packages

* refactor: improve type checking and annotations

* chore: ruff format
2025-09-21 18:10:04 +08:00
yitaikarma
a0ce1855ab fix: improve the styling of memory usage and message data trends on the statistics page (#2826)
* fix: adjust the layout of memory usage and message trend analysis on the statistics page and improve responsive display

* fix: hide the trend icon when the growth rate is zero
2025-09-21 17:06:47 +08:00
anka
a4b43b884a fix: fix the aiocqhttp adapter inconsistency where at-mentions fetched the group nickname but messages did not (#2769)
* fix: fix the inconsistency where at-mentions fetched the group nickname but messages did not

* style: format code
2025-09-19 13:04:51 +08:00
PaloMiku
824c0f6667 feat: add a Misskey platform adapter (#2774)
* feat: add Misskey platform adapter

* fix: fix letter-case issues in the Misskey configuration keys

* feat: add message chain serialization and visibility resolution logic

* chore: delete the broken Misskey platform adapter utility file

* docs: update the Misskey message adapter settings descriptions

* feat: support continuous single-user contextual conversations on Misskey

* feat: add an ID configuration for the Misskey platform adapter to Astrbot

* feat: refactor the Misskey platform adapter, extracting common utility functions and improving message handling

* refactor: clean up the Misskey platform adapter and API code, removing redundant comments

* fix: fix multiple issues found during use and reported by users

* fix: change the mention format so mentions start on a new line, improving post appearance and readability.

* feat: add default visibility and local-only settings and improve the Misskey platform adapter configuration

* fix: update the Misskey platform adapter configuration to use a prefix to prevent potential future conflicts with other adapters

* chore: rename 'misskey' to 'Misskey' in config

* feat: add chat message response support to the Misskey adapter and refactor receive/send logic to use Websockets

* fix: improve Misskey WebSocket message logging

* refactor: refine message handling and logging in the Misskey adapter

* fix: strengthen the Misskey WebSocket reconnection logic

* feat: enhance Misskey adapter message handling, supporting room messages and related features, refactoring common functions, and removing duplicated code

* fix: do not block the wake prefix from waking the default LLM

* fix: pass through all group chat message events

* fix: fix message_type

* perf: implement send_streaming to support streaming requests

* docs(README): update README.md

* fix: super().send(message) was being ignored

* fix: correct the session structure

Using ':' as the separator could cause problems when assembling the umo

---------

Co-authored-by: Soulter <37870767+Soulter@users.noreply.github.com>
Co-authored-by: Soulter <905617992@qq.com>
2025-09-18 23:34:41 +08:00
Soulter
a030fe8491 feat: add audioop-lts dependencies (#2809)
pydub requires audioop, but this built-in module was removed in Python 3.13
2025-09-18 23:32:04 +08:00
Soulter
3a9429e8ef fix: on_tool_end hook unavailable 2025-09-17 15:48:57 +08:00
anka
c4eb1ab748 chore: bump version to 4.1.4 2025-09-16 20:09:11 +08:00
anka
29ed19d600 Merge pull request #2783 from AstrBotDevs/revert-2778-fix-handler-type
Revert "fix: parameter type/default handling in CommandFilter"
2025-09-16 20:01:23 +08:00
anka
0cc65513a5 Revert "fix: parameter type/default handling in CommandFilter" 2025-09-16 20:01:05 +08:00
Soulter
debc048659 chore: bump version to 4.1.3 2025-09-16 13:16:21 +08:00
邹永赫
92f5c918dd Merge pull request #2778 from MliKiowa/fix-handler-type
fix: parameter type/default handling in CommandFilter
2025-09-16 13:43:53 +09:00
手瓜一十雪
9519f1e8e2 fix: parameter type/default handling in CommandFilter
Adjusts logic to prioritize type annotations over default values when setting handler_params in CommandFilter. This ensures that parameter types are correctly inferred when available.
2025-09-16 11:49:27 +08:00
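A small illustrative sketch of the priority described above: infer a handler parameter's type from its annotation first, and only fall back to the type of its default value. The helper name is hypothetical:

```python
# Build a name -> type mapping for a handler's parameters, preferring the
# declared annotation over the default value's runtime type.
import inspect


def infer_param_types(handler) -> dict[str, type | None]:
    types: dict[str, type | None] = {}
    for name, param in inspect.signature(handler).parameters.items():
        if param.annotation is not inspect.Parameter.empty:
            types[name] = param.annotation
        elif param.default is not inspect.Parameter.empty:
            types[name] = type(param.default)
        else:
            types[name] = None
    return types


def demo(count: int = "3", note="hi"):  # the int annotation wins over the str default
    pass


print(infer_param_types(demo))  # {'count': <class 'int'>, 'note': <class 'str'>}
```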
Soulter
a8f874bf05 fix: fix segmented replies where sending the quoted message separately left the first message empty (#2757) 2025-09-16 10:45:39 +08:00
anka
9d9917e45b feat: add group name recognition to the system prompt, with corresponding configuration (#2770)
* feat🤖: add group name recognition to the system prompt, with corresponding configuration

* feat: improve the implementation, refactor AstrBotMessage, keep backward compatibility

* style: format
2025-09-16 10:23:08 +08:00
Soulter
91ee0a870d fix: handle image value correctly for mcp BlobResourceContents (#2753) 2025-09-16 08:22:18 +08:00
dependabot[bot]
6cbbffc5a9 chore(deps): bump the github-actions group with 2 updates (#2771)
Bumps the github-actions group with 2 updates: [actions/checkout](https://github.com/actions/checkout) and [actions/setup-python](https://github.com/actions/setup-python).


Updates `actions/checkout` from 4 to 5
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v4...v5)

Updates `actions/setup-python` from 5 to 6
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](https://github.com/actions/setup-python/compare/v5...v6)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: github-actions
- dependency-name: actions/setup-python
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: github-actions
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-16 08:19:31 +08:00
Yokami
8f26fd34d1 feat: add copy button for service providers (#2767) 2025-09-15 22:17:00 +08:00
Soulter
fda655f6d7 fix: fix the default TTS or STT model still not taking effect after being configured (#2758)
fixes: #2731
2025-09-15 22:08:40 +08:00
Soulter
a663d6509b chore: bump version to 4.1.2 2025-09-14 21:07:36 +08:00
Soulter
9ec8839efa perf: skip disabled providers when checking provider availability 2025-09-14 21:01:32 +08:00
Soulter
a7a0350eb2 fix: the "content safety" group under platform configuration did not take effect (#2751) 2025-09-14 20:25:53 +08:00
Soulter
39a7a0d960 fix: revert "feat: support no space between the command name and the first argument (#2650)" for command issue
This reverts commit 9bfa726107.
2025-09-14 19:31:15 +08:00
Soulter
7740e1e131 ci: add ci stage of code format checking (#2750)
* style: ruff format

* ci(dashboard-ci): ensure GitHub Release action only runs on push events

* ci(code-format): ruff format and ruff check
2025-09-14 18:05:58 +08:00
Soulter
9dce1ed47e chore(github): revise PR template
Updated the pull request template to improve clarity and fix formatting issues.
2025-09-14 14:44:46 +08:00
Soulter
e84a00d3a5 fix: fix different personas configured in multiple configuration profiles not taking effect (#2739)
fixes: #2724
2025-09-14 14:09:46 +08:00
anka
88a944cb57 chore(github): polish the PR template 2025-09-14 12:58:34 +08:00
Soulter
20c32e72cc chore: bump version to 4.1.1 2025-09-13 16:19:40 +08:00
Soulter
4788c20816 fix: model variable referenced before assignment 2025-09-13 16:18:22 +08:00
Soulter
e83fc570a4 chore: bump version to 4.1.0 2025-09-13 13:31:49 +08:00
Yokami
e841b6af88 feat: support customizing OpenAI API extra_body parameters in the WebUI (#2719)
* feat: support custom headers for OpenAI-style models to fix qwen models being unusable

* fix: fix the issue the AI pointed out

* fix: right-align boolean switches
2025-09-13 13:23:49 +08:00
Dt8333
ea6f209557 fix: fix the LLM still calling disabled tools (#2729)
* fix: fix the LLM calling disabled tools

* feat: move the tool-disabled check to improve efficiency
When no available tools are set, the loop check remains
Once available tools are set, the check happens when the tools are fetched
2025-09-12 21:36:10 +08:00
Zhalslar
9bfa726107 feat: support no space between the command name and the first argument (#2650)
Commands registered with @filter.command in plugins could not be handled when the user typed "command+argument" with no space in between; a few lines of changes make this compatible
2025-09-12 15:40:37 +08:00
shangxue
d24902c66d feat: add a --webui-dir startup argument to specify the WebUI build directory (#2680)
* Update main.py

* Update server.py

* Update main.py

* Update main.py

* Update main.py

* Update initial_loader.py

* Update server.py

* Update main.py

* chore: update webui_dir type hint and improve dashboard file check logic

---------

Co-authored-by: Soulter <905617992@qq.com>
2025-09-12 15:15:29 +08:00
RC-CHN
72aea2d3f3 feat: allow adding multiple tavily API Keys for rotation (#2725)
* feat: allow adding multiple tavily API Keys for rotation

* perf: concurrency-safe retrieval and rotation of Tavily API keys from the list (see the sketch after this entry)

* fix: automatically migrate the legacy websearch_tavily_key to list format and save it
2025-09-12 15:03:47 +08:00
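A sketch of concurrency-safe round-robin over several API keys; the `KeyPool` class and key names are placeholders rather than the project's actual configuration schema:

```python
# Rotate through a list of API keys under an asyncio.Lock so concurrent
# requests never race on the shared iterator.
import asyncio
from itertools import cycle


class KeyPool:
    def __init__(self, keys: list[str]):
        self._cycle = cycle(keys)
        self._lock = asyncio.Lock()

    async def next_key(self) -> str:
        async with self._lock:  # only one coroutine advances the cycle at a time
            return next(self._cycle)


async def main():
    pool = KeyPool(["tvly-key-A", "tvly-key-B", "tvly-key-C"])
    print([await pool.next_key() for _ in range(5)])


asyncio.run(main())
```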
RC-CHN
dc9612d564 fix: fix custom text-to-image templates being overwritten after a version update (#2677)
* perf: update template management to keep user-defined templates in the data directory and improve hot-reload logic

* refactor: refine template management, reworking template copy and initialization and strengthening user template management

* chore: remove useless comments

* remove: remove an unreachable exception in the t2i code

* style: format code

* fix: trim whitespace from template names in create, update, and delete operations

---------

Co-authored-by: Soulter <905617992@qq.com>
2025-09-12 13:34:07 +08:00
Soulter
1770556d56 fix: fix tool-call content not showing in webchat after a reload (#2727) 2025-09-12 13:05:33 +08:00
Soulter
888fb84aee fix: fix the SSE connection dropping automatically during long-running Agent tasks in WebChat 2025-09-12 13:04:27 +08:00
Soulter
d597fd056d fix: fix knowledge bases failing to be created 2025-09-11 17:27:57 +08:00
quirrel
dea0ab3974 fix: fix sorting by the status column header not working in the plugin page table view (#2714) 2025-09-11 16:20:33 +08:00
Soulter
da6facd7d7 docs: fix the incorrect developer group info 2025-09-11 12:44:59 +08:00
Soulter
bb8ab5f173 docs: update readme 2025-09-11 10:40:30 +08:00
Soulter
ac8a541059 docs: remove message stat badge
Removed the old dynamic JSON badge for message volume.
2025-09-11 10:36:18 +08:00
Soulter
0e66771f0e docs: revise acknowledgments and add similar projects
Updated project acknowledgments and added links to similar open-source bot projects.
2025-09-11 10:35:11 +08:00
Soulter
d3a295a801 ci: add auto_assign.yml for auto PR reviewer assignment 2025-09-10 13:21:34 +08:00
shangxue
f2df771771 fix: 修复 Satori 适配器教程链接 (#2668)
* Update PlatformPage.vue

* Update PlatformPage.vue
2025-09-09 21:59:06 +08:00
dependabot[bot]
7b72cd87a5 chore(deps): bump the github-actions group with 2 updates (#2674)
Bumps the github-actions group with 2 updates: [actions/setup-python](https://github.com/actions/setup-python) and [actions/stale](https://github.com/actions/stale).


Updates `actions/setup-python` from 5 to 6
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](https://github.com/actions/setup-python/compare/v5...v6)

Updates `actions/stale` from 9 to 10
- [Release notes](https://github.com/actions/stale/releases)
- [Changelog](https://github.com/actions/stale/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/stale/compare/v9...v10)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: github-actions
- dependency-name: actions/stale
  dependency-version: '10'
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: github-actions
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-09 08:46:04 +08:00
anka
9431efc6d1 feat: add an on_platform_loaded hook fired after a message platform adapter finishes instantiating (#2651)
* feat⚒️: add a hook for platform loading

* fix: add the missing api

* fix: catch only Exception
2025-09-09 08:44:37 +08:00
Soulter
7c3f5431ba chore: bump version to 4.0.0 2025-09-07 21:19:19 +08:00
anka
d98cf16a4c feat: add a wrapper for proactively sending messages by QQ number / group number, plus event construction (#2629)
* feat: add a wrapper for proactively sending messages by QQ number / group number, plus event construction

* fix: add a hint for unsupported platforms and correct docstrings

* chore: lint

---------

Co-authored-by: Soulter <905617992@qq.com>
2025-09-07 17:04:29 +08:00
Soulter
2c3c3ae546 fix: remove useless debug logs to simplify command registration 2025-09-07 11:37:34 +08:00
Soulter
905eef48e3 feat: add a security reminder when the OneBot service Token is empty (#2648) 2025-09-07 00:51:46 +08:00
RC-CHN
b31b520c7c feat: support managing T2I templates (#2638)
* feat: add a backend API for t2i template management and remove the duplicated functionality in config.py

* feat: add the T2I template management frontend, supporting template creation, application, and reset

* refactor: fix the incorrect save logic and move the t2i route-info logging into the base class

* remove: remove the logging during route registration

* chore: format code

* fix: update input variant from solo to outlined for better UI consistency

---------

Co-authored-by: Soulter <905617992@qq.com>
2025-09-07 00:14:28 +08:00
shangxue
17aee086a3 feat: add Satori protocol adapter support (#2633)
* Create satori_adapter.py

* Add files via upload

* Update default.py

* Update manager.py

* Update platform_adapter_type.py

* Update PlatformPage.vue

* Add files via upload

* Update default.py

* Update manager.py

* Update platform_adapter_type.py

* Update PlatformPage.vue

* Add files via upload

* Update default.py

* chore: format code

* feat: fix Image and Audio parsing and fix message_str parsing

* perf: improve robustness

* feat: add Satori configuration descriptions and remove the adapter's default configuration

---------

Co-authored-by: Soulter <905617992@qq.com>
2025-09-06 23:52:00 +08:00
Zhalslar
c1756e5767 fix: make the component type attribute use the enum values (#2628)
Each component's type attribute in components.py was assigned a raw string, which IDEs flag.
Changed it to use the already defined ComponentType enum.
Also fixed errors in several components where convert_to_base64 checked the path when the url was empty.
2025-09-06 19:22:49 +08:00
anka
2920279c64 Fix: correct QQ group member nickname retrieval (#2626)
* feat: correct group nickname retrieval

* fix: add a fallback mechanism
2025-09-06 19:16:57 +08:00
Soulter
1f0f985b01 docs: update readme
Updated README to improve clarity and organization. Changed section titles and removed unnecessary content.
2025-09-06 16:21:33 +08:00
Soulter
0762c81633 Update Discord link in README.md 2025-09-06 11:49:05 +08:00
Soulter
28ef301ccc docs: update readme 2025-09-06 11:48:37 +08:00
Soulter
26c6a2950f 📦 release: bump version to v4.0.0-beta.5 2025-09-05 17:42:38 +08:00
Soulter
5082876de3 fix: fix possibly not getting an LLM response under v4.0.0
closes: #2622
2025-09-05 17:20:29 +08:00
Soulter
e50e7ad3d5 fix: ensure deep copy of config_data before posting 2025-09-05 16:45:34 +08:00
卢小辉
45a4a6b6da feat: add rate, volume, and pitch parameters to edge_tts (#2625)
* Fix the python executor file upload reporting a parameter error on QQ by switching to a local url

* Add 3 default parameters to edge_tts for easy configuration via the UI
2025-09-05 15:23:21 +08:00
Soulter
02918b7267 perf: add an abconf_data cache to improve performance 2025-09-04 23:48:33 +08:00
Soulter
6c662a36c1 fix: adapt to qwen3 thinking-style models
fixes: #2631
2025-09-04 20:26:52 +08:00
Soulter
b78fe3822a perf: improve the rerank model availability check 2025-09-04 15:46:23 +08:00
Soulter
35eda37e83 Merge remote-tracking branch 'origin/releases/v3.5.27' 2025-09-04 15:30:15 +08:00
Soulter
176a8e7067 chore: add no_proxy config item 2025-09-04 15:23:58 +08:00
Soulter
61d4f1fd4b 📦 release: v3.5.27 2025-09-04 15:01:44 +08:00
Soulter
121b68995e chore: update changelog 2025-09-04 14:34:23 +08:00
Soulter
d11f1d8dae perf: enhance update checks to consider pre-release versions 2025-09-04 14:33:19 +08:00
Soulter
c0ef2b5064 📦 release: v3.5.27 2025-09-04 13:56:47 +08:00
Soulter
2a7308363e fix: pin an explicit version when downloading the WebUI 2025-09-04 13:54:16 +08:00
Soulter
dc0c556f96 ci: build the webui alongside the docker image and include it in the image 2025-09-04 13:42:26 +08:00
Soulter
ba2ee1c0aa fix: download the specified version rather than latest when first downloading the webui build files 2025-09-04 13:27:55 +08:00
Zhalslar
0f8b550d68 fix: aiocqhttp prefers session_id when sending messages (#2623)
* fix: aiocqhttp prefers session_id when sending messages

Currently aiocqhttp relies on raw_message to send messages, and when raw_message is empty it cannot fall back to group_id or user_id. The more logical order is to prefer session_id (group_id or user_id) with raw_message as the fallback (a minimal sketch follows this entry)

* Update aiocqhttp_message_event.py

* fix: validate session_id as integer and improve send_message docstring

---------

Co-authored-by: Soulter <905617992@qq.com>
2025-09-04 11:34:40 +08:00
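An illustrative sketch of the send-target priority described above: use a numeric session_id first and only fall back to ids carried by raw_message. The payload keys mirror the OneBot convention, but the helper is a stand-in, not the adapter's real code:

```python
# Build the send target for an outgoing message: prefer a validated numeric
# session_id, otherwise fall back to whatever the original event carried.

def build_send_target(session_id: str, message_type: str, raw_message: dict | None) -> dict:
    if session_id.isdigit():
        key = "group_id" if message_type == "GroupMessage" else "user_id"
        return {key: int(session_id)}
    raw_message = raw_message or {}
    if "group_id" in raw_message:
        return {"group_id": raw_message["group_id"]}
    if "user_id" in raw_message:
        return {"user_id": raw_message["user_id"]}
    raise ValueError("no usable target for sending")


print(build_send_target("123456", "GroupMessage", None))  # {'group_id': 123456}
```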
Soulter
ed1fc98821 Merge pull request #2621 from AstrBotDevs/fix/gemini-api-error-handle
Fix: fix the error when e.message is None and some lint errors
2025-09-04 11:20:05 +08:00
Soulter
fa53b468fd fix: ensure function call name and args are not None before processing 2025-09-04 11:18:58 +08:00
Soulter
4e2533d320 feat: add pre-release check for Docker image tagging 2025-09-04 09:30:21 +08:00
Soulter
388ae49e55 fix: fix the error when e.message is None and some lint errors 2025-09-03 22:25:18 +08:00
Soulter
f3f347dcba 📦 release: bump version to v4.0.0-beta.4 2025-09-03 13:29:20 +08:00
Soulter
655be3519c perf: prompt the user to restart the program after data migration completes
closes: #2613
2025-09-03 13:21:56 +08:00
Soulter
06df2940af chore: change identifier description 2025-09-03 12:46:10 +08:00
Soulter
4149549e42 fix: KeyError arprompt 2025-09-03 12:45:34 +08:00
Soulter
da351991f8 📦 release: bump version to v4.0.0-beta.3 2025-09-03 01:01:48 +08:00
Soulter
3305152e50 fix: fix personas with Chinese IDs not being savable 2025-09-03 00:59:07 +08:00
Soulter
bea7bae674 fix: dict read 2025-09-03 00:56:41 +08:00
Soulter
45773d38ed 📦 release: bump version to v4.0.0-beta.2 2025-09-03 00:32:49 +08:00
Soulter
8d4c176314 fix: correct image_caption logic and remove redundant config call 2025-09-03 00:31:18 +08:00
Soulter
9ca5c87c4c fix: complete requirements.txt 2025-09-03 00:05:43 +08:00
Soulter
36a6f00e5f Merge pull request #2610 from AstrBotDevs/releases/4.0.0 (#2610)
Release: v4.0.0-beta.1
2025-09-02 23:47:21 +08:00
Soulter
e24a5b4cb5 Revert "Release: v4.0.0-beta.1 (#2509)" (#2609)
This reverts commit f88031b0c9.
2025-09-02 23:44:36 +08:00
Soulter
f88031b0c9 Release: v4.0.0-beta.1 (#2509)
* Refactor: using sqlmodel(sqlchemy+pydantic) as ORM framework and switch to async-based sqlite operation (#2294)

* stage

* stage

* refactor: using sqlchemy as ORM framework, switch to async-based sqlite operation

- using sqlmodel as ORM(based on sqlchemy and pydantic)
- add Persona, Preference, PlatformMessageHistory table

* fix: conversation

* fix: remove redundant explicit session.commit, and fix some type error

* fix: conversation context issue

* chore: remove comments

* chore: remove exclude_content param

* Fix: 当多个相同消息平台实例部署时上下文可能混乱(共享) (#2298)

* perf: update astrbot event session format, using platfrom id to ensure uniqueness

fixes: #1000

* fix: 更新 MessageSession 类以使用 platform_id 作为唯一标识符,并调整相关方法以确保一致性

* fix: 更新 MessageSession 文档以明确 platform_id 的赋值规则,并调整 get_platform 和 get_platform_inst 方法的返回类型

* Improve: 引入全新的人格管理模式以及重构函数工具管理器 (#2305)

* feat: add persona management

* refactor:  重构函数工具管理器,引入 ToolSet,并让 Persona 支持绑定 Tools

* feat: 更新 Persona 工具选择逻辑,支持全选和指定工具的切换

* feat: 更新 BaseDatabase 中的 persona 方法返回类型,支持返回 None

* fix: platform id

* feat: add support to sync mcp servers from ModelScope (#2313)

* fix: 修复访问令牌的空格问题

* chore: 移除 MCP 市场相关逻辑 (#2314)

* chore: 移除 MCP 市场相关路由

* Refactor: 重构配置文件管理,以支持更灵活的、会话粒度的(基于 umo part)配置文件隔离 (#2328)

* refactor: 重构配置文件管理,以支持更灵活的、基于 umo part 的配置文件隔离

* Refactor: 重构配置前端页面,新增数个配置项 (#2331)

* refactor: 重构配置前端页面,新增数个配置项

* feat: 完善多配置文件结构

* perf: 系统配置入口

* fix: normal config item list not display

* fix: 修复 axios 请求中的上下文引用问题

* chore: remove status checking in chat page

* fix: 修复 stage 在不同 pipeline 中被重复使用的问题和 persona 相关问题

* Feature: 增加图片转述提供商配置、支持用户自定义模型模态能力 (#2422)

* feat: 增加图片转述提供商配置、支持用户自定义模型模态能力

* fix: 修复 LLMRequestSubStage 中会话管理方法参数不一致的问题,简化方法调用

* Feature: 优化 WebSearch 的爬取网页速度并且支持使用 Tavily 作为搜索引擎 (#2427)

* feat: 优化了 websearch 的速度;支持 Tavily 作为搜索引擎

* fix: 优化日志记录格式,修复搜索结果处理中的索引和内容显示问题

* feat: 添加对话选中状态管理,优化默认对话加载逻辑

* feat: 支持通过解析URL 的方式导入网页数据到知识库 (#2280)

* feat:为webchat页面添加一个手动上传文件按钮(目前只处理图片)

* fix:上传后清空value,允许触发change事件以多次上传同一张图片

* perf:webchat页面消息发送后清空图片预览缩略图,维持与文本信息行为一致

* perf:将文件输入的值重置为空字符串以提升浏览器兼容性

* feat:webchat文件上传按钮支持多选文件上传

* fix:释放blob URL以防止内存泄漏

* perf:并行化sendMessage中的图片获取逻辑

* feat:完成从url获取部分的UI

* feat: 添加从URL导入功能的组件

* fix: 优化导入结果处理,添加整体摘要和主题摘要的文件命名

* perf: 更新url导入选项添加默认值

* perf: 在导入url的部分配置项未启用时隐藏暂不使用的下拉框选项

* feat: 添加上传前提提示信息至导入url至知识库功能

* feat: 更新导入功能提示信息,添加上传状态通知

* fix: 优化url转知识库错误处理

* feat: 合并知识库的上传文件和 URL 标签页

* feat: 删除导入URL至知识库功能的相关组件

---------

Co-authored-by: Soulter <905617992@qq.com>

* feat: 添加条件显示逻辑以优化插件配置项的可见性管理 (#2433)

* Feature: 支持在 WebUI 配置文件页中配置默认知识库 (#2437)

* feat: 支持配置默认知识库

* chore: clean code

* refactor: 重构 Function Tool 管理并初步引入 Multi Agent 及 Agent Handsoff 机制  (#2454)

* stage

* refactor: 重构 Function Tool 管理并引入 multi agent handsoff 机制

- Updated `star_request.py` to use the global `call_handler` instead of context-specific calls.
- Modified `entities.py` to remove the dependency on `FunctionToolManager` and streamline the function tool handling.
- Refactored `func_tool_manager.py` to simplify the `FunctionTool` class and its methods, removing deprecated code and enhancing clarity.
- Adjusted `provider.py` to align with the new function tool structure, removing unnecessary type unions.
- Enhanced `star_handler.py` to support agent registration and tool association, introducing `RegisteringAgent` for better encapsulation.
- Updated `star_manager.py` to handle tool registration for agents, ensuring proper binding of handlers.
- Revised `main.py` in the web searcher package to utilize the new agent registration system for web search tools.

* chore: websearch

* perf: 减少嵌套

* chore: 移除未使用的 mcp 导入

* feat: 添加 WebUI 迁移助手以及相关迁移方法 (#2477)

* fix: 修复迁移对话时的一些问题

* feat: 增加工具使用模型能力选项

* feat: 添加知识库插件更新检查和更新功能

* perf: 调整 WebUI sidebar 顺序

* refactor: 重构 SharedPreference 类并采用数据库存储替换 json 存储 (#2482)

* perf: 使用 run_coroutine_threadsafe

Co-authored-by: Raven95676 <raven95676@gmail.com>

* Feature: 支持配置重排序模型(vLLM API 格式)用于 score 任务 (#2496)

* feat: 支持添加重排序模型(vLLM API 格式)用于 score 任务

* fix: update rerank API base URL to use localhost

* feat: 知识库支持配置重排序模型

* fix: remove debug print statement for reranked results in FaissVecDB

* fix: 移除知识库中的提示文本

* Feature: 支持在配置文件配置可用的插件组 (#2505)

* feat: 增加可用插件集合配置项

* remove: 旧版平台可用性配置

已经基于多配置文件实现。

* feat: 应用配置文件插件可用性配置

* perf: hoist if from if

* feat: llm_tool 装饰器返回值支持返回 mcp 库中 tool 的返回值类型(mcp.type.CallToolResult) (#2507)

* fix: add type definition for migrationDialog and ensure open method exists before calling

* chore: update project version to 4.0.0

* feat: 多 t2i 服务的随机负载均衡 (#2529)

* fix: bugfixes

* Improve: 扩大配置文件生效范围的自定义程度到会话粒度 (#2532)

* feat: 扩大配置文件生效范围的自定义程度

* perf: 冲突检测

* refactor: simplify config form validation and improve conflict message clarity

* chore: clean code

* feat: 插件配置支持多个快捷魔法配置项

* chore: 修复当自动更新 webchat title 时,history 被重置的问题

* bugfixes

* feat: add custom T2I template editor (#2581)

* perf: add option to clear provider selection in ProviderSelector component

* 📦 release: bump version to v4.0.0-beta.1

* chore: delete uv.lock

---------

Co-authored-by: RC-CHN <67079377+RC-CHN@users.noreply.github.com>
Co-authored-by: Raven95676 <raven95676@gmail.com>
2025-09-02 23:39:24 +08:00
Soulter
830151e6da chore: delete uv.lock 2025-09-02 23:31:51 +08:00
Soulter
1e14fba81a 📦 release: bump version to v4.0.0-beta.1 2025-09-02 23:27:55 +08:00
Soulter
7b8800c4eb perf: add option to clear provider selection in ProviderSelector component 2025-09-02 21:49:11 +08:00
Soulter
8f4625f53b Merge remote-tracking branch 'origin/master' into releases/4.0.0 2025-08-31 20:37:53 +08:00
Soulter
1e5f243edb 📦 release: v3.5.26 2025-08-31 20:25:05 +08:00
Soulter
e5eab2af34 fix: specify type for devCommits to enhance type safety 2025-08-31 20:16:18 +08:00
Soulter
c10973e160 fix: update getDevCommits function to support GitHub proxy and handle errors more gracefully 2025-08-31 20:06:37 +08:00
Soulter
b1e4bff3ec feat: support updating the WebUI to a specified version during an upgrade 2025-08-31 19:55:46 +08:00
Soulter
c1202cda63 fix: update GitHub release action to use correct commit SHA variable 2025-08-31 11:52:42 +08:00
Soulter
32d6cd7776 fix: update GitHub release action parameters for clarity 2025-08-31 11:50:26 +08:00
Soulter
2f78d30e93 feat: automated release from every commit in master branch 2025-08-31 11:42:28 +08:00
Junhua Don
33407c9f0d fix: fix the rounded corners and horizontal margins of the edit-session-name dialog (#2583) 2025-08-31 11:12:25 +08:00
Soulter
d2d5ef1c5c feat: add custom T2I template editor (#2581) 2025-08-31 11:11:55 +08:00
RC-CHN
98d8eaee02 feat: add no_proxy configuration support to improve proxy settings (#2564) 2025-08-26 21:08:46 +08:00
ZvZPvz
10b9228060 feat: automatically remove tools when calling deepseek-reasoner (#2531)
* Automatically remove tools when DeepSeek is called in thinking mode (a minimal sketch follows this entry)

* Update astrbot/core/provider/sources/openai_source.py

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>

* Update openai_source.py

* Update openai_source.py

---------

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>
2025-08-26 20:44:56 +08:00
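A hedged sketch of the idea above: strip tool definitions from the request payload before calling deepseek-reasoner. The payload shape is assumed OpenAI-style for illustration:

```python
# Drop tool-related fields from the request when the target model is the
# reasoning variant, mirroring the behavior this commit describes.

def prepare_payload(model: str, payload: dict) -> dict:
    if "reasoner" in model.lower():
        payload = dict(payload)          # avoid mutating the caller's dict
        payload.pop("tools", None)
        payload.pop("tool_choice", None)
    return payload


req = {"model": "deepseek-reasoner", "messages": [], "tools": [{"type": "function"}]}
print(prepare_payload(req["model"], req).keys())  # no 'tools' key left
```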
xiewoc
5872f1e017 feat: support sending voice messages via the official QQ API (#2525)
* Update dingtalk_event.py

* Add files via upload

* Add files via upload

* Update qqofficial_platform_adapter.py

* Add files via upload

* chore: clean comments

* chore: clean code

---------

Co-authored-by: Soulter <37870767+Soulter@users.noreply.github.com>
2025-08-26 20:40:59 +08:00
Soulter
5073f21002 bugfixes 2025-08-24 23:40:17 +08:00
Soulter
69aaf09ac8 chore: fix history being reset when the webchat title is auto-updated 2025-08-24 00:23:08 +08:00
Soulter
6e61ee81d8 feat: plugin configuration supports multiple quick magic configuration items 2025-08-23 21:53:26 +08:00
Soulter
cfd05a8d17 chore: clean code 2025-08-23 21:46:59 +08:00
Raven95676
29845fcc4c feat: add raw_completion support to LLMResponse for Gemini 2025-08-23 14:14:25 +08:00
Soulter
e204b180a8 Improve: 扩大配置文件生效范围的自定义程度到会话粒度 (#2532)
* feat: 扩大配置文件生效范围的自定义程度

* perf: 冲突检测

* refactor: simplify config form validation and improve conflict message clarity
2025-08-22 19:31:55 +08:00
Soulter
563972fd29 fix: bugfixes 2025-08-22 17:41:06 +08:00
AkkoYK
cbe94b84fc feat: add an optional direct reference_id setting for FishAudio TTS (#2513)
* Remove the character-name lookup from the FishAudio TTS provider and use the reference model ID directly

/// What changed ///
- Removed the complex character lookup logic
Deleted the get_reference_id_by_character method
Removed the API call that searched for a model ID by character name
- Simplified the configuration fields
Replaced the fishaudio-tts-character field with fishaudio-tts-reference-id
Set the default to Klee's model ID: 626bb6d3f3364c9cbc3aa6a67300a664
- Improved the code structure
Fetch reference_id directly at initialization
Simplified request generation by using the configured model ID directly

/// Why ///
Avoid name collisions: different models may share the same character name, so the wrong model could be fetched
Better performance: removing the extra API lookup reduces latency
More reliable: users specify the exact model ID, avoiding failed searches
Easier maintenance: less code complexity and lower maintenance cost

/// New usage ///
Users obtain the concrete model ID (such as 626bb6d3f3364c9cbc3aa6a67300a664) from the FishAudio model's detail page/URL and fill it into the fishaudio-tts-reference-id field directly.

This change makes FishAudio TTS configuration more intuitive and reliable while improving overall performance.

* Refactor: add format validation for the FishAudio TTS reference_id

Adds ID format validation to prevent invalid reference_id values from causing failed API calls.
Validates the 32-character hexadecimal format and gives a detailed error message (a minimal sketch follows this entry).

* Feat: add an optional reference_id setting for FishAudio TTS with backward compatibility

Adds an optional reference_id field: the direct ID is preferred, with fallback to character-name lookup when it is not configured.
Fully backward compatible; existing configurations need no changes.
2025-08-22 16:55:07 +08:00
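A sketch of the validation described above: a FishAudio reference_id is expected to be a 32-character hexadecimal string (the example ID comes from the commit message). The helper name is hypothetical:

```python
# Reject malformed reference IDs early instead of letting the API call fail.
import re

_REF_ID_RE = re.compile(r"^[0-9a-fA-F]{32}$")


def validate_reference_id(ref_id: str) -> str:
    if not _REF_ID_RE.fullmatch(ref_id):
        raise ValueError(
            f"invalid reference_id {ref_id!r}: expected 32 hexadecimal characters"
        )
    return ref_id.lower()


print(validate_reference_id("626bb6d3f3364c9cbc3aa6a67300a664"))
```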
Soulter
aa6f73574d feat: random load balancing across multiple t2i services (#2529) 2025-08-22 16:43:59 +08:00
Soulter
94f0419ef7 docs: update readme 2025-08-20 16:41:22 +08:00
Soulter
cefd2d7f49 chore: update project version to 4.0.0 2025-08-20 15:48:37 +08:00
Soulter
81e1e545fb fix: add type definition for migrationDialog and ensure open method exists before calling 2025-08-20 15:48:12 +08:00
Soulter
d516920e72 Merge remote-tracking branch 'origin/master' into releases/4.0.0 2025-08-20 15:43:54 +08:00
Soulter
2171372246 feat: llm_tool 装饰器返回值支持返回 mcp 库中 tool 的返回值类型(mcp.type.CallToolResult) (#2507) 2025-08-20 15:33:46 +08:00
Soulter
d2df4d0cce Feature: 支持在配置文件配置可用的插件组 (#2505)
* feat: 增加可用插件集合配置项

* remove: 旧版平台可用性配置

已经基于多配置文件实现。

* feat: 应用配置文件插件可用性配置

* perf: hoist if from if
2025-08-20 15:25:41 +08:00
Soulter
6ab90fc123 fix: 移除知识库中的提示文本 2025-08-20 11:27:02 +08:00
Soulter
1a84ebbb1e fix: remove debug print statement for reranked results in FaissVecDB 2025-08-19 17:57:16 +08:00
Soulter
c9c0352369 feat: 知识库支持配置重排序模型 2025-08-19 17:51:01 +08:00
Soulter
9903b028a3 Feature: 支持配置重排序模型(vLLM API 格式)用于 score 任务 (#2496)
* feat: 支持添加重排序模型(vLLM API 格式)用于 score 任务

* fix: update rerank API base URL to use localhost
2025-08-19 16:15:31 +08:00
Soulter
49def5d883 📦 release: v3.5.25 2025-08-19 01:32:24 +08:00
Soulter
6975525b70 feat: 添加预发布版本提醒和检测功能 2025-08-19 01:15:56 +08:00
Soulter
fbc4f8527b Merge remote-tracking branch 'origin/master' into releases/4.0.0 2025-08-19 00:53:36 +08:00
Soulter
90cb5a1951 fix: when the returned text was empty but tool calls existed, the event was wrongly terminated and the tool-call results were never returned (#2491) (a minimal sketch follows this entry)
fixes: #2448 #2379
2025-08-19 00:52:13 +08:00
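An illustrative decision helper for the fix above: treat an LLM turn as finished only when it has neither text nor pending tool calls. Names are hypothetical:

```python
# A turn with no text but with tool calls must keep running so the tool
# results can be produced and returned.

def should_terminate(completion_text: str | None, tool_calls: list | None) -> bool:
    has_text = bool(completion_text and completion_text.strip())
    has_tools = bool(tool_calls)
    return not has_text and not has_tools


print(should_terminate("", [{"name": "web_search"}]))  # False: keep running tools
print(should_terminate("", []))                        # True: nothing left to do
```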
Soulter
ac71d9f034 perf: 使用 run_coroutine_threadsafe
Co-authored-by: Raven95676 <raven95676@gmail.com>
2025-08-18 19:32:35 +08:00
Soulter
64bcbc9fc0 refactor: 重构 SharedPreference 类并采用数据库存储替换 json 存储 (#2482) 2025-08-18 19:12:26 +08:00
Soulter
9e7d46f956 perf: 调整 WebUI sidebar 顺序 2025-08-18 11:57:01 +08:00
Soulter
e911896cfb feat: 添加知识库插件更新检查和更新功能 2025-08-18 11:27:48 +08:00
Soulter
9c6d66093f feat: 增加工具使用模型能力选项 2025-08-18 10:37:10 +08:00
Soulter
b2e39b9701 fix: 修复迁移对话时的一些问题 2025-08-17 23:44:08 +08:00
Soulter
e95ad4049b feat: 添加 WebUI 迁移助手以及相关迁移方法 (#2477) 2025-08-17 23:24:30 +08:00
Soulter
1df49d1d6f refactor: 重构 Function Tool 管理并初步引入 Multi Agent 及 Agent Handsoff 机制 (#2454)
* stage

* refactor: 重构 Function Tool 管理并引入 multi agent handsoff 机制

- Updated `star_request.py` to use the global `call_handler` instead of context-specific calls.
- Modified `entities.py` to remove the dependency on `FunctionToolManager` and streamline the function tool handling.
- Refactored `func_tool_manager.py` to simplify the `FunctionTool` class and its methods, removing deprecated code and enhancing clarity.
- Adjusted `provider.py` to align with the new function tool structure, removing unnecessary type unions.
- Enhanced `star_handler.py` to support agent registration and tool association, introducing `RegisteringAgent` for better encapsulation.
- Updated `star_manager.py` to handle tool registration for agents, ensuring proper binding of handlers.
- Revised `main.py` in the web searcher package to utilize the new agent registration system for web search tools.

* chore: websearch

* perf: 减少嵌套

* chore: 移除未使用的 mcp 导入
2025-08-17 10:57:25 +08:00
Junhua Don
b71000e2f3 fix: 修复无法清空 http_proxy 代理的问题 (#2434)
* fix: 修复无法清空http_proxy代理的问题

* perf: 将“127.0.0.1”和“::1”添加到“no_proxy”以确保所有本地流量绕过代理。

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>

---------

Co-authored-by: Soulter <37870767+Soulter@users.noreply.github.com>
Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>
2025-08-17 10:49:23 +08:00
Soulter
47e6ed455e Feature: 支持在 WebUI 配置文件页中配置默认知识库 (#2437)
* feat: 支持配置默认知识库

* chore: clean code
2025-08-15 12:40:46 +08:00
Soulter
92592fb9d9 Merge remote-tracking branch 'origin/master' into releases/4.0.0 2025-08-14 23:55:08 +08:00
Soulter
02a9769b35 fix: 补充工具调用轮数上限配置 2025-08-14 23:51:22 +08:00
Soulter
7640f11bfc docs: update readme 2025-08-14 17:28:51 +08:00
Soulter
be8a0991ed feat: 添加条件显示逻辑以优化插件配置项的可见性管理 (#2433) 2025-08-14 14:56:31 +08:00
Soulter
9fa44dbcfa docs: update readme 2025-08-14 14:16:22 +08:00
RC-CHN
61aac9c80c feat: 支持通过解析URL 的方式导入网页数据到知识库 (#2280)
* feat:为webchat页面添加一个手动上传文件按钮(目前只处理图片)

* fix:上传后清空value,允许触发change事件以多次上传同一张图片

* perf:webchat页面消息发送后清空图片预览缩略图,维持与文本信息行为一致

* perf:将文件输入的值重置为空字符串以提升浏览器兼容性

* feat:webchat文件上传按钮支持多选文件上传

* fix:释放blob URL以防止内存泄漏

* perf:并行化sendMessage中的图片获取逻辑

* feat:完成从url获取部分的UI

* feat: 添加从URL导入功能的组件

* fix: 优化导入结果处理,添加整体摘要和主题摘要的文件命名

* perf: 更新url导入选项添加默认值

* perf: 在导入url的部分配置项未启用时隐藏暂不使用的下拉框选项

* feat: 添加上传前提提示信息至导入url至知识库功能

* feat: 更新导入功能提示信息,添加上传状态通知

* fix: 优化url转知识库错误处理

* feat: 合并知识库的上传文件和 URL 标签页

* feat: 删除导入URL至知识库功能的相关组件

---------

Co-authored-by: Soulter <905617992@qq.com>
2025-08-14 14:01:11 +08:00
Soulter
60af83cfee feat: 添加对话选中状态管理,优化默认对话加载逻辑 2025-08-14 13:53:36 +08:00
Soulter
cf64e6c231 Merge remote-tracking branch 'origin/master' into releases/4.0.0 2025-08-14 13:36:19 +08:00
Copilot
2cae941bae Fix incomplete Gemini streaming responses in chat history (#2429)
* Initial plan

* Fix incomplete Gemini streaming responses in chat history

Co-authored-by: Soulter <37870767+Soulter@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: Soulter <37870767+Soulter@users.noreply.github.com>
Co-authored-by: Raven95676 <Raven95676@gmail.com>
2025-08-14 11:56:50 +08:00
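The fix above concerns chat history ending up with only part of a streamed Gemini reply. As a generic, hedged sketch rather than the actual AstrBot implementation, the usual pattern is to accumulate every chunk and persist the joined text only after the stream is exhausted.

```python
from typing import AsyncIterator

async def collect_stream(chunks: AsyncIterator[str]) -> str:
    """Join all streamed text chunks before writing to history."""
    parts: list[str] = []
    async for chunk in chunks:
        parts.append(chunk)   # optionally forward each chunk to the user as it arrives
    return "".join(parts)     # save the complete reply, not just the last chunk

# Usage sketch (provider and history objects are placeholders):
# full_text = await collect_stream(provider.stream_reply(prompt))
# history.append({"role": "assistant", "content": full_text})
```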
Copilot
bc0784f41d fix: enable_thinking parameter for qwen3 models in non-streaming calls (#2424)
* Initial plan

* Fix ModelScope enable_thinking parameter for non-streaming calls

Co-authored-by: Soulter <37870767+Soulter@users.noreply.github.com>

* Tighten enable_thinking condition to only Qwen/Qwen3 models

Co-authored-by: Soulter <37870767+Soulter@users.noreply.github.com>

* qwen3 model handle

* Update astrbot/core/provider/sources/openai_source.py

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: Soulter <37870767+Soulter@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-08-14 11:18:29 +08:00
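For context on the fix above: OpenAI-compatible endpoints such as ModelScope accept vendor-specific fields through the OpenAI Python SDK's `extra_body` argument. The sketch below assumes an OpenAI-compatible base URL and that `enable_thinking` must be disabled for Qwen3 models on non-streaming calls; the model name and URL are placeholders, not a definitive configuration.

```python
from openai import AsyncOpenAI

client = AsyncOpenAI(api_key="sk-...", base_url="https://api-inference.modelscope.cn/v1")

async def chat(model: str, prompt: str) -> str:
    extra_body: dict[str, bool] = {}
    # Only Qwen/Qwen3 models need the vendor-specific flag on non-streaming calls.
    if "qwen3" in model.lower():
        extra_body["enable_thinking"] = False

    resp = await client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        stream=False,
        extra_body=extra_body or None,
    )
    return resp.choices[0].message.content or ""

# Example (placeholder model id):
# asyncio.run(chat("Qwen/Qwen3-32B", "hello"))
```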
Soulter
b711140f26 Feature: improve WebSearch page crawling speed and support Tavily as a search engine (#2427)
* feat: improved websearch speed; support Tavily as a search engine

* fix: improve log formatting and fix index and content display issues in search result handling
2025-08-14 10:52:35 +08:00
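The Tavily support mentioned above is typically wired through the `tavily-python` client. The call below is a minimal sketch under the assumption that `TavilyClient.search` returns a dict with a `results` list; verify against the package's current API before relying on it.

```python
from tavily import TavilyClient  # assumes the tavily-python package is installed

def tavily_search(api_key: str, query: str, max_results: int = 5) -> list[dict]:
    client = TavilyClient(api_key=api_key)
    # Each result entry is expected to carry a title, url and content snippet.
    response = client.search(query, max_results=max_results)
    return response.get("results", [])

# Example (placeholder key):
# for item in tavily_search("tvly-...", "AstrBot"):
#     print(item["title"], item["url"])
```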
Copilot
c57d75e01a feat: add comprehensive GitHub Copilot instructions for AstrBot development (#2426)
* Initial plan

* Initial progress - completed repository exploration and dependency installation

Co-authored-by: Soulter <37870767+Soulter@users.noreply.github.com>

* Complete copilot-instructions.md with comprehensive development guide

Co-authored-by: Soulter <37870767+Soulter@users.noreply.github.com>

* Update copilot-instructions.md

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: Soulter <37870767+Soulter@users.noreply.github.com>
2025-08-13 23:31:28 +08:00
Soulter
1d766001bb Feature: add image captioning provider configuration and support user-defined model modality capabilities (#2422)
* feat: add image captioning provider configuration and support user-defined model modality capabilities

* fix: fix inconsistent session management method parameters in LLMRequestSubStage and simplify the method calls
2025-08-13 19:11:17 +08:00
Soulter
0759a11a85 fix: fix stages being reused across different pipelines and persona-related issues 2025-08-13 13:13:04 +08:00
Soulter
cb749a38ab chore: remove status checking in chat page 2025-08-13 10:45:50 +08:00
Soulter
369eab18ab Refactor: rework configuration file management to support more flexible, session-granular (umo-part based) config isolation (#2328)
* refactor: rework configuration file management to support more flexible, umo-part based config isolation

* Refactor: rework the configuration frontend page and add several new config items (#2331)

* refactor: rework the configuration frontend page and add several new config items

* feat: improve the multi-config-file structure

* perf: system configuration entry point

* fix: normal config item list not displaying

* fix: fix the context reference issue in axios requests
2025-08-13 09:18:49 +08:00
RC-CHN
73edeae013 perf: improve hint rendering and add default temperature options for some provider types (#2321)
* feat: add a manual file upload button to the webchat page (currently images only)

* fix: clear the value after upload so the change event can fire when uploading the same image repeatedly

* perf: clear image preview thumbnails after a webchat message is sent, consistent with text message behavior

* perf: reset the file input value to an empty string for better browser compatibility

* feat: support multi-file selection for the webchat upload button

* fix: revoke blob URLs to prevent memory leaks

* perf: parallelize the image fetching logic in sendMessage

* perf: improve hint rendering and add default temperature options for some provider types
2025-08-12 21:53:06 +08:00
MUKAPP
7d46314dc8 fix: fix files being wrongly treated as nonexistent during registration due to the file:/// prefix (#2325)
fixes #2222
2025-08-12 21:47:31 +08:00
你们的饺子
d5a53a89eb fix: fix the plugin terminate hook not being called correctly (#2352) 2025-08-12 21:41:19 +08:00
dependabot[bot]
a85bc510dd chore(deps): bump actions/checkout in the github-actions group (#2400)
Bumps the github-actions group with 1 update: [actions/checkout](https://github.com/actions/checkout).


Updates `actions/checkout` from 4 to 5
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: github-actions
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-12 15:15:28 +08:00
Soulter
2beea7d218 📦 release: v3.5.24 2025-08-07 20:36:59 +08:00
Soulter
a93cd3dd5f feat: compshare provider 2025-08-07 20:25:45 +08:00
Soulter
6c1f540170 chore: remove MCP marketplace related routes 2025-08-04 19:21:27 +08:00
Soulter
d026a9f009 chore: remove MCP marketplace related logic (#2314) 2025-08-04 17:40:21 +08:00
Soulter
a8e7dadd39 fix: fix the whitespace issue in access tokens 2025-08-04 17:26:22 +08:00
Soulter
2f8d921adf feat: add support to sync mcp servers from ModelScope (#2313) 2025-08-04 17:24:07 +08:00
Soulter
0c6e526f94 fix: platform id 2025-08-04 13:10:37 +08:00
Soulter
b1e3018b6b Improve: introduce a brand-new persona management model and refactor the function tool manager (#2305)
* feat: add persona management

* refactor: rework the function tool manager, introduce ToolSet, and let Persona support binding Tools

* feat: update the Persona tool selection logic to support switching between all tools and specified tools

* feat: update the return type of persona methods in BaseDatabase to allow returning None
2025-08-04 00:56:26 +08:00
Soulter
87f05fce66 Fix: context could become mixed up (shared) when multiple instances of the same messaging platform are deployed (#2298)
* perf: update astrbot event session format, using platform id to ensure uniqueness

fixes: #1000

* fix: update the MessageSession class to use platform_id as the unique identifier and adjust related methods for consistency

* fix: update the MessageSession docs to clarify the platform_id assignment rules and adjust the return types of get_platform and get_platform_inst
2025-08-02 21:38:55 +08:00
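The fix above makes the session identifier include the platform instance id, so two deployments of the same platform adapter cannot share conversation context. The sketch below is a generic, hypothetical composite key; the field names are illustrative and not AstrBot's actual `MessageSession`.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SessionKey:
    platform_id: str   # unique per configured platform instance
    message_type: str  # e.g. "group" or "private"
    session_id: str    # chat/group/user identifier on that platform

    def __str__(self) -> str:
        return f"{self.platform_id}:{self.message_type}:{self.session_id}"

contexts: dict[SessionKey, list[str]] = {}

# Two aiocqhttp instances handling the same group id no longer collide:
a = SessionKey("aiocqhttp_1", "group", "12345")
b = SessionKey("aiocqhttp_2", "group", "12345")
contexts.setdefault(a, []).append("hello from instance 1")
assert a != b and b not in contexts
```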
Soulter
1b37530c96 Merge remote-tracking branch 'origin/master' into releases/4.0.0 2025-08-02 20:14:18 +08:00
Soulter
db4d02c2e2 docs: add 1panel deployment method 2025-08-02 19:01:49 +08:00
Soulter
fd7811402b fix: add support for the description field in metadata to ensure metadata completeness
fixes: #2245
2025-08-02 16:01:10 +08:00
你们的饺子
eb0325e627 fix: fix the completion parsing error caused by empty-content responses from OpenAI-type LLMs. (#2279) 2025-08-02 15:46:11 +08:00
Soulter
842c3c8ea9 Refactor: using sqlmodel (sqlalchemy + pydantic) as ORM framework and switch to async-based sqlite operation (#2294)
* stage

* stage

* refactor: using sqlalchemy as ORM framework, switch to async-based sqlite operation

- using sqlmodel as ORM (based on sqlalchemy and pydantic)
- add Persona, Preference, PlatformMessageHistory table

* fix: conversation

* fix: remove redundant explicit session.commit, and fix some type error

* fix: conversation context issue

* chore: remove comments

* chore: remove exclude_content param
2025-08-02 15:44:00 +08:00
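For the ORM switch above, here is a minimal sketch of async SQLite access with sqlmodel (built on SQLAlchemy and Pydantic). It assumes the `aiosqlite` driver is installed and uses a throwaway table rather than AstrBot's real models.

```python
import asyncio
from sqlalchemy.ext.asyncio import create_async_engine
from sqlmodel import Field, SQLModel, select
from sqlmodel.ext.asyncio.session import AsyncSession

class Preference(SQLModel, table=True):
    id: int | None = Field(default=None, primary_key=True)
    key: str
    value: str

engine = create_async_engine("sqlite+aiosqlite:///example.db")

async def main() -> None:
    # Create tables on the async engine.
    async with engine.begin() as conn:
        await conn.run_sync(SQLModel.metadata.create_all)

    async with AsyncSession(engine) as session:
        session.add(Preference(key="lang", value="zh-CN"))
        await session.commit()

        result = await session.exec(select(Preference).where(Preference.key == "lang"))
        print(result.first())

asyncio.run(main())
```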
IGCrystal
8b4b04ec09 fix(i18n): add missing noTemplates key (#2292) 2025-08-02 14:16:59 +08:00
Larch-C
9f32c9280f chore: update and rename PLUGIN_PUBLISH.md to PLUGIN_PUBLISH.yml (#2289) 2025-08-02 14:16:19 +08:00
yrk111222
4fcd09cfa8 feat: add ModelScope API support (#2230)
* add ModelScope API support

* update
2025-08-02 14:14:08 +08:00
Misaka Mikoto
7a8d65d37d feat: add plugins local cache and remote file MD5 validation (#2211)
* change the default dimension of the OpenAI embedding model to 1024

* add a local cache for the plugin marketplace
- prefer fetching via the API and fall back to the local cache on failure
- update the local cache after every successful fetch
- treat an empty result as a failed fetch and use the local cache
- add a refresh button to the frontend for manually refreshing the local cache

* feat: enhance the plugin marketplace cache mechanism with MD5 validation to ensure data validity

---------

Co-authored-by: Soulter <905617992@qq.com>
2025-08-02 14:03:53 +08:00
Raven95676
23129a9ba2 Merge branch 'releases/3.5.23' 2025-07-26 16:49:38 +08:00
Raven95676
7f791e730b fix: changelogs 2025-07-26 16:49:05 +08:00
Raven95676
f7e296b349 Merge branch 'releases/3.5.23' 2025-07-26 16:34:30 +08:00
Raven95676
712d4acaaa release: v3.5.23 2025-07-26 16:32:06 +08:00
Raven95676
74a5c01f21 refactor: remove code and documentation references related to gewechat 2025-07-26 14:19:17 +08:00
Raven95676
3ba8724d77 Merge branch 'master' into dev 2025-07-26 14:02:05 +08:00
鸦羽
6313a7d8a9 Merge pull request #2221 from Raven95676/fix/axios-dependency
fix: update axios version range for vulnerability fix
2025-07-24 18:34:14 +08:00
Raven95676
432a3f520c fix: update axios version range for vulnerability fix 2025-07-24 18:28:02 +08:00
Soulter
191b3e42d4 feat: implement log history retrieval and improve log streaming handling (#2190) 2025-07-23 23:36:08 +08:00
Misaka Mikoto
a27f05fcb4 chore: change the default vector dimension of the OpenAI embedding model provider to 1024 (#2209) 2025-07-23 23:35:04 +08:00
Soulter
2f33e0b873 chore: remove adapters of wechat personal account 2025-07-23 10:51:42 +08:00
Soulter
f0359467f1 chore: remove adapters of wechat personal account 2025-07-23 10:50:43 +08:00
Soulter
d1db8cf2c8 chore: remove adapters of wechat personal account 2025-07-23 10:48:58 +08:00
Soulter
b1985ed2ce Merge branch 'dev' 2025-07-23 00:38:08 +08:00
Gao Jinzhe
140ddc70e6 feat: use a session lock to guarantee message send order for segmented replies (#2130)
* improve the segmented message sending logic and add a message queue for segmented messages

* remove unnecessary code

* style: code quality

* refactor the message queue mechanism into a session lock mechanism

* perf: narrow the lock scope

* refactor: replace get_lock with async context manager for session locks

* refactor: optimize session lock management with defaultdict

---------

Co-authored-by: Soulter <905617992@qq.com>
Co-authored-by: Raven95676 <Raven95676@gmail.com>
2025-07-23 00:37:29 +08:00
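The session-lock approach above serializes segmented replies per session. A minimal sketch with an `asyncio.Lock` per session stored in a `defaultdict` and used as an async context manager; the session key and send call are placeholders, not AstrBot's actual code.

```python
import asyncio
from collections import defaultdict

session_locks: defaultdict[str, asyncio.Lock] = defaultdict(asyncio.Lock)

async def send_segments(session_id: str, segments: list[str]) -> None:
    # One lock per session: segments of one reply never interleave with another reply.
    async with session_locks[session_id]:
        for seg in segments:
            await asyncio.sleep(0.05)  # stand-in for the platform send call
            print(f"[{session_id}] {seg}")

async def main() -> None:
    await asyncio.gather(
        send_segments("group:1", ["part 1", "part 2", "part 3"]),
        send_segments("group:1", ["another reply"]),
        send_segments("group:2", ["independent session"]),
    )

asyncio.run(main())
```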
Soulter
d7fd616470 style: code quality 2025-07-21 17:04:29 +08:00
Soulter
3ccbef141e perf: extension ui 2025-07-21 15:16:49 +08:00
Soulter
e92fbb0443 feat: add ProxySelector component for GitHub proxy configuration and connection testing (#2185) 2025-07-21 15:05:49 +08:00
Soulter
bd270aed68 fix: handle event construction errors in message reply processing 2025-07-20 22:52:14 +08:00
Soulter
28d7864393 perf: tool use page UI (#2182)
* perf: tool use UI

* fix: update background color of item cards in ToolUsePage
2025-07-20 20:24:03 +08:00
RC-CHN
b5d8173ee3 feat: add a file upload button to the WebChat page (#2136)
* feat: add a manual file upload button to the webchat page (currently images only)

* fix: clear the value after upload so the change event can fire when uploading the same image repeatedly

* perf: clear image preview thumbnails after a webchat message is sent, consistent with text message behavior

* perf: reset the file input value to an empty string for better browser compatibility

* feat: support multi-file selection for the webchat upload button

* fix: revoke blob URLs to prevent memory leaks

* perf: parallelize the image fetching logic in sendMessage
2025-07-20 16:02:28 +08:00
Soulter
17d62a9af7 refactor: mcp server reload mechanism (#2161)
* refactor: mcp server reload mechanism

* fix: wait for client events

* fix: all other mcp servers are terminated when disable selected server

* fix: resolve type hinting issues in MCPClient and FuncCall methods

* perf: optimize mcp server loaders

* perf: improve MCP client connection testing

* perf: improve error message

* perf: clean code

* perf: increase default timeout for MCP connection and reset dialog message on close

---------

Co-authored-by: Raven95676 <Raven95676@gmail.com>
2025-07-20 15:53:13 +08:00
Soulter
d89fb863ed fix: improve logging and error message details in LLMRequestSubStage 2025-07-18 16:13:27 +08:00
Soulter
a21ad77820 Merge pull request #2146 from Raven95676/fix/mcp
fix: fix sustained 100% CPU usage caused by MCP
2025-07-18 13:04:50 +08:00
Raven95676
f86c8e8cab perf: ensure MCP client termination in cleanup process 2025-07-17 23:17:23 +08:00
Raven95676
cb12cbdd3d fix: managing MCP connections with AsyncExitStack 2025-07-16 23:44:51 +08:00
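Regarding the AsyncExitStack fix above: keeping every MCP connection's context managers on a single `contextlib.AsyncExitStack` lets them be entered and torn down in the same task. The sketch below uses a dummy connection context manager in place of the real MCP client, so the class and names are illustrative only.

```python
import asyncio
from contextlib import AsyncExitStack, asynccontextmanager

@asynccontextmanager
async def open_connection(name: str):
    # Stand-in for an MCP transport/session context manager.
    print(f"connect {name}")
    try:
        yield f"client-{name}"
    finally:
        print(f"close {name}")

class MCPClientHolder:
    def __init__(self) -> None:
        self._stack = AsyncExitStack()

    async def connect(self, names: list[str]) -> list[str]:
        clients: list[str] = []
        for name in names:
            # Register each context manager on the stack instead of nesting `async with`.
            clients.append(await self._stack.enter_async_context(open_connection(name)))
        return clients

    async def terminate(self) -> None:
        # Unwinds every registered context manager in reverse order.
        await self._stack.aclose()

async def main() -> None:
    holder = MCPClientHolder()
    await holder.connect(["filesystem", "web"])
    await holder.terminate()

asyncio.run(main())
```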
Soulter
6661fa996c fix: audio block does not display 2025-07-14 22:20:03 +08:00
Soulter
c19bca798b fix: xfyun model tool use error workaround
fixes: #1359
2025-07-14 22:07:33 +08:00
Soulter
8f98b411db Merge pull request #2129 from AstrBotDevs/perf-refine-webui-chatpage
Improve: WebUI ChatPage markdown code block background
2025-07-14 21:49:18 +08:00
Soulter
a8aa03847e feat: enhance theme customization with new background properties and markdown styling 2025-07-14 21:47:25 +08:00
Soulter
1bfd747cc6 perf: add system_prompt to payload_vars in dify text_chat method 2025-07-14 11:00:13 +08:00
Soulter
ae06d945a7 Merge pull request #2054 from RC-CHN/master
Feature: Add provider_type field for ProviderMetadata and improve provider availability test
2025-07-13 17:38:22 +08:00
Soulter
9f41d5f34d Merge remote-tracking branch 'origin/master' into RC-CHN/master 2025-07-13 17:35:53 +08:00
Soulter
ef61c52908 fix: remove non-existent Response field 2025-07-13 17:33:13 +08:00
Soulter
d8842ef274 perf: code quality 2025-07-13 17:27:40 +08:00
Soulter
c88fdaf353 Merge pull request #1949 from advent259141/Astrbot_session_manage
[Feature] support managing sessions in the WebUI
2025-07-13 17:23:52 +08:00
Soulter
af295da871 chore: remove /mcp command 2025-07-13 17:11:02 +08:00
Soulter
083235a2fe feat: enhance session management page with tooltips and layout adjustments 2025-07-13 17:06:15 +08:00
Soulter
2a3a5f7eb2 perf: refine session management page ui 2025-07-13 16:57:36 +08:00
Soulter
77c48f280f fix: session management paginator error 2025-07-13 16:36:25 +08:00
Soulter
0ee1eb2f9f chore: remove useless file 2025-07-13 16:30:00 +08:00
Soulter
c2b20365bb Merge pull request #2097 from SheffeyG/fix-status-checking
Fix: add status checking for embedding model providers
2025-07-13 16:19:37 +08:00
Soulter
cfdc7e4452 fix: add debug logging for provider request handling in LLMRequestSubStage
fixes: #2104
2025-07-13 16:12:48 +08:00
Soulter
2363f61aa9 chore: remove 'obvious_hint' fields from configuration metadata and remove some deprecated config 2025-07-13 16:03:47 +08:00
Soulter
557ac6f9fa Merge pull request #2112 from AstrBotDevs/perf-provider-logo
Improve: WebUI provider logo display
2025-07-13 15:34:19 +08:00
Soulter
a49b871cf9 fix: update Azure provider icon URL in getProviderIcon method 2025-07-13 15:33:47 +08:00
Soulter
a0d6b3efba perf: improve provider logo display in webui 2025-07-13 15:27:53 +08:00
Gao Jinzhe
6cabf07bc0 Merge branch 'AstrBotDevs:master' into Astrbot_session_manage 2025-07-13 00:23:29 +08:00
advent259141
a15444ee8c remove session-level MCP enable/disable, add batch setting options, and fix related issues 2025-07-13 00:15:21 +08:00
Soulter
ceb5f5669e fix: update active reply bot prefix in logging for clarity 2025-07-13 00:12:31 +08:00
Gao Jinzhe
25b75e05e4 Merge branch 'AstrBotDevs:master' into Astrbot_session_manage 2025-07-12 22:25:20 +08:00
sheffey
4d214bb5c1 check general numbers type instead 2025-07-11 18:36:47 +08:00
sheffey
7cbaed8c6c fix: add status checking for embedding model providers 2025-07-11 18:36:40 +08:00
Soulter
2915fdf665 release: v3.5.22 2025-07-11 12:29:26 +08:00
Soulter
a66c385b08 fix: deadlock when docker is not available 2025-07-11 12:27:49 +08:00
Raven95676
4dace7c5d8 chore: format code 2025-07-11 11:23:53 +08:00
Soulter
8ebf087dbf chore: optimize codes 2025-07-10 23:28:00 +08:00
Soulter
2fa8bda5bb chore: ruff lint 2025-07-10 23:23:29 +08:00
advent259141
7cfbc4ab8f add a switch for enabling/disabling at the whole-session level 2025-07-09 11:58:52 +08:00
Gao Jinzhe
0f692b1608 Merge branch 'master' into Astrbot_session_manage 2025-07-09 10:13:51 +08:00
Ruochen
2cc1eb1abc feat: implement availability checks for speech_to_text providers 2025-07-08 21:55:31 +08:00
RC-CHN
90dbcbb4e2 Merge branch 'AstrBotDevs:master' into master 2025-07-08 21:28:50 +08:00
Ruochen
66503d58be feat: implement availability tests for text_to_speech providers 2025-07-08 17:52:22 +08:00
Ruochen
8e10f0ce2b feat: implement availability checks for embedding providers 2025-07-08 16:51:57 +08:00
Ruochen
c44f085b47 fix: temporarily skip tests for non-text-generation providers 2025-07-08 16:32:39 +08:00
RC-CHN
a35f36eeaf Merge branch 'AstrBotDevs:master' into master 2025-07-08 15:34:19 +08:00
Ruochen
14564c392a feat: add a provider_type field to the meta method 2025-07-08 15:33:02 +08:00
advent259141
28a87351f1 add support for renaming sessions 2025-07-01 21:41:19 +08:00
advent259141
dcd7dcbbdf resolve conflict 2025-07-01 17:24:56 +08:00
Gao Jinzhe
1538759ba7 Merge branch 'master' into Astrbot_session_manage 2025-07-01 17:19:30 +08:00
advent259141
ec5d71d0e1 fix some duplicated code and remove the unnecessary session-level LLM enable/disable status check. 2025-06-29 10:02:04 +08:00
advent259141
d121d08d05 fix the overall check flow based on my own understanding to prevent hook issues 2025-06-29 09:57:31 +08:00
Gao Jinzhe
be08f4a558 Merge branch 'AstrBotDevs:master' into Astrbot_session_manage 2025-06-29 09:11:25 +08:00
Soulter
4df8606ab6 style: code quality 2025-06-28 20:08:57 +08:00
Soulter
71442d26ec chore: remove unnecessary MCP session control 2025-06-28 19:58:36 +08:00
advent259141
4f5528869c Merge branch 'Astrbot_session_manage' of https://github.com/advent259141/AstrBot into Astrbot_session_manage 2025-06-28 17:00:00 +08:00
advent259141
f16feff17b selectively pass func_tool based on the session's MCP switch state
move the import to a different location
deleted:    astrbot/core/star/session_tts_manager.py
restore the changes that were overwritten
2025-06-28 16:59:00 +08:00
Gao Jinzhe
d8aae538cd Merge branch 'AstrBotDevs:master' into Astrbot_session_manage 2025-06-28 14:55:38 +08:00
advent259141
31670e75e5 Merge branch 'Astrbot_session_manage' of https://github.com/advent259141/AstrBot into Astrbot_session_manage 2025-06-27 18:47:25 +08:00
advent259141
ed6011a2be modified: dashboard/src/i18n/loader.ts
modified:   dashboard/src/i18n/locales/en-US/core/navigation.json
add the English session management page
	modified:   dashboard/src/i18n/locales/zh-CN/core/navigation.json
add the Chinese session management page
	modified:   dashboard/src/i18n/translations.ts
	modified:   dashboard/src/layouts/full/vertical-sidebar/sidebarItem.ts
	modified:   dashboard/src/views/SessionManagementPage.vue
add i18n support for the session management page
2025-06-27 18:46:02 +08:00
Gao Jinzhe
cdded38ade Merge branch 'AstrBotDevs:master' into Astrbot_session_manage 2025-06-27 17:10:08 +08:00
advent259141
f536f24833 astrbot/core/pipeline/process_stage/method/llm_request.py
astrbot/core/pipeline/result_decorate/stage.py
   astrbot/core/star/session_llm_manager.py
   astrbot/core/star/session_tts_manager.py
   astrbot/dashboard/routes/session_management.py
   astrbot/dashboard/server.py
   dashboard/src/views/SessionManagementPage.vue
   packages/astrbot/main.py
2025-06-27 17:08:05 +08:00
Gao Jinzhe
646b18d910 Merge branch 'AstrBotDevs:master' into master 2025-06-27 12:26:15 +08:00
Gao Jinzhe
e24225c828 Merge branch 'master' into master 2025-06-23 15:21:08 +08:00
Gao Jinzhe
50a296de20 Merge branch 'AstrBotDevs:master' into master 2025-06-20 14:39:57 +08:00
Gao Jinzhe
c79e38e044 Merge branch 'AstrBotDevs:master' into master 2025-06-17 20:29:32 +08:00
Gao Jinzhe
dae745d925 Update server.py 2025-06-16 10:03:18 +08:00
Gao Jinzhe
791db65526 resolve conflict with master branch 2025-06-16 09:50:35 +08:00
Gao Jinzhe
02e2e617f5 Merge branch 'AstrBotDevs:master' into master 2025-06-15 22:04:06 +08:00
advent259141
bfc8024119 modified: astrbot/core/pipeline/process_stage/method/llm_request.py
new file:   astrbot/core/star/session_llm_manager.py
	modified:   astrbot/dashboard/routes/session_management.py
	modified:   dashboard/src/views/SessionManagementPage.vue

 add per-session LLM enable/disable management and plugin enable/disable management
2025-06-14 03:42:21 +08:00
Gao Jinzhe
f26cf6ed6f Merge branch 'AstrBotDevs:master' into master 2025-06-14 03:03:41 +08:00
advent259141
f2be55bd8e Merge branch 'master' of https://github.com/advent259141/AstrBot 2025-06-13 06:20:49 +08:00
advent259141
d241dd17ca Merge branch 'master' of https://github.com/advent259141/AstrBot 2025-06-13 06:20:09 +08:00
advent259141
cecafdfe6c Merge branch 'master' of https://github.com/advent259141/AstrBot 2025-06-13 03:54:35 +08:00
Soulter
6fecfd1a0e Merge pull request #1800 from AstrBotDevs/feat-weixinkefu-record
feat: support sending and receiving voice messages for WeChat Customer Service (微信客服)
2025-06-13 03:52:15 +08:00
523 changed files with 54953 additions and 21145 deletions

View File

@@ -1,9 +1,9 @@
# Covers JetBrains IDEs: IntelliJ, RubyMine, PhpStorm, AppCode, PyCharm, CLion, Android Studio and WebStorm
# Reference: https://intellij-support.jetbrains.com/hc/en-us/articles/206544839
# github acions
# github actions
.git
.github/
.*ignore
.git/
# User-specific stuff
.idea/
# Byte-compiled / optimized / DLL files
@@ -15,10 +15,10 @@ env/
venv*/
ENV/
.conda/
README*.md
dashboard/
data/
changelogs/
tests/
.ruff_cache/
.astrbot
astrbot.lock

View File

@@ -1,31 +0,0 @@
---
name: '🥳 发布插件'
title: "[Plugin] 插件名"
about: 提交插件到插件市场
labels: [ "plugin-publish" ]
assignees: ''
---
欢迎发布插件到插件市场!
## 插件基本信息
请将插件信息填写到下方的 Json 代码块中。`tags`(插件标签)和 `social_link`(社交链接)选填。
```json
{
"name": "插件名",
"desc": "插件介绍",
"author": "作者名",
"repo": "插件仓库链接",
"tags": [],
"social_link": ""
}
```
## 检查
- [ ] 我的插件经过完整的测试
- [ ] 我的插件不包含恶意代码
- [ ] 我已阅读并同意遵守该项目的 [行为准则](https://docs.github.com/zh/site-policy/github-terms/github-community-code-of-conduct)。

View File

@@ -0,0 +1,57 @@
name: 🥳 发布插件
description: 提交插件到插件市场
title: "[Plugin] 插件名"
labels: ["plugin-publish"]
assignees: []
body:
- type: markdown
attributes:
value: |
欢迎发布插件到插件市场!
- type: markdown
attributes:
value: |
## 插件基本信息
请将插件信息填写到下方的 JSON 代码块中。其中 `tags`(插件标签)和 `social_link`(社交链接)选填。
不熟悉 JSON ?可以从 [此站](https://plugins.astrbot.app) 右下角提交。
- type: textarea
id: plugin-info
attributes:
label: 插件信息
description: 请在下方代码块中填写您的插件信息确保反引号包裹了JSON
value: |
```json
{
"name": "插件名,请以 astrbot_plugin_ 开头",
"display_name": "用于展示的插件名,方便人类阅读",
"desc": "插件的简短介绍",
"author": "作者名",
"repo": "插件仓库链接",
"tags": [],
"social_link": "",
}
```
validations:
required: true
- type: markdown
attributes:
value: |
## 检查
- type: checkboxes
id: checks
attributes:
label: 插件检查清单
description: 请确认以下所有项目
options:
- label: 我的插件经过完整的测试
required: true
- label: 我的插件不包含恶意代码
required: true
- label: 我已阅读并同意遵守该项目的 [行为准则](https://docs.github.com/zh/site-policy/github-terms/github-community-code-of-conduct)。
required: true

View File

@@ -1,46 +1,44 @@
name: '🐛 报告 Bug'
name: '🐛 Report Bug / 报告 Bug'
title: '[Bug]'
description: 提交报告帮助我们改进。
description: Submit bug report to help us improve. / 提交报告帮助我们改进。
labels: [ 'bug' ]
body:
- type: markdown
attributes:
value: |
感谢您抽出时间报告问题!请准确解释您的问题。如果可能,请提供一个可复现的片段(这有助于更快地解决问题)。
Thank you for taking the time to report this issue! Please describe your problem accurately. If possible, please provide a reproducible snippet (this will help resolve the issue more quickly). Please note that issues that are not detailed or have no logs will be closed immediately. Thank you for your understanding. / 感谢您抽出时间报告问题!请准确解释您的问题。如果可能,请提供一个可复现的片段(这有助于更快地解决问题)。请注意,不详细 / 没有日志的 issue 会被直接关闭,谢谢理解。
- type: textarea
attributes:
label: 发生了什么
description: 描述你遇到的异常
label: What happened / 发生了什么
description: Description
placeholder: >
一个清晰且具体的描述这个异常是什么
Please provide a clear and specific description of what this exception is. Please note that issues that are not detailed or have no logs will be closed immediately. Thank you for your understanding. / 一个清晰且具体的描述这个异常是什么。请注意,不详细 / 没有日志的 issue 会被直接关闭,谢谢理解
validations:
required: true
- type: textarea
attributes:
label: 如何复现?
label: Reproduce / 如何复现?
description: >
复现该问题的步骤
The steps to reproduce the issue. / 复现该问题的步骤
placeholder: >
: 1. 打开 '...'
Example: 1. Open '...'
validations:
required: true
- type: textarea
attributes:
label: AstrBot 版本、部署方式(如 Windows Docker Desktop 部署)、使用的提供商、使用的消息平台适配器
description: >
请提供您的 AstrBot 版本和部署方式。
label: AstrBot version, deployment method (e.g., Windows Docker Desktop deployment), provider used, and messaging platform used. / AstrBot 版本、部署方式(如 Windows Docker Desktop 部署)、使用的提供商、使用的消息平台适配器
placeholder: >
如: 3.1.8 Docker, 3.1.7 Windows启动器
Example: 4.5.7 Docker, 3.1.7 Windows Launcher
validations:
required: true
- type: dropdown
attributes:
label: 操作系统
label: OS
description: |
你在哪个操作系统上遇到了这个问题?
On which operating system did you encounter this problem? / 你在哪个操作系统上遇到了这个问题?
multiple: false
options:
- 'Windows'
@@ -53,30 +51,30 @@ body:
- type: textarea
attributes:
label: 报错日志
label: Logs / 报错日志
description: >
如报错日志、截图等。请提供完整的 Debug 级别的日志,不要介意它很长!
Please provide complete Debug-level logs, such as error logs and screenshots. Don't worry if they're long! Please note that issues with insufficient details or no logs will be closed immediately. Thank you for your understanding. / 如报错日志、截图等。请提供完整的 Debug 级别的日志,不要介意它很长!请注意,不详细 / 没有日志的 issue 会被直接关闭,谢谢理解。
placeholder: >
请提供完整的报错日志或截图。
Please provide a complete error log or screenshot. / 请提供完整的报错日志或截图。
validations:
required: true
- type: checkboxes
attributes:
label: 你愿意提交 PR 吗?
label: Are you willing to submit a PR? / 你愿意提交 PR 吗?
description: >
这不是必需的,但我们很乐意在贡献过程中为您提供指导特别是如果你已经很好地理解了如何实现修复。
This is not required, but we would be happy to provide guidance during the contribution process, especially if you already have a good understanding of how to implement the fix. / 这不是必需的,但我们很乐意在贡献过程中为您提供指导特别是如果你已经很好地理解了如何实现修复。
options:
- label: 是的,我愿意提交 PR!
- label: Yes!
- type: checkboxes
attributes:
label: Code of Conduct
options:
- label: >
我已阅读并同意遵守该项目的 [行为准则](https://docs.github.com/zh/site-policy/github-terms/github-community-code-of-conduct)。
I have read and agree to abide by the project's [Code of Conduct](https://docs.github.com/zh/site-policy/github-terms/github-community-code-of-conduct)。
required: true
- type: markdown
attributes:
value: "感谢您填写我们的表单!"
value: "Thank you for filling out our form! / 感谢您填写我们的表单!"

View File

@@ -1,19 +1,27 @@
<!-- 如果有的话,指定这个 PR 要解决的 ISSUE -->
解决了 #XYZ
<!--Please describe the motivation for this change: What problem does it solve? (e.g., Fixes XX issue, adds YY feature)-->
<!--请描述此项更改的动机:它解决了什么问题?(例如:修复了 XX issue添加了 YY 功能)-->
### Motivation
### Modifications / 改动点
<!--解释为什么要改动-->
<!--请总结你的改动:哪些核心文件被修改了?实现了什么功能?-->
<!--Please summarize your changes: What core files were modified? What functionality was implemented?-->
### Modifications
- [x] This is NOT a breaking change. / 这不是一个破坏性变更。
<!-- If your changes is a breaking change, please uncheck the checkbox above -->
<!--简单解释你的改动-->
### Screenshots or Test Results / 运行截图或测试结果
### Check
<!--Please paste screenshots, GIFs, or test logs here as evidence of executing the "Verification Steps" to prove this change is effective.-->
<!--请粘贴截图、GIF 或测试日志,作为执行“验证步骤”的证据,证明此改动有效。-->
<!--如果分支被合并,您的代码将服务于数万名用户!在提交前,请核查一下几点内容-->
---
- [ ] 😊 我的 Commit Message 符合良好的[规范](https://www.conventionalcommits.org/en/v1.0.0/#summary)
- [ ] 👀 我的更改经过良好的测试
- [ ] 🤓 我确保没有引入新依赖库,或者引入了新依赖库的同时将其添加到了 `requirements.txt``pyproject.toml` 文件相应位置。
- [ ] 😮 我的更改没有引入恶意代码
### Checklist / 检查清单
<!--If merged, your code will serve tens of thousands of users! Please double-check the following items before submitting.-->
<!--如果分支被合并,您的代码将服务于数万名用户!在提交前,请核查一下几点内容。-->
- [ ] 😊 如果 PR 中有新加入的功能,已经通过 Issue / 邮件等方式和作者讨论过。/ If there are new features added in the PR, I have discussed it with the authors through issues/emails, etc.
- [ ] 👀 我的更改经过了良好的测试,**并已在上方提供了“验证步骤”和“运行截图”**。/ My changes have been well-tested, **and "Verification Steps" and "Screenshots" have been provided above**.
- [ ] 🤓 我确保没有引入新依赖库,或者引入了新依赖库的同时将其添加到了 `requirements.txt``pyproject.toml` 文件相应位置。/ I have ensured that no new dependencies are introduced, OR if new dependencies are introduced, they have been added to the appropriate locations in `requirements.txt` and `pyproject.toml`.
- [ ] 😮 我的更改没有引入恶意代码。/ My changes do not introduce malicious code.

38
.github/auto_assign.yml vendored Normal file
View File

@@ -0,0 +1,38 @@
# Set to true to add reviewers to pull requests
addReviewers: true
# Set to true to add assignees to pull requests
addAssignees: false
# A list of reviewers to be added to pull requests (GitHub user name)
reviewers:
- Soulter
- Raven95676
- Larch-C
- anka-afk
- advent259141
- Fridemn
- LIghtJUNction
# - zouyonghe
# A number of reviewers added to the pull request
# Set 0 to add all the reviewers (default: 0)
numberOfReviewers: 2
# A list of assignees, overrides reviewers if set
# assignees:
# - assigneeA
# A number of assignees to add to the pull request
# Set to 0 to add all of the assignees.
# Uses numberOfReviewers if unset.
# numberOfAssignees: 2
# A list of keywords to be skipped the process that add reviewers if pull requests include it
skipKeywords:
- wip
- draft
# A list of users to be skipped by both the add reviewers and add assignees processes
# skipUsers:
# - dependabot[bot]

63
.github/copilot-instructions.md vendored Normal file
View File

@@ -0,0 +1,63 @@
# AstrBot Development Instructions
AstrBot is a multi-platform LLM chatbot and development framework written in Python with a Vue.js dashboard. It supports multiple messaging platforms (QQ, Telegram, Discord, etc.) and various LLM providers (OpenAI, Anthropic, Google Gemini, etc.).
Always reference these instructions first and fallback to search or bash commands only when you encounter unexpected information that does not match the info here.
## Working Effectively
### Bootstrap and Install Dependencies
- **Python 3.10+ required** - Check `.python-version` file
- Install UV package manager: `pip install uv`
- Install project dependencies: `uv sync` -- takes 6-7 minutes. NEVER CANCEL. Set timeout to 10+ minutes.
- Create required directories: `mkdir -p data/plugins data/config data/temp`
### Running the Application
- Run main application: `uv run main.py` -- starts in ~3 seconds
- Application creates WebUI on http://localhost:6185 (default credentials: `astrbot`/`astrbot`)
- Application loads plugins automatically from `packages/` and `data/plugins/` directories
### Dashboard Build (Vue.js/Node.js)
- **Prerequisites**: Node.js 20+ and npm 10+ required
- Navigate to dashboard: `cd dashboard`
- Install dashboard dependencies: `npm install` -- takes 2-3 minutes. NEVER CANCEL. Set timeout to 5+ minutes.
- Build dashboard: `npm run build` -- takes 25-30 seconds. NEVER CANCEL.
- Dashboard creates optimized production build in `dashboard/dist/`
### Testing
- Do not generate test files for now.
### Code Quality and Linting
- Install ruff linter: `uv add --dev ruff`
- Check code style: `uv run ruff check .` -- takes <1 second
- Check formatting: `uv run ruff format --check .` -- takes <1 second
- Fix formatting: `uv run ruff format .`
- **ALWAYS** run `uv run ruff check .` and `uv run ruff format .` before committing changes
### Plugin Development
- Plugins load from `packages/` (built-in) and `data/plugins/` (user-installed)
- Plugin system supports function tools and message handlers
- Key plugins: python_interpreter, web_searcher, astrbot, reminder, session_controller
### Common Issues and Workarounds
- **Dashboard download fails**: Known issue with "division by zero" error - application still works
- **Import errors in tests**: Ensure `uv run` is used to run tests in proper environment
=- **Build timeouts**: Always set appropriate timeouts (10+ minutes for uv sync, 5+ minutes for npm install)
## CI/CD Integration
- GitHub Actions workflows in `.github/workflows/`
- Docker builds supported via `Dockerfile`
- Pre-commit hooks enforce ruff formatting and linting
## Docker Support
- Primary deployment method: `docker run soulter/astrbot:latest`
- Compose file available: `compose.yml`
- Exposes ports: 6185 (WebUI), 6195 (WeChat), 6199 (QQ), etc.
- Volume mount required: `./data:/AstrBot/data`
## Multi-language Support
- Documentation in Chinese (README.md), English (README_en.md), Japanese (README_ja.md)
- UI supports internationalization
- Default language is Chinese
Remember: This is a production chatbot framework with real users. Always test thoroughly and ensure changes don't break existing functionality.

View File

@@ -13,7 +13,7 @@ jobs:
contents: write
steps:
- name: Checkout repository
uses: actions/checkout@v4
uses: actions/checkout@v6
- name: Dashboard Build
run: |
@@ -70,10 +70,10 @@ jobs:
needs: build-and-publish-to-github-release
steps:
- name: Checkout repository
uses: actions/checkout@v4
uses: actions/checkout@v6
- name: Set up Python
uses: actions/setup-python@v5
uses: actions/setup-python@v6
with:
python-version: '3.10'

34
.github/workflows/code-format.yml vendored Normal file
View File

@@ -0,0 +1,34 @@
name: Code Format Check
on:
pull_request:
branches: [ master ]
push:
branches: [ master ]
jobs:
format-check:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v6
- name: Set up Python
uses: actions/setup-python@v6
with:
python-version: '3.10'
- name: Install UV
run: pip install uv
- name: Install dependencies
run: uv sync
- name: Check code formatting with ruff
run: |
uv run ruff format --check .
- name: Check code style with ruff
run: |
uv run ruff check .

View File

@@ -56,11 +56,11 @@ jobs:
# your codebase is analyzed, see https://docs.github.com/en/code-security/code-scanning/creating-an-advanced-setup-for-code-scanning/codeql-code-scanning-for-compiled-languages
steps:
- name: Checkout repository
uses: actions/checkout@v4
uses: actions/checkout@v6
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@v3
uses: github/codeql-action/init@v4
with:
languages: ${{ matrix.language }}
build-mode: ${{ matrix.build-mode }}
@@ -88,6 +88,6 @@ jobs:
exit 1
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v3
uses: github/codeql-action/analyze@v4
with:
category: "/language:${{matrix.language}}"

View File

@@ -17,12 +17,12 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v6
with:
fetch-depth: 0
- name: Set up Python
uses: actions/setup-python@v5
uses: actions/setup-python@v6
- name: Install dependencies
run: |

View File

@@ -11,13 +11,20 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
uses: actions/checkout@v6
- name: Setup Node.js
uses: actions/setup-node@v6
with:
node-version: 'latest'
- name: npm install, build
run: |
cd dashboard
npm install
npm run build
npm install pnpm -g
pnpm install
pnpm i --save-dev @types/markdown-it
pnpm run build
- name: Inject Commit SHA
id: get_sha
@@ -25,11 +32,24 @@ jobs:
echo "COMMIT_SHA=$(git rev-parse HEAD)" >> $GITHUB_ENV
mkdir -p dashboard/dist/assets
echo $COMMIT_SHA > dashboard/dist/assets/version
cd dashboard
zip -r dist.zip dist
- name: Archive production artifacts
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v5
with:
name: dist-without-markdown
path: |
dashboard/dist
!dist/**/*.md
- name: Create GitHub Release
if: github.event_name == 'push'
uses: ncipollo/release-action@v1
with:
tag: release-${{ github.sha }}
owner: AstrBotDevs
repo: astrbot-release-harbour
body: "Automated release from commit ${{ github.sha }}"
token: ${{ secrets.ASTRBOT_HARBOUR_TOKEN }}
artifacts: "dashboard/dist.zip"

View File

@@ -3,29 +3,65 @@ name: Docker Image CI/CD
on:
push:
tags:
- 'v*'
- "v*"
schedule:
# Run at 00:00 UTC every day
- cron: "0 0 * * *"
workflow_dispatch:
jobs:
publish-docker:
build-nightly-image:
if: github.event_name == 'schedule'
runs-on: ubuntu-latest
env:
DOCKER_HUB_USERNAME: ${{ secrets.DOCKER_HUB_USERNAME }}
GHCR_OWNER: soulter
HAS_GHCR_TOKEN: ${{ secrets.GHCR_GITHUB_TOKEN != '' }}
steps:
- name: Pull The Codes
uses: actions/checkout@v4
- name: Checkout
uses: actions/checkout@v6
with:
fetch-depth: 0 # Must be 0 so we can fetch tags
fetch-depth: 1
fetch-tag: true
- name: Get latest tag (only on manual trigger)
id: get-latest-tag
if: github.event_name == 'workflow_dispatch'
- name: Check for new commits today
if: github.event_name == 'schedule'
id: check-commits
run: |
tag=$(git describe --tags --abbrev=0)
echo "latest_tag=$tag" >> $GITHUB_OUTPUT
# Get commits from the last 24 hours
commits=$(git log --since="24 hours ago" --oneline)
if [ -z "$commits" ]; then
echo "No commits in the last 24 hours, skipping build"
echo "has_commits=false" >> $GITHUB_OUTPUT
else
echo "Found commits in the last 24 hours:"
echo "$commits"
echo "has_commits=true" >> $GITHUB_OUTPUT
fi
- name: Checkout to latest tag (only on manual trigger)
if: github.event_name == 'workflow_dispatch'
run: git checkout ${{ steps.get-latest-tag.outputs.latest_tag }}
- name: Exit if no commits
if: github.event_name == 'schedule' && steps.check-commits.outputs.has_commits == 'false'
run: exit 0
- name: Build Dashboard
run: |
cd dashboard
npm install
npm run build
mkdir -p dist/assets
echo $(git rev-parse HEAD) > dist/assets/version
cd ..
mkdir -p data
cp -r dashboard/dist data/
- name: Determine test image tags
id: test-meta
run: |
short_sha=$(echo "${GITHUB_SHA}" | cut -c1-12)
build_date=$(date +%Y%m%d)
echo "short_sha=$short_sha" >> $GITHUB_OUTPUT
echo "build_date=$build_date" >> $GITHUB_OUTPUT
- name: Set QEMU
uses: docker/setup-qemu-action@v3
@@ -40,23 +76,123 @@ jobs:
password: ${{ secrets.DOCKER_HUB_PASSWORD }}
- name: Login to GitHub Container Registry
if: env.HAS_GHCR_TOKEN == 'true'
uses: docker/login-action@v3
with:
registry: ghcr.io
username: Soulter
username: ${{ env.GHCR_OWNER }}
password: ${{ secrets.GHCR_GITHUB_TOKEN }}
- name: Build and Push Docker to DockerHub and Github GHCR
- name: Build nightly image tags list
id: test-tags
run: |
TAGS="${{ env.DOCKER_HUB_USERNAME }}/astrbot:nightly-latest
${{ env.DOCKER_HUB_USERNAME }}/astrbot:nightly-${{ steps.test-meta.outputs.build_date }}-${{ steps.test-meta.outputs.short_sha }}"
if [ "${{ env.HAS_GHCR_TOKEN }}" = "true" ]; then
TAGS="$TAGS
ghcr.io/${{ env.GHCR_OWNER }}/astrbot:nightly-latest
ghcr.io/${{ env.GHCR_OWNER }}/astrbot:nightly-${{ steps.test-meta.outputs.build_date }}-${{ steps.test-meta.outputs.short_sha }}"
fi
echo "tags<<EOF" >> $GITHUB_OUTPUT
echo "$TAGS" >> $GITHUB_OUTPUT
echo "EOF" >> $GITHUB_OUTPUT
- name: Build and Push Nightly Image
uses: docker/build-push-action@v6
with:
context: .
platforms: linux/amd64,linux/arm64
push: true
tags: ${{ steps.test-tags.outputs.tags }}
- name: Post build notifications
run: echo "Test Docker image has been built and pushed successfully"
build-release-image:
if: github.event_name == 'workflow_dispatch' || (github.event_name == 'push' && startsWith(github.ref, 'refs/tags/v'))
runs-on: ubuntu-latest
env:
DOCKER_HUB_USERNAME: ${{ secrets.DOCKER_HUB_USERNAME }}
GHCR_OWNER: soulter
HAS_GHCR_TOKEN: ${{ secrets.GHCR_GITHUB_TOKEN != '' }}
steps:
- name: Checkout
uses: actions/checkout@v6
with:
fetch-depth: 1
fetch-tag: true
- name: Get latest tag (only on manual trigger)
id: get-latest-tag
if: github.event_name == 'workflow_dispatch'
run: |
tag=$(git describe --tags --abbrev=0)
echo "latest_tag=$tag" >> $GITHUB_OUTPUT
- name: Checkout to latest tag (only on manual trigger)
if: github.event_name == 'workflow_dispatch'
run: git checkout ${{ steps.get-latest-tag.outputs.latest_tag }}
- name: Compute release metadata
id: release-meta
run: |
if [ "${{ github.event_name }}" = "workflow_dispatch" ]; then
version="${{ steps.get-latest-tag.outputs.latest_tag }}"
else
version="${GITHUB_REF#refs/tags/}"
fi
if [[ "$version" == *"beta"* ]] || [[ "$version" == *"alpha"* ]]; then
echo "is_prerelease=true" >> $GITHUB_OUTPUT
echo "Version $version marked as pre-release"
else
echo "is_prerelease=false" >> $GITHUB_OUTPUT
echo "Version $version marked as stable"
fi
echo "version=$version" >> $GITHUB_OUTPUT
- name: Build Dashboard
run: |
cd dashboard
npm install
npm run build
mkdir -p dist/assets
echo $(git rev-parse HEAD) > dist/assets/version
cd ..
mkdir -p data
cp -r dashboard/dist data/
- name: Set QEMU
uses: docker/setup-qemu-action@v3
- name: Set Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Log in to DockerHub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKER_HUB_USERNAME }}
password: ${{ secrets.DOCKER_HUB_PASSWORD }}
- name: Login to GitHub Container Registry
if: env.HAS_GHCR_TOKEN == 'true'
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ env.GHCR_OWNER }}
password: ${{ secrets.GHCR_GITHUB_TOKEN }}
- name: Build and Push Release Image
uses: docker/build-push-action@v6
with:
context: .
platforms: linux/amd64,linux/arm64
push: true
tags: |
${{ secrets.DOCKER_HUB_USERNAME }}/astrbot:latest
${{ secrets.DOCKER_HUB_USERNAME }}/astrbot:${{ github.event_name == 'workflow_dispatch' && steps.get-latest-tag.outputs.latest_tag || github.ref_name }}
ghcr.io/soulter/astrbot:latest
ghcr.io/soulter/astrbot:${{ github.event_name == 'workflow_dispatch' && steps.get-latest-tag.outputs.latest_tag || github.ref_name }}
${{ steps.release-meta.outputs.is_prerelease == 'false' && format('{0}/astrbot:latest', env.DOCKER_HUB_USERNAME) || '' }}
${{ steps.release-meta.outputs.is_prerelease == 'false' && env.HAS_GHCR_TOKEN == 'true' && format('ghcr.io/{0}/astrbot:latest', env.GHCR_OWNER) || '' }}
${{ format('{0}/astrbot:{1}', env.DOCKER_HUB_USERNAME, steps.release-meta.outputs.version) }}
${{ env.HAS_GHCR_TOKEN == 'true' && format('ghcr.io/{0}/astrbot:{1}', env.GHCR_OWNER, steps.release-meta.outputs.version) || '' }}
- name: Post build notifications
run: echo "Docker image has been built and pushed successfully"
run: echo "Release Docker image has been built and pushed successfully"

View File

@@ -18,7 +18,7 @@ jobs:
pull-requests: write
steps:
- uses: actions/stale@v9
- uses: actions/stale@v10
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
stale-issue-message: 'Stale issue message'

63
.gitignore vendored
View File

@@ -1,33 +1,52 @@
# Python related
__pycache__
botpy.log
.vscode
.mypy_cache
.venv*
.idea
data_v2.db
data_v3.db
configs/session
configs/config.yaml
**/.DS_Store
temp
cmd_config.json
data
cookies.json
logs/
addons/plugins
.conda/
uv.lock
.coverage
# IDE and editors
.vscode
.idea
# Logs and temporary files
botpy.log
logs/
temp
cookies.json
# Data files
data_v2.db
data_v3.db
data
configs/session
configs/config.yaml
cmd_config.json
# Plugins and packages
addons/plugins
packages/python_interpreter/workplace
tests/astrbot_plugin_openai
chroma
# Dashboard
dashboard/node_modules/
dashboard/dist/
.DS_Store
package-lock.json
package.json
venv/*
packages/python_interpreter/workplace
.venv/*
.conda/
.idea
pytest.ini
yarn.lock
# Operating System
**/.DS_Store
.DS_Store
# AstrBot specific
.astrbot
astrbot.lock
# Other
chroma
venv/*
pytest.ini
AGENTS.md
IFLOW.md

View File

@@ -7,7 +7,19 @@ ci:
autoupdate_commit_msg: ":balloon: pre-commit autoupdate"
repos:
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.11.2
# Ruff version.
rev: v0.14.1
hooks:
- id: ruff
# Run the linter.
- id: ruff-check
types_or: [ python, pyi ]
args: [ --fix ]
# Run the formatter.
- id: ruff-format
types_or: [ python, pyi ]
- repo: https://github.com/asottile/pyupgrade
rev: v3.21.0
hooks:
- id: pyupgrade
args: [--py310-plus]

View File

@@ -4,8 +4,6 @@ WORKDIR /AstrBot
COPY . /AstrBot/
RUN apt-get update && apt-get install -y --no-install-recommends \
nodejs \
npm \
gcc \
build-essential \
python3-dev \
@@ -13,23 +11,22 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
libssl-dev \
ca-certificates \
bash \
ffmpeg \
curl \
gnupg \
git \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
RUN python -m pip install uv
RUN apt-get update && apt-get install -y curl gnupg \
&& curl -fsSL https://deb.nodesource.com/setup_lts.x | bash - \
&& apt-get install -y nodejs
RUN python -m pip install uv \
&& echo "3.11" > .python-version
RUN uv pip install -r requirements.txt --no-cache-dir --system
RUN uv pip install socksio uv pyffmpeg pilk --no-cache-dir --system
# 释出 ffmpeg
RUN python -c "from pyffmpeg import FFmpeg; ff = FFmpeg();"
# add /root/.pyffmpeg/bin/ffmpeg to PATH, inorder to use ffmpeg
RUN echo 'export PATH=$PATH:/root/.pyffmpeg/bin' >> ~/.bashrc
RUN uv pip install socksio uv pilk --no-cache-dir --system
EXPOSE 6185
EXPOSE 6186
CMD ["python", "main.py"]

View File

@@ -1,35 +0,0 @@
FROM python:3.10-slim
WORKDIR /AstrBot
COPY . /AstrBot/
RUN apt-get update && apt-get install -y --no-install-recommends \
gcc \
build-essential \
python3-dev \
libffi-dev \
libssl-dev \
curl \
unzip \
ca-certificates \
bash \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# Installation of Node.js
ENV NVM_DIR="/root/.nvm"
RUN curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.2/install.sh | bash && \
. "$NVM_DIR/nvm.sh" && \
nvm install 22 && \
nvm use 22
RUN /bin/bash -c ". \"$NVM_DIR/nvm.sh\" && node -v && npm -v"
RUN python -m pip install uv
RUN uv pip install -r requirements.txt --no-cache-dir --system
RUN uv pip install socksio uv pyffmpeg --no-cache-dir --system
EXPOSE 6185
EXPOSE 6186
CMD ["python", "main.py"]

291
README.md
View File

@@ -1,83 +1,83 @@
<p align="center">
![AstrBot-Logo-Simplified](https://github.com/user-attachments/assets/ffd99b6b-3272-4682-beaa-6fe74250f7d9)
</p>
<div align="center">
_✨ 易上手的多平台 LLM 聊天机器人及开发框架 ✨_
<br>
<div>
<a href="https://trendshift.io/repositories/12875" target="_blank"><img src="https://trendshift.io/api/badge/repositories/12875" alt="Soulter%2FAstrBot | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
<a href="https://hellogithub.com/repository/AstrBotDevs/AstrBot" target="_blank"><img src="https://api.hellogithub.com/v1/widgets/recommend.svg?rid=d127d50cd5e54c5382328acc3bb25483&claim_uid=ZO9by7qCXgSd6Lp&t=2" alt="FeaturedHelloGitHub" style="width: 250px; height: 54px;" width="250" height="54" /></a>
</div>
[![GitHub release (latest by date)](https://img.shields.io/github/v/release/Soulter/AstrBot?style=for-the-badge&color=76bad9)](https://github.com/Soulter/AstrBot/releases/latest)
<br>
<div>
<img src="https://img.shields.io/github/v/release/AstrBotDevs/AstrBot?style=for-the-badge&color=76bad9" href="https://github.com/AstrBotDevs/AstrBot/releases/latest">
<img src="https://img.shields.io/badge/python-3.10+-blue.svg?style=for-the-badge&color=76bad9" alt="python">
<a href="https://hub.docker.com/r/soulter/astrbot"><img alt="Docker pull" src="https://img.shields.io/docker/pulls/soulter/astrbot.svg?style=for-the-badge&color=76bad9"/></a>
<a href="https://qm.qq.com/cgi-bin/qm/qr?k=wtbaNx7EioxeaqS9z7RQWVXPIxg2zYr7&jump_from=webapi&authKey=vlqnv/AV2DbJEvGIcxdlNSpfxVy+8vVqijgreRdnVKOaydpc+YSw4MctmEbr0k5"><img alt="QQ_community" src="https://img.shields.io/badge/QQ群-775869627-purple?style=for-the-badge&color=76bad9"></a>
<a href="https://t.me/+hAsD2Ebl5as3NmY1"><img alt="Telegram_community" src="https://img.shields.io/badge/Telegram-AstrBot-purple?style=for-the-badge&color=76bad9"></a>
[![wakatime](https://wakatime.com/badge/user/915e5316-99c6-4563-a483-ef186cf000c9/project/018e705a-a1a7-409a-a849-3013485e6c8e.svg?style=for-the-badge&color=76bad9)](https://wakatime.com/badge/user/915e5316-99c6-4563-a483-ef186cf000c9/project/018e705a-a1a7-409a-a849-3013485e6c8e)
![Dynamic JSON Badge](https://img.shields.io/badge/dynamic/json?url=https%3A%2F%2Fapi.soulter.top%2Fastrbot%2Fstats&query=v&label=7日消息量&cacheSeconds=3600&style=for-the-badge&color=3b618e)
![Dynamic JSON Badge](https://img.shields.io/badge/dynamic/json?url=https%3A%2F%2Fapi.soulter.top%2Fastrbot%2Fplugin-num&query=%24.result&suffix=%E4%B8%AA&style=for-the-badge&label=%E6%8F%92%E4%BB%B6%E5%B8%82%E5%9C%BA&cacheSeconds=3600)
<a href="https://github.com/Soulter/AstrBot/blob/master/README_en.md">English</a>
<a href="https://github.com/Soulter/AstrBot/blob/master/README_ja.md">日本語</a>
<a href="https://astrbot.app/">查看文档</a>
<a href="https://github.com/Soulter/AstrBot/issues">问题提交</a>
<img src="https://img.shields.io/badge/dynamic/json?url=https%3A%2F%2Fapi.soulter.top%2Fastrbot%2Fplugin-num&query=%24.result&suffix=%E4%B8%AA&style=for-the-badge&label=%E6%8F%92%E4%BB%B6%E5%B8%82%E5%9C%BA&cacheSeconds=3600">
</div>
AstrBot 是一个松耦合、异步、支持多消息平台部署、具有易用的插件系统和完善的大语言模型LLM接入功能的聊天机器人及开发框架。
<br>
<a href="https://github.com/AstrBotDevs/AstrBot/blob/master/README_en.md">English</a>
<a href="https://github.com/AstrBotDevs/AstrBot/blob/master/README_ja.md">日本語</a>
<a href="https://astrbot.app/">文档</a>
<a href="https://blog.astrbot.app/">Blog</a>
<a href="https://astrbot.featurebase.app/roadmap">路线图</a>
<a href="https://github.com/AstrBotDevs/AstrBot/issues">问题提交</a>
</div>
<!-- [![codecov](https://img.shields.io/codecov/c/github/soulter/astrbot?style=for-the-badge)](https://codecov.io/gh/Soulter/AstrBot)
-->
AstrBot 是一个开源的一站式 Agent 聊天机器人平台,可无缝接入主流即时通讯软件,为个人、开发者和团队打造可靠、可扩展的对话式智能基础设施。无论是个人 AI 伙伴、智能客服、自动化助手还是企业知识库AstrBot 都能在你的即时通讯软件平台的工作流中快速构建生产可用的 AI 应用。
> [!WARNING]
>
> 请务必修改默认密码以及保证 AstrBot 版本 >= 3.5.13。
## 主要功能
## ✨ 近期更新
1. **大模型对话**。支持接入多种大模型服务。支持多模态、工具调用、MCP、原生知识库、人设等功能。
2. **多消息平台支持**。支持接入 QQ、企业微信、微信公众号、飞书、Telegram、钉钉、Discord、KOOK 等平台。支持速率限制、白名单、百度内容审核。
3. **Agent**。完善适配的 Agentic 能力。支持多轮工具调用、内置沙盒代码执行器、网页搜索等功能。
4. **插件扩展**。深度优化的插件机制,支持[开发插件](https://astrbot.app/dev/plugin.html)扩展功能,社区插件生态丰富。
5. **WebUI**。可视化配置和管理机器人,功能齐全。
<details><summary>1. AstrBot 现已自带知识库能力</summary>
## 部署方式
📚 详见[文档](https://astrbot.app/use/knowledge-base.html)
#### Docker 部署(推荐 🥳)
![image](https://github.com/user-attachments/assets/28b639b0-bb5c-4958-8e94-92ae8cfd1ab4)
</details>
2. AstrBot 现已支持接入 [MCP](https://modelcontextprotocol.io/) 服务器!
## ✨ 主要功能
> [!NOTE]
> 🪧 我们正基于前沿科研成果,设计并实现适用于角色扮演和情感陪伴的长短期记忆模型及情绪控制模型,旨在提升对话的真实性与情感表达能力。敬请期待 `v3.6.0` 版本!
1. **大语言模型对话**。支持各种大语言模型,包括 OpenAI API、Google Gemini、Llama、Deepseek、ChatGLM 等,支持接入本地部署的大模型,通过 Ollama、LLMTuner。具有多轮对话、人格情境、多模态能力支持图片理解、语音转文字Whisper
2. **多消息平台接入**。支持接入 QQOneBot、QQ 官方机器人平台、QQ 频道、微信、企业微信、微信公众号、飞书、Telegram、钉钉、Discord、KOOK、VoceChat。支持速率限制、白名单、关键词过滤、百度内容审核。
3. **Agent**。原生支持部分 Agent 能力,如代码执行器、自然语言待办、网页搜索。对接 [Dify 平台](https://dify.ai/),便捷接入 Dify 智能助手、知识库和 Dify 工作流。
4. **插件扩展**。深度优化的插件机制,支持[开发插件](https://astrbot.app/dev/plugin.html)扩展功能,极简开发。已支持安装多个插件。
5. **可视化管理面板**。支持可视化修改配置、插件管理、日志查看等功能,降低配置难度。集成 WebChat可在面板上与大模型对话。
6. **高稳定性、高模块化**。基于事件总线和流水线的架构设计,高度模块化,低耦合。
> [!TIP]
> WebUI 在线体验 Demo: [https://demo.astrbot.app/](https://demo.astrbot.app/)
>
> 用户名: `astrbot`, 密码: `astrbot`。
## ✨ 使用方式
#### Docker 部署
推荐使用 Docker / Docker Compose 方式部署 AstrBot。
请参阅官方文档 [使用 Docker 部署 AstrBot](https://astrbot.app/deploy/astrbot/docker.html#%E4%BD%BF%E7%94%A8-docker-%E9%83%A8%E7%BD%B2-astrbot) 。
#### 宝塔面板部署
AstrBot 与宝塔面板合作,已上架至宝塔面板。
请参阅官方文档 [宝塔面板部署](https://astrbot.app/deploy/astrbot/btpanel.html) 。
#### 1Panel 部署
AstrBot 已由 1Panel 官方上架至 1Panel 面板。
请参阅官方文档 [1Panel 部署](https://astrbot.app/deploy/astrbot/1panel.html) 。
#### 在 雨云 上部署
AstrBot 已由雨云官方上架至云应用平台,可一键部署。
[![Deploy on RainYun](https://rainyun-apps.cn-nb1.rains3.com/materials/deploy-on-rainyun-en.svg)](https://app.rainyun.com/apps/rca/store/5994?ref=NjU1ODg0)
#### 在 Replit 上部署
社区贡献的部署方式。
[![Run on Repl.it](https://repl.it/badge/github/AstrBotDevs/AstrBot)](https://repl.it/github/AstrBotDevs/AstrBot)
#### Windows 一键安装器部署
请参阅官方文档 [使用 Windows 一键安装器部署 AstrBot](https://astrbot.app/deploy/astrbot/windows.html) 。
#### 宝塔面板部署
请参阅官方文档 [宝塔面板部署](https://astrbot.app/deploy/astrbot/btpanel.html) 。
#### CasaOS 部署
社区贡献的部署方式。
@@ -86,9 +86,7 @@ AstrBot 是一个松耦合、异步、支持多消息平台部署、具有易用
#### 手动部署
> 推荐使用 `uv`。
首先,安装 uv
首先安装 uv
```bash
pip install uv
@@ -101,72 +99,93 @@ git clone https://github.com/AstrBotDevs/AstrBot && cd AstrBot
uv run main.py
```
或者,直接通过 uvx 安装 AstrBot
```bash
mkdir astrbot && cd astrbot
uvx astrbot init
# uvx astrbot run
```
或者请参阅官方文档 [通过源码部署 AstrBot](https://astrbot.app/deploy/astrbot/cli.html) 。
#### 在 Replit 上部署
## 🌍 社区
[![Run on Repl.it](https://repl.it/badge/github/Soulter/AstrBot)](https://repl.it/github/Soulter/AstrBot)
### QQ 群组
#### 在 雨云 上部署
- 1 群322154837
- 3 群630166526
- 5 群822130018
- 6 群753075035
- 开发者群975206796
[![Deploy on RainYun](https://rainyun-apps.cn-nb1.rains3.com/materials/deploy-on-rainyun-en.svg)](https://app.rainyun.com/apps/rca/store/5994?ref=NjU1ODg0)
### Telegram 群组
## ⚡ 消息平台支持情况
<a href="https://t.me/+hAsD2Ebl5as3NmY1"><img alt="Telegram_community" src="https://img.shields.io/badge/Telegram-AstrBot-purple?style=for-the-badge&color=76bad9"></a>
| 平台 | 支持性 |
| -------- | ------- |
| QQ(官方机器人接口) | ✔ |
| QQ(OneBot) | ✔ |
| 微信个人号 | ✔ |
| Telegram | ✔ |
| 企业微信 | ✔ |
| 微信客服 | ✔ |
| 微信公众号 | ✔ |
| 飞书 | ✔ |
| 钉钉 | ✔ |
| Slack | ✔ |
| Discord | ✔ |
| [KOOK](https://github.com/wuyan1003/astrbot_plugin_kook_adapter) | ✔ |
| [VoceChat](https://github.com/HikariFroya/astrbot_plugin_vocechat) | ✔ |
| 微信对话开放平台 | 🚧 |
| WhatsApp | 🚧 |
| 小爱音响 | 🚧 |
### Discord 群组
## ⚡ 提供商支持情况
<a href="https://discord.gg/hAVk6tgV36"><img alt="Discord_community" src="https://img.shields.io/badge/Discord-AstrBot-purple?style=for-the-badge&color=76bad9"></a>
| 名称 | 支持性 | 类型 | 备注 |
| -------- | ------- | ------- | ------- |
| OpenAI API | ✔ | 文本生成 | 也支持 DeepSeek、Gemini、Kimi、xAI 等兼容 OpenAI API 的服务 |
| Claude API | ✔ | 文本生成 | |
| Google Gemini API | ✔ | 文本生成 | |
| Dify | ✔ | LLMOps | |
| 阿里云百炼应用 | ✔ | LLMOps | |
| Ollama | ✔ | 模型加载器 | 本地部署 DeepSeek、Llama 等开源语言模型 |
| LM Studio | ✔ | 模型加载器 | 本地部署 DeepSeek、Llama 等开源语言模型 |
| LLMTuner | ✔ | 模型加载器 | 本地加载 lora 等微调模型 |
| [优云智算](https://www.compshare.cn/?ytag=GPU_YY-gh_astrbot&referral_code=FV7DcGowN4hB5UuXKgpE74) | ✔ | 模型 API 及算力服务平台 | |
| [302.AI](https://share.302.ai/rr1M3l) | ✔ | 模型 API 服务平台 | |
| 硅基流动 | ✔ | 模型 API 服务平台 | |
| PPIO 派欧云 | ✔ | 模型 API 服务平台 | |
| OneAPI | ✔ | LLM 分发系统 | |
| Whisper | ✔ | 语音转文本 | 支持 API、本地部署 |
| SenseVoice | ✔ | 语音转文本 | 本地部署 |
| OpenAI TTS API | ✔ | 文本转语音 | |
| GSVI | ✔ | 文本转语音 | GPT-Sovits-Inference |
| GPT-SoVITs | ✔ | 文本转语音 | GPT-Sovits-Inference |
| FishAudio | ✔ | 文本转语音 | GPT-Sovits 作者参与的项目 |
| Edge TTS | ✔ | 文本转语音 | Edge 浏览器的免费 TTS |
| 阿里云百炼 TTS | ✔ | 文本转语音 | |
| Azure TTS | ✔ | 文本转语音 | Microsoft Azure TTS |
## 支持的消息平台
**官方维护**
- QQ (官方平台 & OneBot)
- Telegram
- 企微应用 & 企微智能机器人
- 微信客服 & 微信公众号
- 飞书
- 钉钉
- Slack
- Discord
- Satori
- Misskey
- Whatsapp (将支持)
- LINE (将支持)
**社区维护**
- [KOOK](https://github.com/wuyan1003/astrbot_plugin_kook_adapter)
- [VoceChat](https://github.com/HikariFroya/astrbot_plugin_vocechat)
- [Bilibili 私信](https://github.com/Hina-Chat/astrbot_plugin_bilibili_adapter)
- [wxauto](https://github.com/luosheng520qaq/wxauto-repost-onebotv11)
## 支持的模型服务
**大模型服务**
- OpenAI 及兼容服务
- Anthropic
- Google Gemini
- Moonshot AI
- 智谱 AI
- DeepSeek
- Ollama (本地部署)
- LM Studio (本地部署)
- [优云智算](https://www.compshare.cn/?ytag=GPU_YY-gh_astrbot&referral_code=FV7DcGowN4hB5UuXKgpE74)
- [302.AI](https://share.302.ai/rr1M3l)
- [小马算力](https://www.tokenpony.cn/3YPyf)
- [硅基流动](https://docs.siliconflow.cn/cn/usercases/use-siliconcloud-in-astrbot)
- [PPIO 派欧云](https://ppio.com/user/register?invited_by=AIOONE)
- ModelScope
- OneAPI
**LLMOps 平台**
- Dify
- 阿里云百炼应用
- Coze
**语音转文本服务**
- OpenAI Whisper
- SenseVoice
**文本转语音服务**
- OpenAI TTS
- Gemini TTS
- GPT-Sovits-Inference
- GPT-Sovits
- FishAudio
- Edge TTS
- 阿里云百炼 TTS
- Azure TTS
- Minimax TTS
- 火山引擎 TTS
## ❤️ 贡献
@@ -181,44 +200,11 @@ uvx astrbot init
AstrBot 使用 `ruff` 进行代码格式化和检查。
```bash
git clone https://github.com/Soulter/AstrBot
git clone https://github.com/AstrBotDevs/AstrBot
pip install pre-commit
pre-commit install
```
## 🌟 支持
- Star 这个项目!
- 在[爱发电](https://afdian.com/a/soulter)支持我!
## ✨ Demo
<details><summary>👉 点击展开多张 Demo 截图 👈</summary>
<div align='center'>
<img src="https://github.com/user-attachments/assets/4ee688d9-467d-45c8-99d6-368f9a8a92d8" width="600">
_✨基于 Docker 的沙箱化代码执行器Beta 测试✨_
<img src="https://github.com/user-attachments/assets/0378f407-6079-4f64-ae4c-e97ab20611d2" height=500>
_✨ 多模态、网页搜索、长文本转图片(可配置) ✨_
<img src="https://github.com/user-attachments/assets/e137a9e1-340a-4bf2-bb2b-771132780735" height=150>
<img src="https://github.com/user-attachments/assets/480f5e82-cf6a-4955-a869-0d73137aa6e1" height=150>
_✨ 插件系统——部分插件展示 ✨_
<img src="https://github.com/user-attachments/assets/0cdbf564-2f59-4da5-b524-ce0e7ef3d978" width=600>
_✨ WebUI ✨_
</div>
</details>
## ❤️ Special Thanks
特别感谢所有 Contributors 和插件开发者对 AstrBot 的贡献 ❤️
@@ -227,30 +213,21 @@ _✨ WebUI ✨_
<img src="https://contrib.rocks/image?repo=AstrBotDevs/AstrBot" />
</a>
此外,本项目的诞生离不开以下开源项目:
此外,本项目的诞生离不开以下开源项目的帮助
- [NapNeko/NapCatQQ](https://github.com/NapNeko/NapCatQQ) - 伟大的猫猫框架
- [wechatpy/wechatpy](https://github.com/wechatpy/wechatpy)
## ⭐ Star History
> [!TIP]
> 如果本项目对您的生活 / 工作产生了帮助,或者您关注本项目的未来发展,请给项目 Star这是我维护这个开源项目的动力 <3
> 如果本项目对您的生活 / 工作产生了帮助,或者您关注本项目的未来发展,请给项目 Star这是我维护这个开源项目的动力 <3
<div align="center">
[![Star History Chart](https://api.star-history.com/svg?repos=soulter/astrbot&type=Date)](https://star-history.com/#soulter/astrbot&Date)
[![Star History Chart](https://api.star-history.com/svg?repos=astrbotdevs/astrbot&type=Date)](https://star-history.com/#astrbotdevs/astrbot&Date)
</div>
![10k-star-banner-credit-by-kevin](https://github.com/user-attachments/assets/c97fc5fb-20b9-4bc8-9998-c20b930ab097)
## Disclaimer
1. The project is protected under the `AGPL-v3` opensource license.
2. The deployment of WeChat (personal account) utilizes [Gewechat](https://github.com/Devo919/Gewechat) service. AstrBot only guarantees connectivity with Gewechat and recommends using a WeChat account that is not frequently used. In the event of account risk control, the author of this project shall not bear any responsibility.
3. Please ensure compliance with local laws and regulations when using this project.
</details>
_私は、高性能ですから!_

View File

@@ -1,182 +1,233 @@
<p align="center">
![6e1279651f16d7fdf4727558b72bbaf1](https://github.com/user-attachments/assets/ead4c551-fc3c-48f7-a6f7-afbfdb820512)
![AstrBot-Logo-Simplified](https://github.com/user-attachments/assets/ffd99b6b-3272-4682-beaa-6fe74250f7d9)
</p>
<div align="center">
_✨ Easy-to-use Multi-platform LLM Chatbot & Development Framework ✨_
<br>
<div>
<a href="https://trendshift.io/repositories/12875" target="_blank"><img src="https://trendshift.io/api/badge/repositories/12875" alt="Soulter%2FAstrBot | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
[![GitHub release (latest by date)](https://img.shields.io/github/v/release/Soulter/AstrBot)](https://github.com/Soulter/AstrBot/releases/latest)
<img src="https://img.shields.io/badge/python-3.10+-blue.svg" alt="python">
<a href="https://hub.docker.com/r/soulter/astrbot"><img alt="Docker pull" src="https://img.shields.io/docker/pulls/soulter/astrbot.svg"/></a>
<a href="https://qm.qq.com/cgi-bin/qm/qr?k=wtbaNx7EioxeaqS9z7RQWVXPIxg2zYr7&jump_from=webapi&authKey=vlqnv/AV2DbJEvGIcxdlNSpfxVy+8vVqijgreRdnVKOaydpc+YSw4MctmEbr0k5"><img alt="Static Badge" src="https://img.shields.io/badge/QQ群-630166526-purple"></a>
[![wakatime](https://wakatime.com/badge/user/915e5316-99c6-4563-a483-ef186cf000c9/project/018e705a-a1a7-409a-a849-3013485e6c8e.svg)](https://wakatime.com/badge/user/915e5316-99c6-4563-a483-ef186cf000c9/project/018e705a-a1a7-409a-a849-3013485e6c8e)
![Dynamic JSON Badge](https://img.shields.io/badge/dynamic/json?url=https%3A%2F%2Fapi.soulter.top%2Fastrbot%2Fstats&query=v&label=7%E6%97%A5%E6%B6%88%E6%81%AF%E4%B8%8A%E8%A1%8C%E9%87%8F&cacheSeconds=3600)
[![codecov](https://codecov.io/gh/Soulter/AstrBot/graph/badge.svg?token=FF3P5967B8)](https://codecov.io/gh/Soulter/AstrBot)
<a href="https://astrbot.app/">Documentation</a>
<a href="https://github.com/Soulter/AstrBot/issues">Issue Tracking</a>
<a href="https://hellogithub.com/repository/AstrBotDevs/AstrBot" target="_blank"><img src="https://api.hellogithub.com/v1/widgets/recommend.svg?rid=d127d50cd5e54c5382328acc3bb25483&claim_uid=ZO9by7qCXgSd6Lp&t=2" alt="FeaturedHelloGitHub" style="width: 250px; height: 54px;" width="250" height="54" /></a>
</div>
AstrBot is a loosely coupled, asynchronous chatbot and development framework that supports multi-platform deployment, featuring an easy-to-use plugin system and comprehensive Large Language Model (LLM) integration capabilities.
<br>
## ✨ Key Features
<div>
<img src="https://img.shields.io/github/v/release/AstrBotDevs/AstrBot?style=for-the-badge&color=76bad9" href="https://github.com/AstrBotDevs/AstrBot/releases/latest">
<img src="https://img.shields.io/badge/python-3.10+-blue.svg?style=for-the-badge&color=76bad9" alt="python">
<a href="https://hub.docker.com/r/soulter/astrbot"><img alt="Docker pull" src="https://img.shields.io/docker/pulls/soulter/astrbot.svg?style=for-the-badge&color=76bad9"/></a>
<a href="https://qm.qq.com/cgi-bin/qm/qr?k=wtbaNx7EioxeaqS9z7RQWVXPIxg2zYr7&jump_from=webapi&authKey=vlqnv/AV2DbJEvGIcxdlNSpfxVy+8vVqijgreRdnVKOaydpc+YSw4MctmEbr0k5"><img alt="QQ_community" src="https://img.shields.io/badge/QQ群-775869627-purple?style=for-the-badge&color=76bad9"></a>
<a href="https://t.me/+hAsD2Ebl5as3NmY1"><img alt="Telegram_community" src="https://img.shields.io/badge/Telegram-AstrBot-purple?style=for-the-badge&color=76bad9"></a>
<img src="https://img.shields.io/badge/dynamic/json?url=https%3A%2F%2Fapi.soulter.top%2Fastrbot%2Fplugin-num&query=%24.result&suffix=%E4%B8%AA&style=for-the-badge&label=%E6%8F%92%E4%BB%B6%E5%B8%82%E5%9C%BA&cacheSeconds=3600">
</div>
1. **LLM Conversations** - Supports various LLMs including OpenAI API, Google Gemini, Llama, Deepseek, ChatGLM, etc. Enables local model deployment via Ollama/LLMTuner. Features multi-turn dialogues, personality contexts, multimodal capabilities (image understanding), and speech-to-text (Whisper).
2. **Multi-platform Integration** - Supports QQ (OneBot), QQ Channels, WeChat (Gewechat), Feishu, and Telegram. Planned support for DingTalk, Discord, WhatsApp, and Xiaomi Smart Speakers. Includes rate limiting, whitelisting, keyword filtering, and Baidu content moderation.
3. **Agent Capabilities** - Native support for code execution, natural language TODO lists, web search. Integrates with [Dify Platform](https://dify.ai/) for easy access to Dify assistants/knowledge bases/workflows.
4. **Plugin System** - Optimized plugin mechanism with minimal development effort. Supports multiple installed plugins.
5. **Web Dashboard** - Visual configuration management, plugin controls, logging, and WebChat interface for direct LLM interaction.
6. **High Stability & Modularity** - Event bus and pipeline architecture ensures high modularization and loose coupling.
<br>
> [!TIP]
> Dashboard Demo: [https://demo.astrbot.app/](https://demo.astrbot.app/)
> Username: `astrbot`, Password: `astrbot` (LLM not configured for chat page)
<a href="https://github.com/AstrBotDevs/AstrBot/blob/master/README.md">中文</a>
<a href="https://github.com/AstrBotDevs/AstrBot/blob/master/README_ja.md">日本語</a>
<a href="https://astrbot.app/">Documentation</a>
<a href="https://blog.astrbot.app/">Blog</a>
<a href="https://astrbot.featurebase.app/roadmap">Roadmap</a>
<a href="https://github.com/AstrBotDevs/AstrBot/issues">Issue Tracker</a>
</div>
## ✨ Deployment
AstrBot is an open-source all-in-one Agent chatbot platform and development framework.
#### Docker Deployment
## Key Features
See docs: [Deploy with Docker](https://astrbot.app/deploy/astrbot/docker.html#docker-deployment)
1. **LLM Conversations**. Supports integration with various large language model services. Features include multimodal capabilities, tool calling, MCP, native knowledge base, character personas, and more.
2. **Multi-Platform Support**. Integrates with QQ, WeChat Work, WeChat Official Accounts, Feishu, Telegram, DingTalk, Discord, KOOK, and other platforms. Supports rate limiting, whitelisting, and Baidu content moderation.
3. **Agent Capabilities**. Fully optimized agentic features including multi-turn tool calling, built-in sandboxed code executor, web search, and more.
4. **Plugin Extensions**. Deeply optimized plugin mechanism supporting [plugin development](https://astrbot.app/dev/plugin.html) to extend functionality, with a rich community plugin ecosystem.
5. **Web UI**. Visual configuration and management of your bot with comprehensive features.
#### Windows Installer
## Deployment Methods
Requires Python (>3.10). See docs: [Windows Installer Guide](https://astrbot.app/deploy/astrbot/windows.html)
#### Docker Deployment (Recommended 🥳)
#### Replit Deployment
We recommend deploying AstrBot using Docker or Docker Compose.
[![Run on Repl.it](https://repl.it/badge/github/Soulter/AstrBot)](https://repl.it/github/Soulter/AstrBot)
Please refer to the official documentation: [Deploy AstrBot with Docker](https://astrbot.app/deploy/astrbot/docker.html#%E4%BD%BF%E7%94%A8-docker-%E9%83%A8%E7%BD%B2-astrbot).
#### BT-Panel Deployment
AstrBot has partnered with BT-Panel and is now available in their marketplace.
Please refer to the official documentation: [BT-Panel Deployment](https://astrbot.app/deploy/astrbot/btpanel.html).
#### 1Panel Deployment
AstrBot has been officially listed on the 1Panel marketplace.
Please refer to the official documentation: [1Panel Deployment](https://astrbot.app/deploy/astrbot/1panel.html).
#### Deploy on RainYun
AstrBot has been officially listed on RainYun's cloud application platform with one-click deployment.
[![Deploy on RainYun](https://rainyun-apps.cn-nb1.rains3.com/materials/deploy-on-rainyun-en.svg)](https://app.rainyun.com/apps/rca/store/5994?ref=NjU1ODg0)
#### Deploy on Replit
Community-contributed deployment method.
[![Run on Repl.it](https://repl.it/badge/github/AstrBotDevs/AstrBot)](https://repl.it/github/AstrBotDevs/AstrBot)
#### Windows One-Click Installer
Please refer to the official documentation: [Deploy AstrBot with Windows One-Click Installer](https://astrbot.app/deploy/astrbot/windows.html).
#### CasaOS Deployment
Community-contributed method.
See docs: [CasaOS Deployment](https://astrbot.app/deploy/astrbot/casaos.html)
Community-contributed deployment method.
Please refer to the official documentation: [CasaOS Deployment](https://astrbot.app/deploy/astrbot/casaos.html).
#### Manual Deployment
See docs: [Source Code Deployment](https://astrbot.app/deploy/astrbot/cli.html)
First, install uv:
## ⚡ Platform Support
```bash
pip install uv
```
| Platform | Status | Details | Message Types |
| -------------------------------------------------------------- | ------ | ------------------- | ------------------- |
| QQ (Official Bot) | ✔ | Private/Group chats | Text, Images |
| QQ (OneBot) | ✔ | Private/Group chats | Text, Images, Voice |
| WeChat (Personal) | ✔ | Private/Group chats | Text, Images, Voice |
| [Telegram](https://github.com/Soulter/astrbot_plugin_telegram) | ✔ | Private/Group chats | Text, Images |
| [WeChat Work](https://github.com/Soulter/astrbot_plugin_wecom) | ✔ | Private chats | Text, Images, Voice |
| Feishu | ✔ | Group chats | Text, Images |
| WeChat Open Platform | 🚧 | Planned | - |
| Discord | 🚧 | Planned | - |
| WhatsApp | 🚧 | Planned | - |
| Xiaomi Speakers | 🚧 | Planned | - |
Install AstrBot via Git Clone:
## Provider Support Status
```bash
git clone https://github.com/AstrBotDevs/AstrBot && cd AstrBot
uv run main.py
```
| Name | Support | Type | Notes |
|---------------------------|---------|------------------------|-----------------------------------------------------------------------|
| OpenAI API | ✔ | Text Generation | Supports all OpenAI API-compatible services including DeepSeek, Google Gemini, GLM, Moonshot, Alibaba Cloud Bailian, Silicon Flow, xAI, etc. |
| Claude API | ✔ | Text Generation | |
| Google Gemini API | ✔ | Text Generation | |
| Dify | ✔ | LLMOps | |
| DashScope (Alibaba Cloud) | ✔ | LLMOps | |
| Ollama | ✔ | Model Loader | Local deployment for open-source LLMs (DeepSeek, Llama, etc.) |
| LM Studio | ✔ | Model Loader | Local deployment for open-source LLMs (DeepSeek, Llama, etc.) |
| LLMTuner | ✔ | Model Loader | Local loading of fine-tuned models (e.g. LoRA) |
| OneAPI | ✔ | LLM Distribution | |
| Whisper | ✔ | Speech-to-Text | Supports API and local deployment |
| SenseVoice | ✔ | Speech-to-Text | Local deployment |
| OpenAI TTS API | ✔ | Text-to-Speech | |
| Fishaudio | ✔ | Text-to-Speech | Project involving GPT-Sovits author |
Or refer to the official documentation: [Deploy AstrBot from Source](https://astrbot.app/deploy/astrbot/cli.html).
# 🦌 Roadmap
## 🌍 Community
> [!TIP]
> Suggestions welcome via Issues <3
### QQ Groups
- [ ] Ensure feature parity across all platform adapters
- [ ] Optimize plugin APIs
- [ ] Add default TTS services (e.g., GPT-Sovits)
- [ ] Enhance chat features with persistent memory
- [ ] i18n Planning
- Group 1: 322154837
- Group 3: 630166526
- Group 5: 822130018
- Group 6: 753075035
- Developer Group: 975206796
## ❤️ Contributions
### Telegram Group
All Issues/PRs welcome! Simply submit your changes to this project :)
<a href="https://t.me/+hAsD2Ebl5as3NmY1"><img alt="Telegram_community" src="https://img.shields.io/badge/Telegram-AstrBot-purple?style=for-the-badge&color=76bad9"></a>
For major features, please discuss via Issues first.
### Discord Server
## 🌟 Support
<a href="https://discord.gg/hAVk6tgV36"><img alt="Discord_community" src="https://img.shields.io/badge/Discord-AstrBot-purple?style=for-the-badge&color=76bad9"></a>
- Star this project!
- Support via [Afdian](https://afdian.com/a/soulter)
- WeChat support: [QR Code](https://drive.soulter.top/f/pYfA/d903f4fa49a496fda3f16d2be9e023b5.png)
## Supported Messaging Platforms
## ✨ Demos
**Officially Maintained**
> [!NOTE]
> Code executor file I/O currently tested with Napcat(QQ)/Lagrange(QQ)
- QQ (Official Platform & OneBot)
- Telegram
- WeChat Work Application & WeChat Work Intelligent Bot
- WeChat Customer Service & WeChat Official Accounts
- Feishu (Lark)
- DingTalk
- Slack
- Discord
- Satori
- Misskey
- WhatsApp (Coming Soon)
- LINE (Coming Soon)
<div align='center'>
**Community Maintained**
<img src="https://github.com/user-attachments/assets/4ee688d9-467d-45c8-99d6-368f9a8a92d8" width="600">
- [KOOK](https://github.com/wuyan1003/astrbot_plugin_kook_adapter)
- [VoceChat](https://github.com/HikariFroya/astrbot_plugin_vocechat)
- [Bilibili Direct Messages](https://github.com/Hina-Chat/astrbot_plugin_bilibili_adapter)
- [wxauto](https://github.com/luosheng520qaq/wxauto-repost-onebotv11)
_✨ Docker-based Sandboxed Code Executor (Beta) ✨_
## Supported Model Services
<img src="https://github.com/user-attachments/assets/0378f407-6079-4f64-ae4c-e97ab20611d2" height=500>
**LLM Services**
_✨ Multimodal Input, Web Search, Text-to-Image ✨_
- OpenAI and Compatible Services
- Anthropic
- Google Gemini
- Moonshot AI
- Zhipu AI
- DeepSeek
- Ollama (Self-hosted)
- LM Studio (Self-hosted)
- [CompShare](https://www.compshare.cn/?ytag=GPU_YY-gh_astrbot&referral_code=FV7DcGowN4hB5UuXKgpE74)
- [302.AI](https://share.302.ai/rr1M3l)
- [TokenPony](https://www.tokenpony.cn/3YPyf)
- [SiliconFlow](https://docs.siliconflow.cn/cn/usecases/use-siliconcloud-in-astrbot)
- [PPIO Cloud](https://ppio.com/user/register?invited_by=AIOONE)
- ModelScope
- OneAPI
<img src="https://github.com/user-attachments/assets/8ec12797-e70f-460a-959e-48eca39ca2bb" height=100>
**LLMOps Platforms**
_✨ Natural Language TODO Lists ✨_
- Dify
- Alibaba Cloud Bailian Applications
- Coze
<img src="https://github.com/user-attachments/assets/e137a9e1-340a-4bf2-bb2b-771132780735" height=150>
<img src="https://github.com/user-attachments/assets/480f5e82-cf6a-4955-a869-0d73137aa6e1" height=150>
**Speech-to-Text Services**
_✨ Plugin System Showcase ✨_
- OpenAI Whisper
- SenseVoice
<img src="https://github.com/user-attachments/assets/592a8630-14c7-4e06-b496-9c0386e4f36c" width=600>
**Text-to-Speech Services**
_✨ Web Dashboard ✨_
- OpenAI TTS
- Gemini TTS
- GPT-Sovits-Inference
- GPT-Sovits
- FishAudio
- Edge TTS
- Alibaba Cloud Bailian TTS
- Azure TTS
- Minimax TTS
- Volcano Engine TTS
![webchat](https://drive.soulter.top/f/vlsA/ezgif-5-fb044b2542.gif)
## ❤️ Contributing
_✨ Built-in Web Chat Interface ✨_
Issues and Pull Requests are always welcome! Feel free to submit your changes to this project :)
</div>
### How to Contribute
You can contribute by reviewing issues or helping with pull request reviews. Any issues or PRs are welcome to encourage community participation. Of course, these are just suggestions—you can contribute in any way you like. For adding new features, please discuss through an Issue first.
### Development Environment
AstrBot uses `ruff` for code formatting and linting.
```bash
git clone https://github.com/AstrBotDevs/AstrBot
pip install pre-commit
pre-commit install
```
## ❤️ Special Thanks
Special thanks to all Contributors and plugin developers for their contributions to AstrBot ❤️
<a href="https://github.com/AstrBotDevs/AstrBot/graphs/contributors">
<img src="https://contrib.rocks/image?repo=AstrBotDevs/AstrBot" />
</a>
Additionally, the birth of this project would not have been possible without the help of the following open-source projects:
- [NapNeko/NapCatQQ](https://github.com/NapNeko/NapCatQQ) - The amazing cat framework
## ⭐ Star History
> [!TIP]
> If this project helps you, please give it a star <3
> If this project has helped you in your life or work, or if you're interested in its future development, please give the project a Star. It's the driving force behind maintaining this open-source project <3
<div align="center">
[![Star History Chart](https://api.star-history.com/svg?repos=soulter/astrbot&type=Date)](https://star-history.com/#soulter/astrbot&Date)
[![Star History Chart](https://api.star-history.com/svg?repos=astrbotdevs/astrbot&type=Date)](https://star-history.com/#astrbotdevs/astrbot&Date)
</div>
## Disclaimer
1. Licensed under `AGPL-v3`.
2. WeChat integration uses [Gewechat](https://github.com/Devo919/Gewechat). Use at your own risk with non-critical accounts.
3. Users must comply with local laws and regulations.
<!-- ## ✨ ATRI [Beta]
Available as plugin: [astrbot_plugin_atri](https://github.com/Soulter/astrbot_plugin_atri)
1. Qwen1.5-7B-Chat Lora model fine-tuned with ATRI character data
2. Long-term memory
3. Meme understanding & responses
4. TTS integration
-->
</details>
_私は、高性能ですから!_


@@ -1,170 +1,233 @@
<p align="center">
![6e1279651f16d7fdf4727558b72bbaf1](https://github.com/user-attachments/assets/ead4c551-fc3c-48f7-a6f7-afbfdb820512)
![AstrBot-Logo-Simplified](https://github.com/user-attachments/assets/ffd99b6b-3272-4682-beaa-6fe74250f7d9)
</p>
<div align="center">
_✨ 簡単に使えるマルチプラットフォーム LLM チャットボットおよび開発フレームワーク ✨_
<br>
<div>
<a href="https://trendshift.io/repositories/12875" target="_blank"><img src="https://trendshift.io/api/badge/repositories/12875" alt="Soulter%2FAstrBot | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
[![GitHub release (latest by date)](https://img.shields.io/github/v/release/Soulter/AstrBot)](https://github.com/Soulter/AstrBot/releases/latest)
<img src="https://img.shields.io/badge/python-3.10+-blue.svg" alt="python">
<a href="https://hub.docker.com/r/soulter/astrbot"><img alt="Docker pull" src="https://img.shields.io/docker/pulls/soulter/astrbot.svg"/></a>
<img alt="Static Badge" src="https://img.shields.io/badge/QQ群-630166526-purple">
[![wakatime](https://wakatime.com/badge/user/915e5316-99c6-4563-a483-ef186cf000c9/project/018e705a-a1a7-409a-a849-3013485e6c8e.svg)](https://wakatime.com/badge/user/915e5316-99c6-4563-a483-ef186cf000c9/project/018e705a-a1a7-409a-a849-3013485e6c8e)
![Dynamic JSON Badge](https://img.shields.io/badge/dynamic/json?url=https%3A%2F%2Fapi.soulter.top%2Fastrbot%2Fstats&query=v&label=7%E6%97%A5%E6%B6%88%E6%81%AF%E4%B8%8A%E8%A1%8C%E9%87%8F&cacheSeconds=3600)
[![codecov](https://codecov.io/gh/Soulter/AstrBot/graph/badge.svg?token=FF3P5967B8)](https://codecov.io/gh/Soulter/AstrBot)
<a href="https://astrbot.app/">ドキュメントを見る</a>
<a href="https://github.com/Soulter/AstrBot/issues">問題を報告する</a>
<a href="https://hellogithub.com/repository/AstrBotDevs/AstrBot" target="_blank"><img src="https://api.hellogithub.com/v1/widgets/recommend.svg?rid=d127d50cd5e54c5382328acc3bb25483&claim_uid=ZO9by7qCXgSd6Lp&t=2" alt="FeaturedHelloGitHub" style="width: 250px; height: 54px;" width="250" height="54" /></a>
</div>
AstrBot は、疎結合、非同期、複数のメッセージプラットフォームに対応したデプロイ、使いやすいプラグインシステム、および包括的な大規模言語モデルLLM接続機能を備えたチャットボットおよび開発フレームワークです。
<br>
## ✨ 主な機能
<div>
<img src="https://img.shields.io/github/v/release/AstrBotDevs/AstrBot?style=for-the-badge&color=76bad9" href="https://github.com/AstrBotDevs/AstrBot/releases/latest">
<img src="https://img.shields.io/badge/python-3.10+-blue.svg?style=for-the-badge&color=76bad9" alt="python">
<a href="https://hub.docker.com/r/soulter/astrbot"><img alt="Docker pull" src="https://img.shields.io/docker/pulls/soulter/astrbot.svg?style=for-the-badge&color=76bad9"/></a>
<a href="https://qm.qq.com/cgi-bin/qm/qr?k=wtbaNx7EioxeaqS9z7RQWVXPIxg2zYr7&jump_from=webapi&authKey=vlqnv/AV2DbJEvGIcxdlNSpfxVy+8vVqijgreRdnVKOaydpc+YSw4MctmEbr0k5"><img alt="QQ_community" src="https://img.shields.io/badge/QQ群-775869627-purple?style=for-the-badge&color=76bad9"></a>
<a href="https://t.me/+hAsD2Ebl5as3NmY1"><img alt="Telegram_community" src="https://img.shields.io/badge/Telegram-AstrBot-purple?style=for-the-badge&color=76bad9"></a>
<img src="https://img.shields.io/badge/dynamic/json?url=https%3A%2F%2Fapi.soulter.top%2Fastrbot%2Fplugin-num&query=%24.result&suffix=%E4%B8%AA&style=for-the-badge&label=%E6%8F%92%E4%BB%B6%E5%B8%82%E5%9C%BA&cacheSeconds=3600">
</div>
1. **大規模言語モデルの対話**。OpenAI API、Google Gemini、Llama、Deepseek、ChatGLM など、さまざまな大規模言語モデルをサポートし、Ollama、LLMTuner を介してローカルにデプロイされた大規模モデルをサポートします。多輪対話、人格シナリオ、多モーダル機能を備え、画像理解、音声からテキストへの変換Whisperをサポートします。
2. **複数のメッセージプラットフォームの接続**。QQOneBot、QQ チャンネル、WeChatGewechat、Feishu、Telegram への接続をサポートします。今後、DingTalk、Discord、WhatsApp、Xiaoai 音響をサポートする予定です。レート制限、ホワイトリスト、キーワードフィルタリング、Baidu コンテンツ監査をサポートします。
3. **エージェント**。一部のエージェント機能をネイティブにサポートし、コードエグゼキューター、自然言語タスク、ウェブ検索などを提供します。[Dify プラットフォーム](https://dify.ai/)と連携し、Dify スマートアシスタント、ナレッジベース、Dify ワークフローを簡単に接続できます。
4. **プラグインの拡張**。深く最適化されたプラグインメカニズムを備え、[プラグインの開発](https://astrbot.app/dev/plugin.html)をサポートし、機能を拡張できます。複数のプラグインのインストールをサポートします。
5. **ビジュアル管理パネル**。設定の視覚的な変更、プラグイン管理、ログの表示などをサポートし、設定の難易度を低減します。WebChat を統合し、パネル上で大規模モデルと対話できます。
6. **高い安定性と高いモジュール性**。イベントバスとパイプラインに基づくアーキテクチャ設計により、高度にモジュール化され、低結合です。
<br>
> [!TIP]
> 管理パネルのオンラインデモを体験する: [https://demo.astrbot.app/](https://demo.astrbot.app/)
>
> ユーザー名: `astrbot`, パスワード: `astrbot`。LLM が設定されていないため、チャットページで大規模モデルを使用することはできません。(デモのログインパスワードを変更しないでください 😭)
<a href="https://github.com/AstrBotDevs/AstrBot/blob/master/README.md">中文</a>
<a href="https://github.com/AstrBotDevs/AstrBot/blob/master/README_en.md">English</a>
<a href="https://astrbot.app/">ドキュメント</a>
<a href="https://blog.astrbot.app/">Blog</a>
<a href="https://astrbot.featurebase.app/roadmap">ロードマップ</a>
<a href="https://github.com/AstrBotDevs/AstrBot/issues">Issue</a>
</div>
## ✨ 使用方法
AstrBot は、オープンソースのオールインワン Agent チャットボットプラットフォーム及び開発フレームワークです。
#### Docker デプロイ
## 主な機能
公式ドキュメント [Docker を使用して AstrBot をデプロイする](https://astrbot.app/deploy/astrbot/docker.html#%E4%BD%BF%E7%94%A8-docker-%E9%83%A8%E7%BD%B2-astrbot) を参照してください
1. **大規模言語モデル対話**。多様な大規模言語モデルサービスとの統合をサポート。マルチモーダル、ツール呼び出し、MCP、ネイティブナレッジベース、キャラクター設定などの機能を搭載
2. **マルチメッセージプラットフォームサポート**。QQ、WeChat Work、WeChat公式アカウント、Feishu、Telegram、DingTalk、Discord、KOOK などのプラットフォームと統合可能。レート制限、ホワイトリスト、Baidu コンテンツ審査をサポート。
3. **Agent**。完全に最適化された Agentic 機能。マルチターンツール呼び出し、内蔵サンドボックスコード実行環境、Web 検索などの機能をサポート。
4. **プラグイン拡張**。深く最適化されたプラグインメカニズムで、[プラグイン開発](https://astrbot.app/dev/plugin.html)による機能拡張をサポート。豊富なコミュニティプラグインエコシステム。
5. **WebUI**。ビジュアル設定とボット管理、充実した機能。
#### Windows ワンクリックインストーラーのデプロイ
## デプロイ方法
コンピュータに Python>3.10)がインストールされている必要があります。公式ドキュメント [Windows ワンクリックインストーラーを使用して AstrBot をデプロイする](https://astrbot.app/deploy/astrbot/windows.html) を参照してください。
#### Docker デプロイ(推奨 🥳)
#### Replit デプロイ
Docker / Docker Compose を使用した AstrBot デプロイを推奨します。
[![Run on Repl.it](https://repl.it/badge/github/Soulter/AstrBot)](https://repl.it/github/Soulter/AstrBot)
公式ドキュメント [Docker を使用した AstrBot のデプロイ](https://astrbot.app/deploy/astrbot/docker.html#%E4%BD%BF%E7%94%A8-docker-%E9%83%A8%E7%BD%B2-astrbot) をご参照ください。
#### 宝塔パネルデプロイ
AstrBot は宝塔パネルと提携し、宝塔パネルに公開されています。
公式ドキュメント [宝塔パネルデプロイ](https://astrbot.app/deploy/astrbot/btpanel.html) をご参照ください。
#### 1Panel デプロイ
AstrBot は 1Panel 公式により 1Panel パネルに公開されています。
公式ドキュメント [1Panel デプロイ](https://astrbot.app/deploy/astrbot/1panel.html) をご参照ください。
#### 雨云でのデプロイ
AstrBot は雨云公式によりクラウドアプリケーションプラットフォームに公開され、ワンクリックでデプロイ可能です。
[![Deploy on RainYun](https://rainyun-apps.cn-nb1.rains3.com/materials/deploy-on-rainyun-en.svg)](https://app.rainyun.com/apps/rca/store/5994?ref=NjU1ODg0)
#### Replit でのデプロイ
コミュニティ貢献によるデプロイ方法。
[![Run on Repl.it](https://repl.it/badge/github/AstrBotDevs/AstrBot)](https://repl.it/github/AstrBotDevs/AstrBot)
#### Windows ワンクリックインストーラーデプロイ
公式ドキュメント [Windows ワンクリックインストーラーを使用した AstrBot のデプロイ](https://astrbot.app/deploy/astrbot/windows.html) をご参照ください。
#### CasaOS デプロイ
コミュニティが提供するデプロイ方法です
コミュニティ貢献によるデプロイ方法。
公式ドキュメント [ソースコードを使用して AstrBot をデプロイする](https://astrbot.app/deploy/astrbot/casaos.html) を参照してください。
公式ドキュメント [CasaOS デプロイ](https://astrbot.app/deploy/astrbot/casaos.html) を参照ください。
#### 手動デプロイ
公式ドキュメント [ソースコードを使用して AstrBot をデプロイする](https://astrbot.app/deploy/astrbot/cli.html) を参照してください。
まず uv をインストールします:
## ⚡ メッセージプラットフォームのサポート状況
```bash
pip install uv
```
| プラットフォーム | サポート状況 | 詳細 | メッセージタイプ |
| -------- | ------- | ------- | ------ |
| QQ(公式ロボットインターフェース) | ✔ | プライベートチャット、グループチャット、QQ チャンネルプライベートチャット、グループチャット | テキスト、画像 |
| QQ(OneBot) | ✔ | プライベートチャット、グループチャット | テキスト、画像、音声 |
| WeChat(個人アカウント) | ✔ | WeChat 個人アカウントのプライベートチャット、グループチャット | テキスト、画像、音声 |
| [Telegram](https://github.com/Soulter/astrbot_plugin_telegram) | ✔ | プライベートチャット、グループチャット | テキスト、画像 |
| [WeChat(企業 WeChat)](https://github.com/Soulter/astrbot_plugin_wecom) | ✔ | プライベートチャット | テキスト、画像、音声 |
| Feishu | ✔ | グループチャット | テキスト、画像 |
| WeChat 対話オープンプラットフォーム | 🚧 | 計画中 | - |
| Discord | 🚧 | 計画中 | - |
| WhatsApp | 🚧 | 計画中 | - |
| Xiaoai 音響 | 🚧 | 計画中 | - |
Git Clone で AstrBot をインストール:
# 🦌 今後のロードマップ
```bash
git clone https://github.com/AstrBotDevs/AstrBot && cd AstrBot
uv run main.py
```
> [!TIP]
> Issue でさらに多くの提案を歓迎します <3
または、公式ドキュメント [ソースコードから AstrBot をデプロイ](https://astrbot.app/deploy/astrbot/cli.html) をご参照ください。
- [ ] 現在のすべてのプラットフォームアダプターの機能の一貫性を確保し、改善する
- [ ] プラグインインターフェースの最適化
- [ ] GPT-Sovits などの TTS サービスをデフォルトでサポート
- [ ] "チャット強化" 部分を完成させ、永続的な記憶をサポート
- [ ] i18n の計画
## 🌍 コミュニティ
## ❤️ 貢献
### QQ グループ
Issue や Pull Request を歓迎します!このプロジェクトに変更を加えるだけです :)
- 1群:322154837
- 3群:630166526
- 5群:822130018
- 6群:753075035
- 開発者群:975206796
新機能の追加については、まず Issue で議論してください。
### Telegram グループ
## 🌟 サポート
<a href="https://t.me/+hAsD2Ebl5as3NmY1"><img alt="Telegram_community" src="https://img.shields.io/badge/Telegram-AstrBot-purple?style=for-the-badge&color=76bad9"></a>
- このプロジェクトに Star を付けてください!
- [愛発電](https://afdian.com/a/soulter)で私をサポートしてください!
- [WeChat](https://drive.soulter.top/f/pYfA/d903f4fa49a496fda3f16d2be9e023b5.png)で私をサポートしてください~
### Discord サーバー
## ✨ デモ
<a href="https://discord.gg/hAVk6tgV36"><img alt="Discord_community" src="https://img.shields.io/badge/Discord-AstrBot-purple?style=for-the-badge&color=76bad9"></a>
> [!NOTE]
> コードエグゼキューターのファイル入力/出力は現在 Napcat(QQ)、Lagrange(QQ) でのみテストされています
## サポートされているメッセージプラットフォーム
<div align='center'>
**公式メンテナンス**
<img src="https://github.com/user-attachments/assets/4ee688d9-467d-45c8-99d6-368f9a8a92d8" width="600">
- QQ (公式プラットフォーム & OneBot)
- Telegram
- WeChat Work アプリケーション & WeChat Work インテリジェントボット
- WeChat カスタマーサービス & WeChat 公式アカウント
- Feishu (Lark)
- DingTalk
- Slack
- Discord
- Satori
- Misskey
- WhatsApp (近日対応予定)
- LINE (近日対応予定)
_✨ Docker ベースのサンドボックス化されたコードエグゼキューターベータテスト中✨_
**コミュニティメンテナンス**
<img src="https://github.com/user-attachments/assets/0378f407-6079-4f64-ae4c-e97ab20611d2" height=500>
- [KOOK](https://github.com/wuyan1003/astrbot_plugin_kook_adapter)
- [VoceChat](https://github.com/HikariFroya/astrbot_plugin_vocechat)
- [Bilibili ダイレクトメッセージ](https://github.com/Hina-Chat/astrbot_plugin_bilibili_adapter)
- [wxauto](https://github.com/luosheng520qaq/wxauto-repost-onebotv11)
_✨ 多モーダル、ウェブ検索、長文の画像変換設定可能✨_
## サポートされているモデルサービス
<img src="https://github.com/user-attachments/assets/8ec12797-e70f-460a-959e-48eca39ca2bb" height=100>
**大規模言語モデルサービス**
_✨ 自然言語タスク ✨_
- OpenAI および互換サービス
- Anthropic
- Google Gemini
- Moonshot AI
- 智谱 AI
- DeepSeek
- Ollama (セルフホスト)
- LM Studio (セルフホスト)
- [優云智算](https://www.compshare.cn/?ytag=GPU_YY-gh_astrbot&referral_code=FV7DcGowN4hB5UuXKgpE74)
- [302.AI](https://share.302.ai/rr1M3l)
- [小馬算力](https://www.tokenpony.cn/3YPyf)
- [硅基流動](https://docs.siliconflow.cn/cn/usercases/use-siliconcloud-in-astrbot)
- [PPIO 派欧云](https://ppio.com/user/register?invited_by=AIOONE)
- ModelScope
- OneAPI
<img src="https://github.com/user-attachments/assets/e137a9e1-340a-4bf2-bb2b-771132780735" height=150>
<img src="https://github.com/user-attachments/assets/480f5e82-cf6a-4955-a869-0d73137aa6e1" height=150>
**LLMOps プラットフォーム**
_✨ プラグインシステム - 一部のプラグインの展示 ✨_
- Dify
- Alibaba Cloud 百炼アプリケーション
- Coze
<img src="https://github.com/user-attachments/assets/592a8630-14c7-4e06-b496-9c0386e4f36c" width="600">
**音声認識サービス**
_✨ 管理パネル ✨_
- OpenAI Whisper
- SenseVoice
![webchat](https://drive.soulter.top/f/vlsA/ezgif-5-fb044b2542.gif)
**音声合成サービス**
_✨ 内蔵 Web Chat、オンラインでボットと対話 ✨_
- OpenAI TTS
- Gemini TTS
- GPT-Sovits-Inference
- GPT-Sovits
- FishAudio
- Edge TTS
- Alibaba Cloud 百炼 TTS
- Azure TTS
- Minimax TTS
- Volcano Engine TTS
</div>
## ❤️ コントリビューション
Issue や Pull Request は大歓迎です!このプロジェクトに変更を送信してください :)
### コントリビュート方法
Issue を確認したり、PR(プルリクエスト)のレビューを手伝うことで貢献できます。どんな Issue や PR への参加も歓迎され、コミュニティ貢献を促進します。もちろん、これらは提案に過ぎず、どんな方法でも貢献できます。新機能の追加については、まず Issue で議論してください。
### 開発環境
AstrBot はコードのフォーマットとチェックに `ruff` を使用しています。
```bash
git clone https://github.com/AstrBotDevs/AstrBot
pip install pre-commit
pre-commit install
```
## ❤️ Special Thanks
AstrBot への貢献をしていただいたすべてのコントリビューターとプラグイン開発者に特別な感謝を ❤️
<a href="https://github.com/AstrBotDevs/AstrBot/graphs/contributors">
<img src="https://contrib.rocks/image?repo=AstrBotDevs/AstrBot" />
</a>
また、このプロジェクトの誕生は以下のオープンソースプロジェクトの助けなしには実現できませんでした:
- [NapNeko/NapCatQQ](https://github.com/NapNeko/NapCatQQ) - 素晴らしい猫猫フレームワーク
## ⭐ Star History
> [!TIP]
> このプロジェクトがあなたの生活や仕事に役立った場合、またはこのプロジェクトの将来の発展に関心がある場合は、プロジェクトに Star を付けてください。これこのオープンソースプロジェクトを維持するためのモチベーションです <3
> このプロジェクトがあなたの生活や仕事に役立ったり、このプロジェクトの今後の発展に関心がある場合は、プロジェクトに Star をください。これこのオープンソースプロジェクトを維持する原動力です <3
<div align="center">
[![Star History Chart](https://api.star-history.com/svg?repos=soulter/astrbot&type=Date)](https://star-history.com/#soulter/astrbot&Date)
[![Star History Chart](https://api.star-history.com/svg?repos=astrbotdevs/astrbot&type=Date)](https://star-history.com/#astrbotdevs/astrbot&Date)
</div>
## スポンサー
[<img src="https://api.gitsponsors.com/api/badge/img?id=575865240" height="20">](https://api.gitsponsors.com/api/badge/link?p=XEpbdGxlitw/RbcwiTX93UMzNK/jgDYC8NiSzamIPMoKvG2lBFmyXhSS/b0hFoWlBBMX2L5X5CxTDsUdyvcIEHTOfnkXz47UNOZvMwyt5CzbYpq0SEzsSV1OJF1cCo90qC/ZyYKYOWedal3MhZ3ikw==)
## 免責事項
1. このプロジェクトは `AGPL-v3` オープンソースライセンスの下で保護されています。
2. WeChat個人アカウントのデプロイメントには [Gewechat](https://github.com/Devo919/Gewechat) サービスを利用しています。AstrBot は Gewechat との接続を保証するだけであり、アカウントのリスク管理に関しては、このプロジェクトの著者は一切の責任を負いません。
3. このプロジェクトを使用する際は、現地の法律および規制を遵守してください。
<!-- ## ✨ ATRI [ベータテスト]
この機能はプラグインとしてロードされます。プラグインリポジトリのアドレス:[astrbot_plugin_atri](https://github.com/Soulter/astrbot_plugin_atri)
1. 《ATRI ~ My Dear Moments》の主人公 ATRI のキャラクターセリフを微調整データセットとして使用した `Qwen1.5-7B-Chat Lora` 微調整モデル。
2. 長期記憶
3. ミームの理解と返信
4. TTS
-->
</details>
_私は、高性能ですから!_


@@ -1,7 +1,19 @@
from astrbot.core.config.astrbot_config import AstrBotConfig
from astrbot import logger
from astrbot.core import html_renderer
from astrbot.core import sp
from astrbot.core import html_renderer, sp
from astrbot.core.agent.tool import FunctionTool, ToolSet
from astrbot.core.agent.tool_executor import BaseFunctionToolExecutor
from astrbot.core.config.astrbot_config import AstrBotConfig
from astrbot.core.star.register import register_agent as agent
from astrbot.core.star.register import register_llm_tool as llm_tool
__all__ = ["AstrBotConfig", "logger", "html_renderer", "llm_tool", "sp"]
__all__ = [
"AstrBotConfig",
"BaseFunctionToolExecutor",
"FunctionTool",
"ToolSet",
"agent",
"html_renderer",
"llm_tool",
"logger",
"sp",
]


@@ -36,7 +36,8 @@ from astrbot.core.star.config import *
# provider
from astrbot.core.provider import Provider, Personality, ProviderMetaData
from astrbot.core.provider import Provider, ProviderMetaData
from astrbot.core.db.po import Personality
# platform
from astrbot.core.platform import (


@@ -1,18 +1,17 @@
from astrbot.core.message.message_event_result import (
MessageEventResult,
MessageChain,
CommandResult,
EventResultType,
MessageChain,
MessageEventResult,
ResultContentType,
)
from astrbot.core.platform import AstrMessageEvent
__all__ = [
"MessageEventResult",
"MessageChain",
"AstrMessageEvent",
"CommandResult",
"EventResultType",
"AstrMessageEvent",
"MessageChain",
"MessageEventResult",
"ResultContentType",
]


@@ -1,49 +1,52 @@
from astrbot.core.star.register import (
register_command as command,
register_command_group as command_group,
register_event_message_type as event_message_type,
register_regex as regex,
register_platform_adapter_type as platform_adapter_type,
register_permission_type as permission_type,
register_custom_filter as custom_filter,
register_on_astrbot_loaded as on_astrbot_loaded,
register_on_llm_request as on_llm_request,
register_on_llm_response as on_llm_response,
register_llm_tool as llm_tool,
register_on_decorating_result as on_decorating_result,
register_after_message_sent as after_message_sent,
)
from astrbot.core.star.filter.event_message_type import (
EventMessageTypeFilter,
EventMessageType,
)
from astrbot.core.star.filter.platform_adapter_type import (
PlatformAdapterTypeFilter,
PlatformAdapterType,
)
from astrbot.core.star.filter.permission import PermissionTypeFilter, PermissionType
from astrbot.core.star.filter.custom_filter import CustomFilter
from astrbot.core.star.filter.event_message_type import (
EventMessageType,
EventMessageTypeFilter,
)
from astrbot.core.star.filter.permission import PermissionType, PermissionTypeFilter
from astrbot.core.star.filter.platform_adapter_type import (
PlatformAdapterType,
PlatformAdapterTypeFilter,
)
from astrbot.core.star.register import register_after_message_sent as after_message_sent
from astrbot.core.star.register import register_command as command
from astrbot.core.star.register import register_command_group as command_group
from astrbot.core.star.register import register_custom_filter as custom_filter
from astrbot.core.star.register import register_event_message_type as event_message_type
from astrbot.core.star.register import register_llm_tool as llm_tool
from astrbot.core.star.register import register_on_astrbot_loaded as on_astrbot_loaded
from astrbot.core.star.register import (
register_on_decorating_result as on_decorating_result,
)
from astrbot.core.star.register import register_on_llm_request as on_llm_request
from astrbot.core.star.register import register_on_llm_response as on_llm_response
from astrbot.core.star.register import register_on_platform_loaded as on_platform_loaded
from astrbot.core.star.register import register_permission_type as permission_type
from astrbot.core.star.register import (
register_platform_adapter_type as platform_adapter_type,
)
from astrbot.core.star.register import register_regex as regex
__all__ = [
"CustomFilter",
"EventMessageType",
"EventMessageTypeFilter",
"PermissionType",
"PermissionTypeFilter",
"PlatformAdapterType",
"PlatformAdapterTypeFilter",
"after_message_sent",
"command",
"command_group",
"event_message_type",
"regex",
"platform_adapter_type",
"permission_type",
"EventMessageTypeFilter",
"EventMessageType",
"PlatformAdapterTypeFilter",
"PlatformAdapterType",
"PermissionTypeFilter",
"CustomFilter",
"custom_filter",
"PermissionType",
"on_astrbot_loaded",
"on_llm_request",
"event_message_type",
"llm_tool",
"on_astrbot_loaded",
"on_decorating_result",
"after_message_sent",
"on_llm_request",
"on_llm_response",
"on_platform_loaded",
"permission_type",
"platform_adapter_type",
"regex",
]


@@ -1,23 +1,22 @@
from astrbot.core.message.components import *
from astrbot.core.platform import (
AstrMessageEvent,
Platform,
AstrBotMessage,
AstrMessageEvent,
Group,
MessageMember,
MessageType,
Platform,
PlatformMetadata,
Group,
)
from astrbot.core.platform.register import register_platform_adapter
from astrbot.core.message.components import *
__all__ = [
"AstrMessageEvent",
"Platform",
"AstrBotMessage",
"AstrMessageEvent",
"Group",
"MessageMember",
"MessageType",
"Platform",
"PlatformMetadata",
"register_platform_adapter",
"Group",
]


@@ -1,17 +1,18 @@
from astrbot.core.provider import Provider, STTProvider, Personality
from astrbot.core.db.po import Personality
from astrbot.core.provider import Provider, STTProvider
from astrbot.core.provider.entities import (
LLMResponse,
ProviderMetaData,
ProviderRequest,
ProviderType,
ProviderMetaData,
LLMResponse,
)
__all__ = [
"Provider",
"STTProvider",
"LLMResponse",
"Personality",
"Provider",
"ProviderMetaData",
"ProviderRequest",
"ProviderType",
"ProviderMetaData",
"LLMResponse",
"STTProvider",
]


@@ -1,8 +1,7 @@
from astrbot.core.star import Context, Star, StarTools
from astrbot.core.star.config import *
from astrbot.core.star.register import (
register_star as register, # 注册插件Star
)
from astrbot.core.star import Context, Star, StarTools
from astrbot.core.star.config import *
__all__ = ["register", "Context", "Star", "StarTools"]
__all__ = ["Context", "Star", "StarTools", "register"]


@@ -1,7 +1,7 @@
from astrbot.core.utils.session_waiter import (
SessionWaiter,
SessionController,
SessionWaiter,
session_waiter,
)
__all__ = ["SessionWaiter", "SessionController", "session_waiter"]
__all__ = ["SessionController", "SessionWaiter", "session_waiter"]


@@ -1 +1 @@
__version__ = "3.5.8"
__version__ = "4.7.4"


@@ -1,11 +1,11 @@
"""
AstrBot CLI入口
"""
"""AstrBot CLI入口"""
import sys
import click
import sys
from . import __version__
from .commands import init, run, plug, conf
from .commands import conf, init, plug, run
logo_tmpl = r"""
___ _______.___________..______ .______ ______ .___________.


@@ -1,6 +1,6 @@
from .cmd_init import init
from .cmd_run import run
from .cmd_plug import plug
from .cmd_conf import conf
from .cmd_init import init
from .cmd_plug import plug
from .cmd_run import run
__all__ = ["init", "run", "plug", "conf"]
__all__ = ["conf", "init", "plug", "run"]


@@ -1,9 +1,12 @@
import json
import click
import hashlib
import json
import zoneinfo
from typing import Any, Callable
from ..utils import get_astrbot_root, check_astrbot_root
from collections.abc import Callable
from typing import Any
import click
from ..utils import check_astrbot_root, get_astrbot_root
def _validate_log_level(value: str) -> str:
@@ -11,7 +14,7 @@ def _validate_log_level(value: str) -> str:
value = value.upper()
if value not in ["DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"]:
raise click.ClickException(
"日志级别必须是 DEBUG/INFO/WARNING/ERROR/CRITICAL 之一"
"日志级别必须是 DEBUG/INFO/WARNING/ERROR/CRITICAL 之一",
)
return value
@@ -73,7 +76,7 @@ def _load_config() -> dict[str, Any]:
root = get_astrbot_root()
if not check_astrbot_root(root):
raise click.ClickException(
f"{root}不是有效的 AstrBot 根目录,如需初始化请使用 astrbot init"
f"{root}不是有效的 AstrBot 根目录,如需初始化请使用 astrbot init",
)
config_path = root / "data" / "cmd_config.json"
@@ -88,7 +91,7 @@ def _load_config() -> dict[str, Any]:
try:
return json.loads(config_path.read_text(encoding="utf-8-sig"))
except json.JSONDecodeError as e:
raise click.ClickException(f"配置文件解析失败: {str(e)}")
raise click.ClickException(f"配置文件解析失败: {e!s}")
def _save_config(config: dict[str, Any]) -> None:
@@ -96,7 +99,8 @@ def _save_config(config: dict[str, Any]) -> None:
config_path = get_astrbot_root() / "data" / "cmd_config.json"
config_path.write_text(
json.dumps(config, ensure_ascii=False, indent=2), encoding="utf-8-sig"
json.dumps(config, ensure_ascii=False, indent=2),
encoding="utf-8-sig",
)
@@ -108,7 +112,7 @@ def _set_nested_item(obj: dict[str, Any], path: str, value: Any) -> None:
obj[part] = {}
elif not isinstance(obj[part], dict):
raise click.ClickException(
f"配置路径冲突: {'.'.join(parts[: parts.index(part) + 1])} 不是字典"
f"配置路径冲突: {'.'.join(parts[: parts.index(part) + 1])} 不是字典",
)
obj = obj[part]
obj[parts[-1]] = value
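# Illustration (hypothetical key): _set_nested_item(cfg, "dashboard.port", 6185) walks the dotted
# path, creating intermediate dicts as needed, so afterwards cfg["dashboard"]["port"] == 6185;
# hitting a non-dict value along the path raises the "配置路径冲突" ClickException above.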
@@ -140,7 +144,6 @@ def conf():
- callback_api_base: 回调接口基址
"""
pass
@conf.command(name="set")
@@ -148,7 +151,7 @@ def conf():
@click.argument("value")
def set_config(key: str, value: str):
"""设置配置项的值"""
if key not in CONFIG_VALIDATORS.keys():
if key not in CONFIG_VALIDATORS:
raise click.ClickException(f"不支持的配置项: {key}")
config = _load_config()
@@ -170,17 +173,17 @@ def set_config(key: str, value: str):
except KeyError:
raise click.ClickException(f"未知的配置项: {key}")
except Exception as e:
raise click.UsageError(f"设置配置失败: {str(e)}")
raise click.UsageError(f"设置配置失败: {e!s}")
@conf.command(name="get")
@click.argument("key", required=False)
def get_config(key: str = None):
def get_config(key: str | None = None):
"""获取配置项的值不提供key则显示所有可配置项"""
config = _load_config()
if key:
if key not in CONFIG_VALIDATORS.keys():
if key not in CONFIG_VALIDATORS:
raise click.ClickException(f"不支持的配置项: {key}")
try:
@@ -191,10 +194,10 @@ def get_config(key: str = None):
except KeyError:
raise click.ClickException(f"未知的配置项: {key}")
except Exception as e:
raise click.UsageError(f"获取配置失败: {str(e)}")
raise click.UsageError(f"获取配置失败: {e!s}")
else:
click.echo("当前配置:")
for key in CONFIG_VALIDATORS.keys():
for key in CONFIG_VALIDATORS:
try:
value = (
"********"


@@ -1,4 +1,5 @@
import asyncio
from pathlib import Path
import click
from filelock import FileLock, Timeout
@@ -6,14 +7,14 @@ from filelock import FileLock, Timeout
from ..utils import check_dashboard, get_astrbot_root
async def initialize_astrbot(astrbot_root) -> None:
async def initialize_astrbot(astrbot_root: Path) -> None:
"""执行 AstrBot 初始化逻辑"""
dot_astrbot = astrbot_root / ".astrbot"
if not dot_astrbot.exists():
click.echo(f"Current Directory: {astrbot_root}")
click.echo(
"如果你确认这是 Astrbot root directory, 你需要在当前目录下创建一个 .astrbot 文件标记该目录为 AstrBot 的数据目录。"
"如果你确认这是 Astrbot root directory, 你需要在当前目录下创建一个 .astrbot 文件标记该目录为 AstrBot 的数据目录。",
)
if click.confirm(
f"请检查当前目录是否正确,确认正确请回车: {astrbot_root}",


@@ -1,31 +1,29 @@
import re
import shutil
from pathlib import Path
import click
import shutil
from ..utils import (
get_git_repo,
build_plug_list,
manage_plugin,
PluginStatus,
build_plug_list,
check_astrbot_root,
get_astrbot_root,
get_git_repo,
manage_plugin,
)
@click.group()
def plug():
"""插件管理"""
pass
def _get_data_path() -> Path:
base = get_astrbot_root()
if not check_astrbot_root(base):
raise click.ClickException(
f"{base}不是有效的 AstrBot 根目录,如需初始化请使用 astrbot init"
f"{base}不是有效的 AstrBot 根目录,如需初始化请使用 astrbot init",
)
return (base / "data").resolve()
@@ -41,7 +39,7 @@ def display_plugins(plugins, title=None, color=None):
desc = p["desc"][:30] + ("..." if len(p["desc"]) > 30 else "")
click.echo(
f"{p['name']:<20} {p['version']:<10} {p['status']:<10} "
f"{p['author']:<15} {desc:<30}"
f"{p['author']:<15} {desc:<30}",
)
@@ -78,7 +76,7 @@ def new(name: str):
f"desc: {desc}\n"
f"version: {version}\n"
f"author: {author}\n"
f"repo: {repo}\n"
f"repo: {repo}\n",
)
# 重写 README.md
@@ -86,7 +84,7 @@ def new(name: str):
f.write(f"# {name}\n\n{desc}\n\n# 支持\n\n[帮助文档](https://astrbot.app)\n")
# 重写 main.py
with open(plug_path / "main.py", "r", encoding="utf-8") as f:
with open(plug_path / "main.py", encoding="utf-8") as f:
content = f.read()
new_content = content.replace(


@@ -1,19 +1,18 @@
import asyncio
import os
import sys
import traceback
from pathlib import Path
import click
import asyncio
import traceback
from filelock import FileLock, Timeout
from ..utils import check_dashboard, check_astrbot_root, get_astrbot_root
from ..utils import check_astrbot_root, check_dashboard, get_astrbot_root
async def run_astrbot(astrbot_root: Path):
"""运行 AstrBot"""
from astrbot.core import logger, LogManager, LogBroker, db_helper
from astrbot.core import LogBroker, LogManager, db_helper, logger
from astrbot.core.initial_loader import InitialLoader
await check_dashboard(astrbot_root / "data")
@@ -38,7 +37,7 @@ def run(reload: bool, port: str) -> None:
if not check_astrbot_root(astrbot_root):
raise click.ClickException(
f"{astrbot_root}不是有效的 AstrBot 根目录,如需初始化请使用 astrbot init"
f"{astrbot_root}不是有效的 AstrBot 根目录,如需初始化请使用 astrbot init",
)
os.environ["ASTRBOT_ROOT"] = str(astrbot_root)


@@ -1,18 +1,18 @@
from .basic import (
get_astrbot_root,
check_astrbot_root,
check_dashboard,
get_astrbot_root,
)
from .plugin import get_git_repo, manage_plugin, build_plug_list, PluginStatus
from .plugin import PluginStatus, build_plug_list, get_git_repo, manage_plugin
from .version_comparator import VersionComparator
__all__ = [
"get_astrbot_root",
"PluginStatus",
"VersionComparator",
"build_plug_list",
"check_astrbot_root",
"check_dashboard",
"get_astrbot_root",
"get_git_repo",
"manage_plugin",
"build_plug_list",
"VersionComparator",
"PluginStatus",
]


@@ -21,8 +21,9 @@ def get_astrbot_root() -> Path:
async def check_dashboard(astrbot_root: Path) -> None:
"""检查是否安装了dashboard"""
from astrbot.core.utils.io import get_dashboard_version, download_dashboard
from astrbot.core.config.default import VERSION
from astrbot.core.utils.io import download_dashboard, get_dashboard_version
from .version_comparator import VersionComparator
try:
@@ -37,7 +38,10 @@ async def check_dashboard(astrbot_root: Path) -> None:
):
click.echo("正在安装管理面板...")
await download_dashboard(
path="data/dashboard.zip", extract_path=str(astrbot_root)
path="data/dashboard.zip",
extract_path=str(astrbot_root),
version=f"v{VERSION}",
latest=False,
)
click.echo("管理面板安装完成")
@@ -45,12 +49,14 @@ async def check_dashboard(astrbot_root: Path) -> None:
if VersionComparator.compare_version(VERSION, dashboard_version) <= 0:
click.echo("管理面板已是最新版本")
return
else:
try:
version = dashboard_version.split("v")[1]
click.echo(f"管理面板版本: {version}")
await download_dashboard(
path="data/dashboard.zip", extract_path=str(astrbot_root)
path="data/dashboard.zip",
extract_path=str(astrbot_root),
version=f"v{VERSION}",
latest=False,
)
except Exception as e:
click.echo(f"下载管理面板失败: {e}")
@@ -59,7 +65,10 @@ async def check_dashboard(astrbot_root: Path) -> None:
click.echo("初始化管理面板目录...")
try:
await download_dashboard(
path=str(astrbot_root / "dashboard.zip"), extract_path=str(astrbot_root)
path=str(astrbot_root / "dashboard.zip"),
extract_path=str(astrbot_root),
version=f"v{VERSION}",
latest=False,
)
click.echo("管理面板初始化完成")
except Exception as e:


@@ -1,14 +1,14 @@
import shutil
import tempfile
import httpx
import yaml
from enum import Enum
from io import BytesIO
from pathlib import Path
from zipfile import ZipFile
import click
import httpx
import yaml
from .version_comparator import VersionComparator
@@ -32,7 +32,8 @@ def get_git_repo(url: str, target_path: Path, proxy: str | None = None):
release_url = f"https://api.github.com/repos/{author}/{repo}/releases"
try:
with httpx.Client(
proxy=proxy if proxy else None, follow_redirects=True
proxy=proxy if proxy else None,
follow_redirects=True,
) as client:
resp = client.get(release_url)
resp.raise_for_status()
@@ -55,7 +56,8 @@ def get_git_repo(url: str, target_path: Path, proxy: str | None = None):
# 下载并解压
with httpx.Client(
proxy=proxy if proxy else None, follow_redirects=True
proxy=proxy if proxy else None,
follow_redirects=True,
) as client:
resp = client.get(download_url)
if (
@@ -89,6 +91,7 @@ def load_yaml_metadata(plugin_dir: Path) -> dict:
Returns:
dict: 包含元数据的字典,如果读取失败则返回空字典
"""
yaml_path = plugin_dir / "metadata.yaml"
if yaml_path.exists():
@@ -107,6 +110,7 @@ def build_plug_list(plugins_dir: Path) -> list:
Returns:
list: 包含插件信息的字典列表
"""
# 获取本地插件信息
result = []
@@ -117,11 +121,15 @@ def build_plug_list(plugins_dir: Path) -> list:
# 从 metadata.yaml 加载元数据
metadata = load_yaml_metadata(plugin_dir)
if "desc" not in metadata and "description" in metadata:
metadata["desc"] = metadata["description"]
# 如果成功加载元数据,添加到结果列表
if metadata and all(
k in metadata for k in ["name", "desc", "version", "author", "repo"]
):
result.append({
result.append(
{
"name": str(metadata.get("name", "")),
"desc": str(metadata.get("desc", "")),
"version": str(metadata.get("version", "")),
@@ -129,7 +137,8 @@ def build_plug_list(plugins_dir: Path) -> list:
"repo": str(metadata.get("repo", "")),
"status": PluginStatus.INSTALLED,
"local_path": str(plugin_dir),
})
},
)
# 获取在线插件列表
online_plugins = []
@@ -139,7 +148,8 @@ def build_plug_list(plugins_dir: Path) -> list:
resp.raise_for_status()
data = resp.json()
for plugin_id, plugin_info in data.items():
online_plugins.append({
online_plugins.append(
{
"name": str(plugin_id),
"desc": str(plugin_info.get("desc", "")),
"version": str(plugin_info.get("version", "")),
@@ -147,7 +157,8 @@ def build_plug_list(plugins_dir: Path) -> list:
"repo": str(plugin_info.get("repo", "")),
"status": PluginStatus.NOT_INSTALLED,
"local_path": None,
})
},
)
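# The plugin registry response is expected to map plugin id to metadata, e.g. (sketch):
# {"astrbot_plugin_example": {"desc": "...", "version": "1.0.0", "author": "...", "repo": "https://github.com/..."}}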
except Exception as e:
click.echo(f"获取在线插件列表失败: {e}", err=True)
@@ -161,7 +172,8 @@ def build_plug_list(plugins_dir: Path) -> list:
)
if (
VersionComparator.compare_version(
local_plugin["version"], online_plugin["version"]
local_plugin["version"],
online_plugin["version"],
)
< 0
):
@@ -179,7 +191,10 @@ def build_plug_list(plugins_dir: Path) -> list:
def manage_plugin(
plugin: dict, plugins_dir: Path, is_update: bool = False, proxy: str | None = None
plugin: dict,
plugins_dir: Path,
is_update: bool = False,
proxy: str | None = None,
) -> None:
"""安装或更新插件
@@ -188,6 +203,7 @@ def manage_plugin(
plugins_dir (Path): 插件目录
is_update (bool, optional): 是否为更新操作. 默认为 False
proxy (str, optional): 代理服务器地址
"""
plugin_name = plugin["name"]
repo_url = plugin["repo"]
@@ -205,26 +221,26 @@ def manage_plugin(
raise click.ClickException(f"插件 {plugin_name} 未安装,无法更新")
# 备份现有插件
if is_update and backup_path.exists():
if is_update and backup_path is not None and backup_path.exists():
shutil.rmtree(backup_path)
if is_update:
if is_update and backup_path is not None:
shutil.copytree(target_path, backup_path)
try:
click.echo(
f"正在从 {repo_url} {'更新' if is_update else '下载'}插件 {plugin_name}..."
f"正在从 {repo_url} {'更新' if is_update else '下载'}插件 {plugin_name}...",
)
get_git_repo(repo_url, target_path, proxy)
# 更新成功,删除备份
if is_update and backup_path.exists():
if is_update and backup_path is not None and backup_path.exists():
shutil.rmtree(backup_path)
click.echo(f"插件 {plugin_name} {'更新' if is_update else '安装'}成功")
except Exception as e:
if target_path.exists():
shutil.rmtree(target_path, ignore_errors=True)
if is_update and backup_path.exists():
if is_update and backup_path is not None and backup_path.exists():
shutil.move(backup_path, target_path)
raise click.ClickException(
f"{'更新' if is_update else '安装'}插件 {plugin_name} 时出错: {e}"
f"{'更新' if is_update else '安装'}插件 {plugin_name} 时出错: {e}",
)


@@ -1,6 +1,4 @@
"""
拷贝自 astrbot.core.utils.version_comparator
"""
"""拷贝自 astrbot.core.utils.version_comparator"""
import re
@@ -42,15 +40,15 @@ class VersionComparator:
for i in range(length):
if v1_parts[i] > v2_parts[i]:
return 1
elif v1_parts[i] < v2_parts[i]:
if v1_parts[i] < v2_parts[i]:
return -1
# 比较预发布标签
if v1_prerelease is None and v2_prerelease is not None:
return 1 # 没有预发布标签的版本高于有预发布标签的版本
elif v1_prerelease is not None and v2_prerelease is None:
if v1_prerelease is not None and v2_prerelease is None:
return -1 # 有预发布标签的版本低于没有预发布标签的版本
elif v1_prerelease is not None and v2_prerelease is not None:
if v1_prerelease is not None and v2_prerelease is not None:
len_pre = max(len(v1_prerelease), len(v2_prerelease))
for i in range(len_pre):
p1 = v1_prerelease[i] if i < len(v1_prerelease) else None
@@ -58,21 +56,21 @@ class VersionComparator:
if p1 is None and p2 is not None:
return -1
elif p1 is not None and p2 is None:
if p1 is not None and p2 is None:
return 1
elif isinstance(p1, int) and isinstance(p2, str):
if isinstance(p1, int) and isinstance(p2, str):
return -1
elif isinstance(p1, str) and isinstance(p2, int):
if isinstance(p1, str) and isinstance(p2, int):
return 1
elif isinstance(p1, int) and isinstance(p2, int):
if isinstance(p1, int) and isinstance(p2, int):
if p1 > p2:
return 1
elif p1 < p2:
if p1 < p2:
return -1
elif isinstance(p1, str) and isinstance(p2, str):
if p1 > p2:
return 1
elif p1 < p2:
if p1 < p2:
return -1
return 0 # 预发布标签完全相同
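# Resulting ordering, assuming numeric segments are parsed as integers (sketch):
#   compare_version("1.2.0", "1.10.0")                -> -1
#   compare_version("2.0.0", "2.0.0-rc.1")            ->  1   # a release outranks its pre-release
#   compare_version("1.0.0-alpha.1", "1.0.0-alpha.2") -> -1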


@@ -1,13 +1,14 @@
import os
import asyncio
from .log import LogManager, LogBroker # noqa
from astrbot.core.utils.t2i.renderer import HtmlRenderer
from astrbot.core.utils.shared_preferences import SharedPreferences
from astrbot.core.utils.pip_installer import PipInstaller
from astrbot.core.db.sqlite import SQLiteDatabase
from astrbot.core.config.default import DB_PATH
from astrbot.core.config import AstrBotConfig
from astrbot.core.config.default import DB_PATH
from astrbot.core.db.sqlite import SQLiteDatabase
from astrbot.core.file_token_service import FileTokenService
from astrbot.core.utils.pip_installer import PipInstaller
from astrbot.core.utils.shared_preferences import SharedPreferences
from astrbot.core.utils.t2i.renderer import HtmlRenderer
from .log import LogBroker, LogManager # noqa
from .utils.astrbot_path import get_astrbot_data_path
# 初始化数据存储文件夹
@@ -21,7 +22,7 @@ html_renderer = HtmlRenderer(t2i_base_url)
logger = LogManager.GetLogger(log_name="astrbot")
db_helper = SQLiteDatabase(DB_PATH)
# 简单的偏好设置存储, 这里后续应该存储到数据库中, 一些部分可以存储到配置中
sp = SharedPreferences()
sp = SharedPreferences(db_helper=db_helper)
# 文件令牌服务
file_token_service = FileTokenService()
pip_installer = PipInstaller(


@@ -0,0 +1,14 @@
from dataclasses import dataclass
from typing import Generic
from .hooks import BaseAgentRunHooks
from .run_context import TContext
from .tool import FunctionTool
@dataclass
class Agent(Generic[TContext]):
name: str
instructions: str | None = None
tools: list[str | FunctionTool] | None = None
run_hooks: BaseAgentRunHooks[TContext] | None = None
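# Usage sketch (names are illustrative): an Agent is a declarative bundle of prompt, tools and hooks;
# string entries in `tools` are presumably resolved against a registered ToolSet by the agent runner.
#   triage = Agent(name="triage", instructions="Route the request to the right tool.", tools=["web_search"])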


@@ -0,0 +1,38 @@
from typing import Generic
from .agent import Agent
from .run_context import TContext
from .tool import FunctionTool
class HandoffTool(FunctionTool, Generic[TContext]):
"""Handoff tool for delegating tasks to another agent."""
def __init__(
self,
agent: Agent[TContext],
parameters: dict | None = None,
**kwargs,
):
self.agent = agent
super().__init__(
name=f"transfer_to_{agent.name}",
parameters=parameters or self.default_parameters(),
description=agent.instructions or self.default_description(agent.name),
**kwargs,
)
def default_parameters(self) -> dict:
return {
"type": "object",
"properties": {
"input": {
"type": "string",
"description": "The input to be handed off to another agent. This should be a clear and concise request or task.",
},
},
}
def default_description(self, agent_name: str | None) -> str:
agent_name = agent_name or "another"
return f"Delegate tasks to {self.name} agent to handle the request."


@@ -0,0 +1,30 @@
from typing import Generic
import mcp
from astrbot.core.agent.tool import FunctionTool
from astrbot.core.provider.entities import LLMResponse
from .run_context import ContextWrapper, TContext
class BaseAgentRunHooks(Generic[TContext]):
async def on_agent_begin(self, run_context: ContextWrapper[TContext]): ...
async def on_tool_start(
self,
run_context: ContextWrapper[TContext],
tool: FunctionTool,
tool_args: dict | None,
): ...
async def on_tool_end(
self,
run_context: ContextWrapper[TContext],
tool: FunctionTool,
tool_args: dict | None,
tool_result: mcp.types.CallToolResult | None,
): ...
async def on_agent_done(
self,
run_context: ContextWrapper[TContext],
llm_response: LLMResponse,
): ...
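# Subclass sketch: override only the callbacks you need, e.g. to trace tool usage
# (assumes `from astrbot import logger`):
#
# class LoggingHooks(BaseAgentRunHooks[dict]):
#     async def on_tool_start(self, run_context, tool, tool_args):
#         logger.debug(f"calling tool {tool.name} with {tool_args}")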


@@ -0,0 +1,385 @@
import asyncio
import logging
from contextlib import AsyncExitStack
from datetime import timedelta
from typing import Generic
from tenacity import (
before_sleep_log,
retry,
retry_if_exception_type,
stop_after_attempt,
wait_exponential,
)
from astrbot import logger
from astrbot.core.agent.run_context import ContextWrapper
from astrbot.core.utils.log_pipe import LogPipe
from .run_context import TContext
from .tool import FunctionTool
try:
import anyio
import mcp
from mcp.client.sse import sse_client
except (ModuleNotFoundError, ImportError):
logger.warning(
"Warning: Missing 'mcp' dependency, MCP services will be unavailable."
)
try:
from mcp.client.streamable_http import streamablehttp_client
except (ModuleNotFoundError, ImportError):
logger.warning(
"Warning: Missing 'mcp' dependency or MCP library version too old, Streamable HTTP connection unavailable.",
)
def _prepare_config(config: dict) -> dict:
"""Prepare configuration, handle nested format"""
if config.get("mcpServers"):
first_key = next(iter(config["mcpServers"]))
config = config["mcpServers"][first_key]
config.pop("active", None)
return config
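# Example (sketch): {"mcpServers": {"weather": {"url": "http://localhost:3001/sse", "active": True}}}
# collapses to {"url": "http://localhost:3001/sse"}; the "active" flag, which is not part of the
# server's own config, is dropped.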
async def _quick_test_mcp_connection(config: dict) -> tuple[bool, str]:
"""Quick test MCP server connectivity"""
import aiohttp
cfg = _prepare_config(config.copy())
url = cfg["url"]
headers = cfg.get("headers", {})
timeout = cfg.get("timeout", 10)
try:
if "transport" in cfg:
transport_type = cfg["transport"]
elif "type" in cfg:
transport_type = cfg["type"]
else:
raise Exception("MCP connection config missing transport or type field")
async with aiohttp.ClientSession() as session:
if transport_type == "streamable_http":
test_payload = {
"jsonrpc": "2.0",
"method": "initialize",
"id": 0,
"params": {
"protocolVersion": "2024-11-05",
"capabilities": {},
"clientInfo": {"name": "test-client", "version": "1.2.3"},
},
}
async with session.post(
url,
headers={
**headers,
"Content-Type": "application/json",
"Accept": "application/json, text/event-stream",
},
json=test_payload,
timeout=aiohttp.ClientTimeout(total=timeout),
) as response:
if response.status == 200:
return True, ""
return False, f"HTTP {response.status}: {response.reason}"
else:
async with session.get(
url,
headers={
**headers,
"Accept": "application/json, text/event-stream",
},
timeout=aiohttp.ClientTimeout(total=timeout),
) as response:
if response.status == 200:
return True, ""
return False, f"HTTP {response.status}: {response.reason}"
except asyncio.TimeoutError:
return False, f"Connection timeout: {timeout} seconds"
except Exception as e:
return False, f"{e!s}"
class MCPClient:
def __init__(self):
# Initialize session and client objects
self.session: mcp.ClientSession | None = None
self.exit_stack = AsyncExitStack()
self._old_exit_stacks: list[AsyncExitStack] = [] # Track old stacks for cleanup
self.name: str | None = None
self.active: bool = True
self.tools: list[mcp.Tool] = []
self.server_errlogs: list[str] = []
self.running_event = asyncio.Event()
# Store connection config for reconnection
self._mcp_server_config: dict | None = None
self._server_name: str | None = None
self._reconnect_lock = asyncio.Lock() # Lock for thread-safe reconnection
self._reconnecting: bool = False # For logging and debugging
async def connect_to_server(self, mcp_server_config: dict, name: str):
"""Connect to MCP server
If `url` parameter exists:
1. When transport is specified as `streamable_http`, use Streamable HTTP connection.
2. When transport is specified as `sse`, use SSE connection.
3. If not specified, default to SSE connection to MCP service.
Args:
mcp_server_config (dict): Configuration for the MCP server. See https://modelcontextprotocol.io/quickstart/server
"""
# Store config for reconnection
self._mcp_server_config = mcp_server_config
self._server_name = name
cfg = _prepare_config(mcp_server_config.copy())
def logging_callback(msg: str):
# Handle MCP service error logs
print(f"MCP Server {name} Error: {msg}")
self.server_errlogs.append(msg)
if "url" in cfg:
success, error_msg = await _quick_test_mcp_connection(cfg)
if not success:
raise Exception(error_msg)
if "transport" in cfg:
transport_type = cfg["transport"]
elif "type" in cfg:
transport_type = cfg["type"]
else:
raise Exception("MCP connection config missing transport or type field")
if transport_type != "streamable_http":
# SSE transport method
self._streams_context = sse_client(
url=cfg["url"],
headers=cfg.get("headers", {}),
timeout=cfg.get("timeout", 5),
sse_read_timeout=cfg.get("sse_read_timeout", 60 * 5),
)
streams = await self.exit_stack.enter_async_context(
self._streams_context,
)
# Create a new client session
read_timeout = timedelta(seconds=cfg.get("session_read_timeout", 60))
self.session = await self.exit_stack.enter_async_context(
mcp.ClientSession(
*streams,
read_timeout_seconds=read_timeout,
logging_callback=logging_callback, # type: ignore
),
)
else:
timeout = timedelta(seconds=cfg.get("timeout", 30))
sse_read_timeout = timedelta(
seconds=cfg.get("sse_read_timeout", 60 * 5),
)
self._streams_context = streamablehttp_client(
url=cfg["url"],
headers=cfg.get("headers", {}),
timeout=timeout,
sse_read_timeout=sse_read_timeout,
terminate_on_close=cfg.get("terminate_on_close", True),
)
read_s, write_s, _ = await self.exit_stack.enter_async_context(
self._streams_context,
)
# Create a new client session
read_timeout = timedelta(seconds=cfg.get("session_read_timeout", 60))
self.session = await self.exit_stack.enter_async_context(
mcp.ClientSession(
read_stream=read_s,
write_stream=write_s,
read_timeout_seconds=read_timeout,
logging_callback=logging_callback, # type: ignore
),
)
else:
server_params = mcp.StdioServerParameters(
**cfg,
)
def callback(msg: str):
# Handle MCP service error logs
self.server_errlogs.append(msg)
stdio_transport = await self.exit_stack.enter_async_context(
mcp.stdio_client(
server_params,
errlog=LogPipe(
level=logging.ERROR,
logger=logger,
identifier=f"MCPServer-{name}",
callback=callback,
), # type: ignore
),
)
# Create a new client session
self.session = await self.exit_stack.enter_async_context(
mcp.ClientSession(*stdio_transport),
)
await self.session.initialize()
async def list_tools_and_save(self) -> mcp.ListToolsResult:
"""List all tools from the server and save them to self.tools"""
if not self.session:
raise Exception("MCP Client is not initialized")
response = await self.session.list_tools()
self.tools = response.tools
return response
async def _reconnect(self) -> None:
"""Reconnect to the MCP server using the stored configuration.
Uses asyncio.Lock to ensure thread-safe reconnection in concurrent environments.
Raises:
Exception: raised when reconnection fails
"""
async with self._reconnect_lock:
# Check if already reconnecting (useful for logging)
if self._reconnecting:
logger.debug(
f"MCP Client {self._server_name} is already reconnecting, skipping"
)
return
if not self._mcp_server_config or not self._server_name:
raise Exception("Cannot reconnect: missing connection configuration")
self._reconnecting = True
try:
logger.info(
f"Attempting to reconnect to MCP server {self._server_name}..."
)
# Save old exit_stack for later cleanup (don't close it now to avoid cancel scope issues)
if self.exit_stack:
self._old_exit_stacks.append(self.exit_stack)
# Mark old session as invalid
self.session = None
# Create new exit stack for new connection
self.exit_stack = AsyncExitStack()
# Reconnect using stored config
await self.connect_to_server(self._mcp_server_config, self._server_name)
await self.list_tools_and_save()
logger.info(
f"Successfully reconnected to MCP server {self._server_name}"
)
except Exception as e:
logger.error(
f"Failed to reconnect to MCP server {self._server_name}: {e}"
)
raise
finally:
self._reconnecting = False
async def call_tool_with_reconnect(
self,
tool_name: str,
arguments: dict,
read_timeout_seconds: timedelta,
) -> mcp.types.CallToolResult:
"""Call MCP tool with automatic reconnection on failure, max 2 retries.
Args:
tool_name: tool name
arguments: tool arguments
read_timeout_seconds: read timeout
Returns:
MCP tool call result
Raises:
ValueError: MCP session is not available
anyio.ClosedResourceError: raised after reconnection failure
"""
@retry(
retry=retry_if_exception_type(anyio.ClosedResourceError),
stop=stop_after_attempt(2),
wait=wait_exponential(multiplier=1, min=1, max=3),
before_sleep=before_sleep_log(logger, logging.WARNING),
reraise=True,
)
async def _call_with_retry():
if not self.session:
raise ValueError("MCP session is not available for MCP function tools.")
try:
return await self.session.call_tool(
name=tool_name,
arguments=arguments,
read_timeout_seconds=read_timeout_seconds,
)
except anyio.ClosedResourceError:
logger.warning(
f"MCP tool {tool_name} call failed (ClosedResourceError), attempting to reconnect..."
)
# Attempt to reconnect
await self._reconnect()
# Reraise the exception to trigger tenacity retry
raise
return await _call_with_retry()
async def cleanup(self):
"""Clean up resources including old exit stacks from reconnections"""
# Close current exit stack
try:
await self.exit_stack.aclose()
except Exception as e:
logger.debug(f"Error closing current exit stack: {e}")
# Don't close old exit stacks as they may be in different task contexts
# They will be garbage collected naturally
# Just clear the list to release references
self._old_exit_stacks.clear()
# Set running_event first to unblock any waiting tasks
self.running_event.set()
class MCPTool(FunctionTool, Generic[TContext]):
"""A function tool that calls an MCP service."""
def __init__(
self, mcp_tool: mcp.Tool, mcp_client: MCPClient, mcp_server_name: str, **kwargs
):
super().__init__(
name=mcp_tool.name,
description=mcp_tool.description or "",
parameters=mcp_tool.inputSchema,
)
self.mcp_tool = mcp_tool
self.mcp_client = mcp_client
self.mcp_server_name = mcp_server_name
async def call(
self, context: ContextWrapper[TContext], **kwargs
) -> mcp.types.CallToolResult:
return await self.mcp_client.call_tool_with_reconnect(
tool_name=self.mcp_tool.name,
arguments=kwargs,
read_timeout_seconds=timedelta(seconds=context.tool_call_timeout),
)
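The `connect_to_server` docstring above describes how the transport is chosen from the config. A minimal sketch of the three config shapes the code reads; the URLs and the stdio command below are illustrative placeholders, not values from this repository:

# SSE: used when "url" is present and transport/type is anything other than "streamable_http"
sse_cfg = {"url": "http://localhost:8000/sse", "transport": "sse", "headers": {}, "timeout": 5}
# Streamable HTTP: used when transport/type is "streamable_http"
http_cfg = {"url": "http://localhost:8000/mcp", "type": "streamable_http", "session_read_timeout": 60}
# stdio: any config without "url" is passed straight to mcp.StdioServerParameters
stdio_cfg = {"command": "python", "args": ["my_mcp_server.py"]}
# A nested {"mcpServers": {...}} wrapper is also accepted; _prepare_config() unwraps its first entry.

async def demo() -> None:
    client = MCPClient()
    await client.connect_to_server(sse_cfg, name="demo")
    await client.list_tools_and_save()
    print([t.name for t in client.tools])
    await client.cleanup()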


@@ -0,0 +1,192 @@
# Inspired by MoonshotAI/kosong, credits to MoonshotAI/kosong authors for the original implementation.
# License: Apache License 2.0
from typing import Any, ClassVar, Literal, cast
from pydantic import BaseModel, GetCoreSchemaHandler, model_validator
from pydantic_core import core_schema
class ContentPart(BaseModel):
"""A part of the content in a message."""
__content_part_registry: ClassVar[dict[str, type["ContentPart"]]] = {}
type: str
def __init_subclass__(cls, **kwargs: Any) -> None:
super().__init_subclass__(**kwargs)
invalid_subclass_error_msg = f"ContentPart subclass {cls.__name__} must have a `type` field of type `str`"
type_value = getattr(cls, "type", None)
if type_value is None or not isinstance(type_value, str):
raise ValueError(invalid_subclass_error_msg)
cls.__content_part_registry[type_value] = cls
@classmethod
def __get_pydantic_core_schema__(
cls, source_type: Any, handler: GetCoreSchemaHandler
) -> core_schema.CoreSchema:
# If we're dealing with the base ContentPart class, use custom validation
if cls.__name__ == "ContentPart":
def validate_content_part(value: Any) -> Any:
# if it's already an instance of a ContentPart subclass, return it
if hasattr(value, "__class__") and issubclass(value.__class__, cls):
return value
# if it's a dict with a type field, dispatch to the appropriate subclass
if isinstance(value, dict) and "type" in value:
type_value: Any | None = cast(dict[str, Any], value).get("type")
if not isinstance(type_value, str):
raise ValueError(f"Cannot validate {value} as ContentPart")
target_class = cls.__content_part_registry[type_value]
return target_class.model_validate(value)
raise ValueError(f"Cannot validate {value} as ContentPart")
return core_schema.no_info_plain_validator_function(validate_content_part)
# for subclasses, use the default schema
return handler(source_type)
class TextPart(ContentPart):
"""
>>> TextPart(text="Hello, world!").model_dump()
{'type': 'text', 'text': 'Hello, world!'}
"""
type: str = "text"
text: str
class ImageURLPart(ContentPart):
"""
>>> ImageURLPart(image_url="http://example.com/image.jpg").model_dump()
{'type': 'image_url', 'image_url': 'http://example.com/image.jpg'}
"""
class ImageURL(BaseModel):
url: str
"""The URL of the image, can be data URI scheme like `data:image/png;base64,...`."""
id: str | None = None
"""The ID of the image, to allow LLMs to distinguish different images."""
type: str = "image_url"
image_url: ImageURL
class AudioURLPart(ContentPart):
"""
>>> AudioURLPart(audio_url=AudioURLPart.AudioURL(url="https://example.com/audio.mp3")).model_dump()
{'type': 'audio_url', 'audio_url': {'url': 'https://example.com/audio.mp3', 'id': None}}
"""
class AudioURL(BaseModel):
url: str
"""The URL of the audio, can be data URI scheme like `data:audio/aac;base64,...`."""
id: str | None = None
"""The ID of the audio, to allow LLMs to distinguish different audios."""
type: str = "audio_url"
audio_url: AudioURL
class ToolCall(BaseModel):
"""
A tool call requested by the assistant.
>>> ToolCall(
... id="123",
... function=ToolCall.FunctionBody(
... name="function",
... arguments="{}"
... ),
... ).model_dump()
{'type': 'function', 'id': '123', 'function': {'name': 'function', 'arguments': '{}'}}
"""
class FunctionBody(BaseModel):
name: str
arguments: str | None
type: Literal["function"] = "function"
id: str
"""The ID of the tool call."""
function: FunctionBody
"""The function body of the tool call."""
extra_content: dict[str, Any] | None = None
"""Extra metadata for the tool call."""
def model_dump(self, **kwargs: Any) -> dict[str, Any]:
if self.extra_content is None:
kwargs.setdefault("exclude", set()).add("extra_content")
return super().model_dump(**kwargs)
class ToolCallPart(BaseModel):
"""A part of the tool call."""
arguments_part: str | None = None
"""A part of the arguments of the tool call."""
class Message(BaseModel):
"""A message in a conversation."""
role: Literal[
"system",
"user",
"assistant",
"tool",
]
content: str | list[ContentPart] | None = None
"""The content of the message."""
tool_calls: list[ToolCall] | list[dict] | None = None
"""The tool calls of the message."""
tool_call_id: str | None = None
"""The ID of the tool call."""
@model_validator(mode="after")
def check_content_required(self):
# assistant + tool_calls is not None: allow content to be None
if self.role == "assistant" and self.tool_calls is not None:
return self
# all other cases: content is required
if self.content is None:
raise ValueError(
"content is required unless role='assistant' and tool_calls is not None"
)
return self
class AssistantMessageSegment(Message):
"""A message segment from the assistant."""
role: Literal["assistant"] = "assistant"
class ToolCallMessageSegment(Message):
"""A message segment representing a tool call."""
role: Literal["tool"] = "tool"
class UserMessageSegment(Message):
"""A message segment from the user."""
role: Literal["user"] = "user"
class SystemMessageSegment(Message):
"""A message segment from the system."""
role: Literal["system"] = "system"
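A brief sketch of how the `check_content_required` validator and the ContentPart dispatch above behave (values are illustrative):

from pydantic import ValidationError

# An assistant message may omit content as long as it carries tool_calls.
Message(
    role="assistant",
    content=None,
    tool_calls=[ToolCall(id="1", function=ToolCall.FunctionBody(name="lookup", arguments="{}"))],
)

# Every other combination must provide content.
try:
    Message(role="user", content=None)
except ValidationError as e:
    print(e)

# Dict content parts are dispatched to the registered subclass via their "type" field.
msg = Message(role="user", content=[{"type": "text", "text": "hi"}])
assert isinstance(msg.content[0], TextPart)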


@@ -0,0 +1,14 @@
import typing as T
from dataclasses import dataclass
from astrbot.core.message.message_event_result import MessageChain
class AgentResponseData(T.TypedDict):
chain: MessageChain
@dataclass
class AgentResponse:
type: str
data: AgentResponseData
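AgentResponse is a thin envelope pairing a type tag with a MessageChain payload; the runners later in this diff emit tags such as "streaming_delta", "llm_result", "tool_call", "tool_call_result", and "err". A minimal construction sketch:

resp = AgentResponse(
    type="llm_result",
    data=AgentResponseData(chain=MessageChain().message("hello")),
)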


@@ -0,0 +1,22 @@
from typing import Any, Generic
from pydantic import Field
from pydantic.dataclasses import dataclass
from typing_extensions import TypeVar
from .message import Message
TContext = TypeVar("TContext", default=Any)
@dataclass(config={"arbitrary_types_allowed": True})
class ContextWrapper(Generic[TContext]):
"""A context for running an agent, which can be used to pass additional data or state."""
context: TContext
messages: list[Message] = Field(default_factory=list)
"""This field stores the llm message context for the agent run, agent runners will maintain this field automatically."""
tool_call_timeout: int = 60 # Default tool call timeout in seconds
NoContext = ContextWrapper[None]
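A minimal usage sketch: the wrapper bundles a caller-supplied context object with the message history that runners maintain.

ctx: NoContext = ContextWrapper(context=None, tool_call_timeout=30)
ctx.messages.append(Message(role="user", content="hi"))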


@@ -0,0 +1,3 @@
from .base import BaseAgentRunner
__all__ = ["BaseAgentRunner"]


@@ -0,0 +1,65 @@
import abc
import typing as T
from enum import Enum, auto
from astrbot import logger
from astrbot.core.provider.entities import LLMResponse
from ..hooks import BaseAgentRunHooks
from ..response import AgentResponse
from ..run_context import ContextWrapper, TContext
class AgentState(Enum):
"""Defines the state of the agent."""
IDLE = auto() # Initial state
RUNNING = auto() # Currently processing
DONE = auto() # Completed
ERROR = auto() # Error state
class BaseAgentRunner(T.Generic[TContext]):
@abc.abstractmethod
async def reset(
self,
run_context: ContextWrapper[TContext],
agent_hooks: BaseAgentRunHooks[TContext],
**kwargs: T.Any,
) -> None:
"""Reset the agent to its initial state.
This method should be called before starting a new run.
"""
...
@abc.abstractmethod
async def step(self) -> T.AsyncGenerator[AgentResponse, None]:
"""Process a single step of the agent."""
...
@abc.abstractmethod
async def step_until_done(
self, max_step: int
) -> T.AsyncGenerator[AgentResponse, None]:
"""Process steps until the agent is done."""
...
@abc.abstractmethod
def done(self) -> bool:
"""Check if the agent has completed its task.
Returns True if the agent is done, False otherwise.
"""
...
@abc.abstractmethod
def get_final_llm_resp(self) -> LLMResponse | None:
"""Get the final observation from the agent.
This method should be called after the agent is done.
"""
...
def _transition_state(self, new_state: AgentState) -> None:
"""Transition the agent state."""
if self._state != new_state:
logger.debug(f"Agent state transition: {self._state} -> {new_state}")
self._state = new_state
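A minimal sketch of how a concrete runner is driven through this interface; constructing the runner and calling its `reset()` with the required arguments is assumed to happen elsewhere in the host application:

async def drive(runner: BaseAgentRunner, max_step: int = 30) -> LLMResponse | None:
    # reset(...) must already have been called on the runner.
    async for resp in runner.step_until_done(max_step):
        logger.debug(f"agent response: {resp.type}")
    return runner.get_final_llm_resp()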


@@ -0,0 +1,367 @@
import base64
import json
import sys
import typing as T
import astrbot.core.message.components as Comp
from astrbot import logger
from astrbot.core import sp
from astrbot.core.message.message_event_result import MessageChain
from astrbot.core.provider.entities import (
LLMResponse,
ProviderRequest,
)
from ...hooks import BaseAgentRunHooks
from ...response import AgentResponseData
from ...run_context import ContextWrapper, TContext
from ..base import AgentResponse, AgentState, BaseAgentRunner
from .coze_api_client import CozeAPIClient
if sys.version_info >= (3, 12):
from typing import override
else:
from typing_extensions import override
class CozeAgentRunner(BaseAgentRunner[TContext]):
"""Coze Agent Runner"""
@override
async def reset(
self,
request: ProviderRequest,
run_context: ContextWrapper[TContext],
agent_hooks: BaseAgentRunHooks[TContext],
provider_config: dict,
**kwargs: T.Any,
) -> None:
self.req = request
self.streaming = kwargs.get("streaming", False)
self.final_llm_resp = None
self._state = AgentState.IDLE
self.agent_hooks = agent_hooks
self.run_context = run_context
self.api_key = provider_config.get("coze_api_key", "")
if not self.api_key:
raise Exception("Coze API Key 不能为空。")
self.bot_id = provider_config.get("bot_id", "")
if not self.bot_id:
raise Exception("Coze Bot ID 不能为空。")
self.api_base: str = provider_config.get("coze_api_base", "https://api.coze.cn")
if not isinstance(self.api_base, str) or not self.api_base.startswith(
("http://", "https://"),
):
raise Exception(
"Coze API Base URL 格式不正确,必须以 http:// 或 https:// 开头。",
)
self.timeout = provider_config.get("timeout", 120)
if isinstance(self.timeout, str):
self.timeout = int(self.timeout)
self.auto_save_history = provider_config.get("auto_save_history", True)
# Create the API client
self.api_client = CozeAPIClient(api_key=self.api_key, api_base=self.api_base)
# Session-related caches
self.file_id_cache: dict[str, dict[str, str]] = {}
@override
async def step(self):
"""
Execute a single step of the Coze Agent
"""
if not self.req:
raise ValueError("Request is not set. Please call reset() first.")
if self._state == AgentState.IDLE:
try:
await self.agent_hooks.on_agent_begin(self.run_context)
except Exception as e:
logger.error(f"Error in on_agent_begin hook: {e}", exc_info=True)
# Start processing and transition to the running state
self._transition_state(AgentState.RUNNING)
try:
# Execute the Coze request and process the results
async for response in self._execute_coze_request():
yield response
except Exception as e:
logger.error(f"Coze 请求失败:{str(e)}")
self._transition_state(AgentState.ERROR)
self.final_llm_resp = LLMResponse(
role="err", completion_text=f"Coze 请求失败:{str(e)}"
)
yield AgentResponse(
type="err",
data=AgentResponseData(
chain=MessageChain().message(f"Coze 请求失败:{str(e)}")
),
)
finally:
await self.api_client.close()
@override
async def step_until_done(
self, max_step: int = 30
) -> T.AsyncGenerator[AgentResponse, None]:
while not self.done():
async for resp in self.step():
yield resp
async def _execute_coze_request(self):
"""执行 Coze 请求的核心逻辑"""
prompt = self.req.prompt or ""
session_id = self.req.session_id or "unknown"
image_urls = self.req.image_urls or []
contexts = self.req.contexts or []
system_prompt = self.req.system_prompt
# User ID parameter
user_id = session_id
# Get or create the conversation ID
conversation_id = await sp.get_async(
scope="umo",
scope_id=user_id,
key="coze_conversation_id",
default="",
)
# Build the messages
additional_messages = []
if system_prompt:
if not self.auto_save_history or not conversation_id:
additional_messages.append(
{
"role": "system",
"content": system_prompt,
"content_type": "text",
},
)
# Process historical context
if not self.auto_save_history and contexts:
for ctx in contexts:
if isinstance(ctx, dict) and "role" in ctx and "content" in ctx:
# Process images in the context
content = ctx["content"]
if isinstance(content, list):
# Multimodal content; images need to be processed
processed_content = []
for item in content:
if isinstance(item, dict):
if item.get("type") == "text":
processed_content.append(item)
elif item.get("type") == "image_url":
# Handle image upload
try:
image_data = item.get("image_url", {})
url = image_data.get("url", "")
if url:
file_id = (
await self._download_and_upload_image(
url, session_id
)
)
processed_content.append(
{
"type": "file",
"file_id": file_id,
"file_url": url,
}
)
except Exception as e:
logger.warning(f"处理上下文图片失败: {e}")
continue
if processed_content:
additional_messages.append(
{
"role": ctx["role"],
"content": processed_content,
"content_type": "object_string",
}
)
else:
# Plain text content
additional_messages.append(
{
"role": ctx["role"],
"content": content,
"content_type": "text",
}
)
# Build the current message
if prompt or image_urls:
if image_urls:
# Multimodal content
object_string_content = []
if prompt:
object_string_content.append({"type": "text", "text": prompt})
for url in image_urls:
# the url is a base64 string
try:
image_data = base64.b64decode(url)
file_id = await self.api_client.upload_file(image_data)
object_string_content.append(
{
"type": "image",
"file_id": file_id,
}
)
except Exception as e:
logger.warning(f"处理图片失败 {url}: {e}")
continue
if object_string_content:
content = json.dumps(object_string_content, ensure_ascii=False)
additional_messages.append(
{
"role": "user",
"content": content,
"content_type": "object_string",
}
)
elif prompt:
# Plain text
additional_messages.append(
{
"role": "user",
"content": prompt,
"content_type": "text",
},
)
# Execute the Coze API request
accumulated_content = ""
message_started = False
async for chunk in self.api_client.chat_messages(
bot_id=self.bot_id,
user_id=user_id,
additional_messages=additional_messages,
conversation_id=conversation_id,
auto_save_history=self.auto_save_history,
stream=True,
timeout=self.timeout,
):
event_type = chunk.get("event")
data = chunk.get("data", {})
if event_type == "conversation.chat.created":
if isinstance(data, dict) and "conversation_id" in data:
await sp.put_async(
scope="umo",
scope_id=user_id,
key="coze_conversation_id",
value=data["conversation_id"],
)
if event_type == "conversation.message.delta":
# Incremental message
content = data.get("content", "")
if not content and "delta" in data:
content = data["delta"].get("content", "")
if not content and "text" in data:
content = data.get("text", "")
if content:
accumulated_content += content
message_started = True
# For streaming responses, send incremental data
if self.streaming:
yield AgentResponse(
type="streaming_delta",
data=AgentResponseData(
chain=MessageChain().message(content)
),
)
elif event_type == "conversation.message.completed":
# Message completed
logger.debug("Coze message completed")
message_started = True
elif event_type == "conversation.chat.completed":
# Chat completed
logger.debug("Coze chat completed")
break
elif event_type == "error":
# Error handling
error_msg = data.get("msg", "未知错误")
error_code = data.get("code", "UNKNOWN")
logger.error(f"Coze 出现错误: {error_code} - {error_msg}")
raise Exception(f"Coze 出现错误: {error_code} - {error_msg}")
if not message_started and not accumulated_content:
logger.warning("Coze 未返回任何内容")
accumulated_content = ""
# Build the final response
chain = MessageChain(chain=[Comp.Plain(accumulated_content)])
self.final_llm_resp = LLMResponse(role="assistant", result_chain=chain)
self._transition_state(AgentState.DONE)
try:
await self.agent_hooks.on_agent_done(self.run_context, self.final_llm_resp)
except Exception as e:
logger.error(f"Error in on_agent_done hook: {e}", exc_info=True)
# Yield the final result
yield AgentResponse(
type="llm_result",
data=AgentResponseData(chain=chain),
)
async def _download_and_upload_image(
self,
image_url: str,
session_id: str | None = None,
) -> str:
"""下载图片并上传到 Coze返回 file_id"""
import hashlib
# Hash the URL to enable caching
cache_key = hashlib.md5(image_url.encode("utf-8")).hexdigest()
if session_id:
if session_id not in self.file_id_cache:
self.file_id_cache[session_id] = {}
if cache_key in self.file_id_cache[session_id]:
file_id = self.file_id_cache[session_id][cache_key]
logger.debug(f"[Coze] 使用缓存的 file_id: {file_id}")
return file_id
try:
image_data = await self.api_client.download_image(image_url)
file_id = await self.api_client.upload_file(image_data)
if session_id:
self.file_id_cache[session_id][cache_key] = file_id
logger.debug(f"[Coze] 图片上传成功并缓存file_id: {file_id}")
return file_id
except Exception as e:
logger.error(f"处理图片失败 {image_url}: {e!s}")
raise Exception(f"处理图片失败: {e!s}")
@override
def done(self) -> bool:
"""检查 Agent 是否已完成工作"""
return self._state in (AgentState.DONE, AgentState.ERROR)
@override
def get_final_llm_resp(self) -> LLMResponse | None:
return self.final_llm_resp
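For reference, a sketch of the provider_config keys read by `reset()` above; all values are placeholders:

coze_provider_config = {
    "coze_api_key": "pat_xxx",               # required
    "bot_id": "1234567890",                  # required
    "coze_api_base": "https://api.coze.cn",  # optional; must start with http:// or https://
    "timeout": 120,                          # optional, seconds
    "auto_save_history": True,               # optional
}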


@@ -0,0 +1,324 @@
import asyncio
import io
import json
from collections.abc import AsyncGenerator
from typing import Any
import aiohttp
from astrbot.core import logger
class CozeAPIClient:
def __init__(self, api_key: str, api_base: str = "https://api.coze.cn"):
self.api_key = api_key
self.api_base = api_base
self.session = None
async def _ensure_session(self):
"""确保HTTP session存在"""
if self.session is None:
connector = aiohttp.TCPConnector(
ssl=False if self.api_base.startswith("http://") else True,
limit=100,
limit_per_host=30,
keepalive_timeout=30,
enable_cleanup_closed=True,
)
timeout = aiohttp.ClientTimeout(
total=120,  # Default timeout
connect=30,
sock_read=120,
)
headers = {
"Authorization": f"Bearer {self.api_key}",
"Accept": "text/event-stream",
}
self.session = aiohttp.ClientSession(
headers=headers,
timeout=timeout,
connector=connector,
)
return self.session
async def upload_file(
self,
file_data: bytes,
) -> str:
"""上传文件到 Coze 并返回 file_id
Args:
file_data (bytes): 文件的二进制数据
Returns:
str: 上传成功后返回的 file_id
"""
session = await self._ensure_session()
url = f"{self.api_base}/v1/files/upload"
try:
file_io = io.BytesIO(file_data)
async with session.post(
url,
data={
"file": file_io,
},
timeout=aiohttp.ClientTimeout(total=60),
) as response:
if response.status == 401:
raise Exception("Coze API 认证失败,请检查 API Key 是否正确")
response_text = await response.text()
logger.debug(
f"文件上传响应状态: {response.status}, 内容: {response_text}",
)
if response.status != 200:
raise Exception(
f"文件上传失败,状态码: {response.status}, 响应: {response_text}",
)
try:
result = await response.json()
except json.JSONDecodeError:
raise Exception(f"文件上传响应解析失败: {response_text}")
if result.get("code") != 0:
raise Exception(f"文件上传失败: {result.get('msg', '未知错误')}")
file_id = result["data"]["id"]
logger.debug(f"[Coze] 图片上传成功file_id: {file_id}")
return file_id
except asyncio.TimeoutError:
logger.error("文件上传超时")
raise Exception("文件上传超时")
except Exception as e:
logger.error(f"文件上传失败: {e!s}")
raise Exception(f"文件上传失败: {e!s}")
async def download_image(self, image_url: str) -> bytes:
"""下载图片并返回字节数据
Args:
image_url (str): 图片的URL
Returns:
bytes: 图片的二进制数据
"""
session = await self._ensure_session()
try:
async with session.get(image_url) as response:
if response.status != 200:
raise Exception(f"下载图片失败,状态码: {response.status}")
image_data = await response.read()
return image_data
except Exception as e:
logger.error(f"下载图片失败 {image_url}: {e!s}")
raise Exception(f"下载图片失败: {e!s}")
async def chat_messages(
self,
bot_id: str,
user_id: str,
additional_messages: list[dict] | None = None,
conversation_id: str | None = None,
auto_save_history: bool = True,
stream: bool = True,
timeout: float = 120,
) -> AsyncGenerator[dict[str, Any], None]:
"""发送聊天消息并返回流式响应
Args:
bot_id: Bot ID
user_id: 用户ID
additional_messages: 额外消息列表
conversation_id: 会话ID
auto_save_history: 是否自动保存历史
stream: 是否流式响应
timeout: 超时时间
"""
session = await self._ensure_session()
url = f"{self.api_base}/v3/chat"
payload = {
"bot_id": bot_id,
"user_id": user_id,
"stream": stream,
"auto_save_history": auto_save_history,
}
if additional_messages:
payload["additional_messages"] = additional_messages
params = {}
if conversation_id:
params["conversation_id"] = conversation_id
logger.debug(f"Coze chat_messages payload: {payload}, params: {params}")
try:
async with session.post(
url,
json=payload,
params=params,
timeout=aiohttp.ClientTimeout(total=timeout),
) as response:
if response.status == 401:
raise Exception("Coze API 认证失败,请检查 API Key 是否正确")
if response.status != 200:
raise Exception(f"Coze API 流式请求失败,状态码: {response.status}")
# SSE
buffer = ""
event_type = None
event_data = None
async for chunk in response.content:
if chunk:
buffer += chunk.decode("utf-8", errors="ignore")
lines = buffer.split("\n")
buffer = lines[-1]
for line in lines[:-1]:
line = line.strip()
if not line:
if event_type and event_data:
yield {"event": event_type, "data": event_data}
event_type = None
event_data = None
elif line.startswith("event:"):
event_type = line[6:].strip()
elif line.startswith("data:"):
data_str = line[5:].strip()
if data_str and data_str != "[DONE]":
try:
event_data = json.loads(data_str)
except json.JSONDecodeError:
event_data = {"content": data_str}
except asyncio.TimeoutError:
raise Exception(f"Coze API 流式请求超时 ({timeout}秒)")
except Exception as e:
raise Exception(f"Coze API 流式请求失败: {e!s}")
async def clear_context(self, conversation_id: str):
"""清空会话上下文
Args:
conversation_id: 会话ID
Returns:
dict: API响应结果
"""
session = await self._ensure_session()
url = f"{self.api_base}/v3/conversation/message/clear_context"
payload = {"conversation_id": conversation_id}
try:
async with session.post(url, json=payload) as response:
response_text = await response.text()
if response.status == 401:
raise Exception("Coze API 认证失败,请检查 API Key 是否正确")
if response.status != 200:
raise Exception(f"Coze API 请求失败,状态码: {response.status}")
try:
return json.loads(response_text)
except json.JSONDecodeError:
raise Exception("Coze API 返回非JSON格式")
except asyncio.TimeoutError:
raise Exception("Coze API 请求超时")
except aiohttp.ClientError as e:
raise Exception(f"Coze API 请求失败: {e!s}")
async def get_message_list(
self,
conversation_id: str,
order: str = "desc",
limit: int = 10,
offset: int = 0,
):
"""获取消息列表
Args:
conversation_id: 会话ID
order: 排序方式 (asc/desc)
limit: 限制数量
offset: 偏移量
Returns:
dict: API响应结果
"""
session = await self._ensure_session()
url = f"{self.api_base}/v3/conversation/message/list"
params = {
"conversation_id": conversation_id,
"order": order,
"limit": limit,
"offset": offset,
}
try:
async with session.get(url, params=params) as response:
response.raise_for_status()
return await response.json()
except Exception as e:
logger.error(f"获取Coze消息列表失败: {e!s}")
raise Exception(f"获取Coze消息列表失败: {e!s}")
async def close(self):
"""关闭会话"""
if self.session:
await self.session.close()
self.session = None
if __name__ == "__main__":
import asyncio
import os
async def test_coze_api_client():
api_key = os.getenv("COZE_API_KEY", "")
bot_id = os.getenv("COZE_BOT_ID", "")
client = CozeAPIClient(api_key=api_key)
try:
with open("README.md", "rb") as f:
file_data = f.read()
file_id = await client.upload_file(file_data)
print(f"Uploaded file_id: {file_id}")
async for event in client.chat_messages(
bot_id=bot_id,
user_id="test_user",
additional_messages=[
{
"role": "user",
"content": json.dumps(
[
{"type": "text", "text": "这是什么"},
{"type": "file", "file_id": file_id},
],
ensure_ascii=False,
),
"content_type": "object_string",
},
],
stream=True,
):
print(f"Event: {event}")
finally:
await client.close()
asyncio.run(test_coze_api_client())
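`chat_messages` parses the /v3/chat response as Server-Sent Events: chunks are buffered, split on newlines, and an {"event", "data"} dict is yielded whenever a blank line closes an event:/data: pair. A sketch of the framing it expects (payloads are illustrative):

sample_sse = (
    b"event: conversation.message.delta\n"
    b'data: {"content": "Hel"}\n'
    b"\n"
    b"event: conversation.chat.completed\n"
    b'data: {"status": "completed"}\n'
    b"\n"
)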


@@ -0,0 +1,403 @@
import asyncio
import functools
import queue
import re
import sys
import threading
import typing as T
from dashscope import Application
from dashscope.app.application_response import ApplicationResponse
import astrbot.core.message.components as Comp
from astrbot.core import logger, sp
from astrbot.core.message.message_event_result import MessageChain
from astrbot.core.provider.entities import (
LLMResponse,
ProviderRequest,
)
from ...hooks import BaseAgentRunHooks
from ...response import AgentResponseData
from ...run_context import ContextWrapper, TContext
from ..base import AgentResponse, AgentState, BaseAgentRunner
if sys.version_info >= (3, 12):
from typing import override
else:
from typing_extensions import override
class DashscopeAgentRunner(BaseAgentRunner[TContext]):
"""Dashscope Agent Runner"""
@override
async def reset(
self,
request: ProviderRequest,
run_context: ContextWrapper[TContext],
agent_hooks: BaseAgentRunHooks[TContext],
provider_config: dict,
**kwargs: T.Any,
) -> None:
self.req = request
self.streaming = kwargs.get("streaming", False)
self.final_llm_resp = None
self._state = AgentState.IDLE
self.agent_hooks = agent_hooks
self.run_context = run_context
self.api_key = provider_config.get("dashscope_api_key", "")
if not self.api_key:
raise Exception("阿里云百炼 API Key 不能为空。")
self.app_id = provider_config.get("dashscope_app_id", "")
if not self.app_id:
raise Exception("阿里云百炼 APP ID 不能为空。")
self.dashscope_app_type = provider_config.get("dashscope_app_type", "")
if not self.dashscope_app_type:
raise Exception("阿里云百炼 APP 类型不能为空。")
self.variables: dict = provider_config.get("variables", {}) or {}
self.rag_options: dict = provider_config.get("rag_options", {})
self.output_reference = self.rag_options.get("output_reference", False)
self.rag_options = self.rag_options.copy()
self.rag_options.pop("output_reference", None)
self.timeout = provider_config.get("timeout", 120)
if isinstance(self.timeout, str):
self.timeout = int(self.timeout)
def has_rag_options(self):
"""判断是否有 RAG 选项
Returns:
bool: 是否有 RAG 选项
"""
if self.rag_options and (
len(self.rag_options.get("pipeline_ids", [])) > 0
or len(self.rag_options.get("file_ids", [])) > 0
):
return True
return False
@override
async def step(self):
"""
Execute a single step of the Dashscope Agent
"""
if not self.req:
raise ValueError("Request is not set. Please call reset() first.")
if self._state == AgentState.IDLE:
try:
await self.agent_hooks.on_agent_begin(self.run_context)
except Exception as e:
logger.error(f"Error in on_agent_begin hook: {e}", exc_info=True)
# Start processing and transition to the running state
self._transition_state(AgentState.RUNNING)
try:
# Execute the Dashscope request and process the results
async for response in self._execute_dashscope_request():
yield response
except Exception as e:
logger.error(f"阿里云百炼请求失败:{str(e)}")
self._transition_state(AgentState.ERROR)
self.final_llm_resp = LLMResponse(
role="err", completion_text=f"阿里云百炼请求失败:{str(e)}"
)
yield AgentResponse(
type="err",
data=AgentResponseData(
chain=MessageChain().message(f"阿里云百炼请求失败:{str(e)}")
),
)
@override
async def step_until_done(
self, max_step: int = 30
) -> T.AsyncGenerator[AgentResponse, None]:
while not self.done():
async for resp in self.step():
yield resp
def _consume_sync_generator(
self, response: T.Any, response_queue: queue.Queue
) -> None:
"""在线程中消费同步generator,将结果放入队列
Args:
response: 同步generator对象
response_queue: 用于传递数据的队列
"""
try:
if self.streaming:
for chunk in response:
response_queue.put(("data", chunk))
else:
response_queue.put(("data", response))
except Exception as e:
response_queue.put(("error", e))
finally:
response_queue.put(("done", None))
async def _process_stream_chunk(
self, chunk: ApplicationResponse, output_text: str
) -> tuple[str, list | None, AgentResponse | None]:
"""处理流式响应的单个chunk
Args:
chunk: Dashscope响应chunk
output_text: 当前累积的输出文本
Returns:
(更新后的output_text, doc_references, AgentResponse或None)
"""
logger.debug(f"dashscope stream chunk: {chunk}")
if chunk.status_code != 200:
logger.error(
f"阿里云百炼请求失败: request_id={chunk.request_id}, code={chunk.status_code}, message={chunk.message}, 请参考文档https://help.aliyun.com/zh/model-studio/developer-reference/error-code",
)
self._transition_state(AgentState.ERROR)
error_msg = (
f"阿里云百炼请求失败: message={chunk.message} code={chunk.status_code}"
)
self.final_llm_resp = LLMResponse(
role="err",
result_chain=MessageChain().message(error_msg),
)
return (
output_text,
None,
AgentResponse(
type="err",
data=AgentResponseData(chain=MessageChain().message(error_msg)),
),
)
chunk_text = chunk.output.get("text", "") or ""
# Normalize RAG reference footnote markers
chunk_text = re.sub(r"<ref>\[(\d+)\]</ref>", r"[\1]", chunk_text)
response = None
if chunk_text:
output_text += chunk_text
response = AgentResponse(
type="streaming_delta",
data=AgentResponseData(chain=MessageChain().message(chunk_text)),
)
# Collect document references
doc_references = chunk.output.get("doc_references", None)
return output_text, doc_references, response
def _format_doc_references(self, doc_references: list) -> str:
"""格式化文档引用为文本
Args:
doc_references: 文档引用列表
Returns:
格式化后的引用文本
"""
ref_parts = []
for ref in doc_references:
ref_title = (
ref.get("title", "") if ref.get("title") else ref.get("doc_name", "")
)
ref_parts.append(f"{ref['index_id']}. {ref_title}\n")
ref_str = "".join(ref_parts)
return f"\n\n回答来源:\n{ref_str}"
async def _build_request_payload(
self, prompt: str, session_id: str, contexts: list, system_prompt: str
) -> dict:
"""构建请求payload
Args:
prompt: 用户输入
session_id: 会话ID
contexts: 上下文列表
system_prompt: 系统提示词
Returns:
请求payload字典
"""
conversation_id = await sp.get_async(
scope="umo",
scope_id=session_id,
key="dashscope_conversation_id",
default="",
)
# Get session variables
payload_vars = self.variables.copy()
session_var = await sp.get_async(
scope="umo",
scope_id=session_id,
key="session_variables",
default={},
)
payload_vars.update(session_var)
if (
self.dashscope_app_type in ["agent", "dialog-workflow"]
and not self.has_rag_options()
):
# App types that support multi-turn conversations
p = {
"app_id": self.app_id,
"api_key": self.api_key,
"prompt": prompt,
"biz_params": payload_vars or None,
"stream": self.streaming,
"incremental_output": True,
}
if conversation_id:
p["session_id"] = conversation_id
return p
else:
# App types that do not support multi-turn conversations
payload = {
"app_id": self.app_id,
"prompt": prompt,
"api_key": self.api_key,
"biz_params": payload_vars or None,
"stream": self.streaming,
"incremental_output": True,
}
if self.rag_options:
payload["rag_options"] = self.rag_options
return payload
async def _handle_streaming_response(
self, response: T.Any, session_id: str
) -> T.AsyncGenerator[AgentResponse, None]:
"""处理流式响应
Args:
response: Dashscope 流式响应 generator
Yields:
AgentResponse 对象
"""
response_queue = queue.Queue()
consumer_thread = threading.Thread(
target=self._consume_sync_generator,
args=(response, response_queue),
daemon=True,
)
consumer_thread.start()
output_text = ""
doc_references = None
while True:
try:
item_type, item_data = await asyncio.get_event_loop().run_in_executor(
None, response_queue.get, True, 1
)
except queue.Empty:
continue
if item_type == "done":
break
elif item_type == "error":
raise item_data
elif item_type == "data":
chunk = item_data
assert isinstance(chunk, ApplicationResponse)
(
output_text,
chunk_doc_refs,
response,
) = await self._process_stream_chunk(chunk, output_text)
if response:
if response.type == "err":
yield response
return
yield response
if chunk_doc_refs:
doc_references = chunk_doc_refs
if chunk.output.session_id:
await sp.put_async(
scope="umo",
scope_id=session_id,
key="dashscope_conversation_id",
value=chunk.output.session_id,
)
# Append RAG references
if self.output_reference and doc_references:
ref_text = self._format_doc_references(doc_references)
output_text += ref_text
if self.streaming:
yield AgentResponse(
type="streaming_delta",
data=AgentResponseData(chain=MessageChain().message(ref_text)),
)
# Build the final response
chain = MessageChain(chain=[Comp.Plain(output_text)])
self.final_llm_resp = LLMResponse(role="assistant", result_chain=chain)
self._transition_state(AgentState.DONE)
try:
await self.agent_hooks.on_agent_done(self.run_context, self.final_llm_resp)
except Exception as e:
logger.error(f"Error in on_agent_done hook: {e}", exc_info=True)
# Yield the final result
yield AgentResponse(
type="llm_result",
data=AgentResponseData(chain=chain),
)
async def _execute_dashscope_request(self):
"""执行 Dashscope 请求的核心逻辑"""
prompt = self.req.prompt or ""
session_id = self.req.session_id or "unknown"
image_urls = self.req.image_urls or []
contexts = self.req.contexts or []
system_prompt = self.req.system_prompt
# Check image input
if image_urls:
logger.warning("阿里云百炼暂不支持图片输入,将自动忽略图片内容。")
# Build the request payload
payload = await self._build_request_payload(
prompt, session_id, contexts, system_prompt
)
if not self.streaming:
payload["incremental_output"] = False
# Issue the request
partial = functools.partial(Application.call, **payload)
response = await asyncio.get_event_loop().run_in_executor(None, partial)
async for resp in self._handle_streaming_response(response, session_id):
yield resp
@override
def done(self) -> bool:
"""检查 Agent 是否已完成工作"""
return self._state in (AgentState.DONE, AgentState.ERROR)
@override
def get_final_llm_resp(self) -> LLMResponse | None:
return self.final_llm_resp
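The dashscope SDK exposes a synchronous generator, so `_consume_sync_generator` and `_handle_streaming_response` bridge it into the async world with a daemon thread and a queue. A self-contained sketch of that pattern, independent of dashscope:

import asyncio
import queue
import threading
from collections.abc import AsyncGenerator, Iterator

async def iter_sync(gen: Iterator) -> AsyncGenerator:
    """Drain a synchronous generator in a worker thread while polling the queue from the event loop."""
    q: queue.Queue = queue.Queue()

    def consume() -> None:
        try:
            for item in gen:
                q.put(("data", item))
        except Exception as e:  # surface errors on the async side
            q.put(("error", e))
        finally:
            q.put(("done", None))

    threading.Thread(target=consume, daemon=True).start()
    loop = asyncio.get_event_loop()
    while True:
        try:
            kind, item = await loop.run_in_executor(None, q.get, True, 1)
        except queue.Empty:
            continue
        if kind == "done":
            return
        if kind == "error":
            raise item
        yield item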


@@ -0,0 +1,336 @@
import base64
import os
import sys
import typing as T
import astrbot.core.message.components as Comp
from astrbot.core import logger, sp
from astrbot.core.message.message_event_result import MessageChain
from astrbot.core.provider.entities import (
LLMResponse,
ProviderRequest,
)
from astrbot.core.utils.astrbot_path import get_astrbot_data_path
from astrbot.core.utils.io import download_file
from ...hooks import BaseAgentRunHooks
from ...response import AgentResponseData
from ...run_context import ContextWrapper, TContext
from ..base import AgentResponse, AgentState, BaseAgentRunner
from .dify_api_client import DifyAPIClient
if sys.version_info >= (3, 12):
from typing import override
else:
from typing_extensions import override
class DifyAgentRunner(BaseAgentRunner[TContext]):
"""Dify Agent Runner"""
@override
async def reset(
self,
request: ProviderRequest,
run_context: ContextWrapper[TContext],
agent_hooks: BaseAgentRunHooks[TContext],
provider_config: dict,
**kwargs: T.Any,
) -> None:
self.req = request
self.streaming = kwargs.get("streaming", False)
self.final_llm_resp = None
self._state = AgentState.IDLE
self.agent_hooks = agent_hooks
self.run_context = run_context
self.api_key = provider_config.get("dify_api_key", "")
self.api_base = provider_config.get("dify_api_base", "https://api.dify.ai/v1")
self.api_type = provider_config.get("dify_api_type", "chat")
self.workflow_output_key = provider_config.get(
"dify_workflow_output_key",
"astrbot_wf_output",
)
self.dify_query_input_key = provider_config.get(
"dify_query_input_key",
"astrbot_text_query",
)
self.variables: dict = provider_config.get("variables", {}) or {}
self.timeout = provider_config.get("timeout", 60)
if isinstance(self.timeout, str):
self.timeout = int(self.timeout)
self.api_client = DifyAPIClient(self.api_key, self.api_base)
@override
async def step(self):
"""
Execute a single step of the Dify Agent
"""
if not self.req:
raise ValueError("Request is not set. Please call reset() first.")
if self._state == AgentState.IDLE:
try:
await self.agent_hooks.on_agent_begin(self.run_context)
except Exception as e:
logger.error(f"Error in on_agent_begin hook: {e}", exc_info=True)
# Start processing and transition to the running state
self._transition_state(AgentState.RUNNING)
try:
# Execute the Dify request and process the results
async for response in self._execute_dify_request():
yield response
except Exception as e:
logger.error(f"Dify 请求失败:{str(e)}")
self._transition_state(AgentState.ERROR)
self.final_llm_resp = LLMResponse(
role="err", completion_text=f"Dify 请求失败:{str(e)}"
)
yield AgentResponse(
type="err",
data=AgentResponseData(
chain=MessageChain().message(f"Dify 请求失败:{str(e)}")
),
)
finally:
await self.api_client.close()
@override
async def step_until_done(
self, max_step: int = 30
) -> T.AsyncGenerator[AgentResponse, None]:
while not self.done():
async for resp in self.step():
yield resp
async def _execute_dify_request(self):
"""执行 Dify 请求的核心逻辑"""
prompt = self.req.prompt or ""
session_id = self.req.session_id or "unknown"
image_urls = self.req.image_urls or []
system_prompt = self.req.system_prompt
conversation_id = await sp.get_async(
scope="umo",
scope_id=session_id,
key="dify_conversation_id",
default="",
)
result = ""
# Handle image uploads
files_payload = []
for image_url in image_urls:
# image_url is a base64 string
try:
image_data = base64.b64decode(image_url)
file_response = await self.api_client.file_upload(
file_data=image_data,
user=session_id,
mime_type="image/png",
file_name="image.png",
)
logger.debug(f"Dify 上传图片响应:{file_response}")
if "id" not in file_response:
logger.warning(
f"上传图片后得到未知的 Dify 响应:{file_response},图片将忽略。"
)
continue
files_payload.append(
{
"type": "image",
"transfer_method": "local_file",
"upload_file_id": file_response["id"],
}
)
except Exception as e:
logger.warning(f"上传图片失败:{e}")
continue
# Get session variables
payload_vars = self.variables.copy()
# Dynamic variables
session_var = await sp.get_async(
scope="umo",
scope_id=session_id,
key="session_variables",
default={},
)
payload_vars.update(session_var)
payload_vars["system_prompt"] = system_prompt
# Handle the different API types
match self.api_type:
case "chat" | "agent" | "chatflow":
if not prompt:
prompt = "请描述这张图片。"
async for chunk in self.api_client.chat_messages(
inputs={
**payload_vars,
},
query=prompt,
user=session_id,
conversation_id=conversation_id,
files=files_payload,
timeout=self.timeout,
):
logger.debug(f"dify resp chunk: {chunk}")
if chunk["event"] == "message" or chunk["event"] == "agent_message":
result += chunk["answer"]
if not conversation_id:
await sp.put_async(
scope="umo",
scope_id=session_id,
key="dify_conversation_id",
value=chunk["conversation_id"],
)
conversation_id = chunk["conversation_id"]
# For streaming responses, send incremental data
if self.streaming and chunk["answer"]:
yield AgentResponse(
type="streaming_delta",
data=AgentResponseData(
chain=MessageChain().message(chunk["answer"])
),
)
elif chunk["event"] == "message_end":
logger.debug("Dify message end")
break
elif chunk["event"] == "error":
logger.error(f"Dify 出现错误:{chunk}")
raise Exception(
f"Dify 出现错误 status: {chunk['status']} message: {chunk['message']}"
)
case "workflow":
async for chunk in self.api_client.workflow_run(
inputs={
self.dify_query_input_key: prompt,
"astrbot_session_id": session_id,
**payload_vars,
},
user=session_id,
files=files_payload,
timeout=self.timeout,
):
logger.debug(f"dify workflow resp chunk: {chunk}")
match chunk["event"]:
case "workflow_started":
logger.info(
f"Dify 工作流(ID: {chunk['workflow_run_id']})开始运行。"
)
case "node_finished":
logger.debug(
f"Dify 工作流节点(ID: {chunk['data']['node_id']} Title: {chunk['data'].get('title', '')})运行结束。"
)
case "text_chunk":
if self.streaming and chunk["data"]["text"]:
yield AgentResponse(
type="streaming_delta",
data=AgentResponseData(
chain=MessageChain().message(
chunk["data"]["text"]
)
),
)
case "workflow_finished":
logger.info(
f"Dify 工作流(ID: {chunk['workflow_run_id']})运行结束"
)
logger.debug(f"Dify 工作流结果:{chunk}")
if chunk["data"]["error"]:
logger.error(
f"Dify 工作流出现错误:{chunk['data']['error']}"
)
raise Exception(
f"Dify 工作流出现错误:{chunk['data']['error']}"
)
if self.workflow_output_key not in chunk["data"]["outputs"]:
raise Exception(
f"Dify 工作流的输出不包含指定的键名:{self.workflow_output_key}"
)
result = chunk
case _:
raise Exception(f"未知的 Dify API 类型:{self.api_type}")
if not result:
logger.warning("Dify 请求结果为空,请查看 Debug 日志。")
# Parse the result
chain = await self.parse_dify_result(result)
# Build the final response
self.final_llm_resp = LLMResponse(role="assistant", result_chain=chain)
self._transition_state(AgentState.DONE)
try:
await self.agent_hooks.on_agent_done(self.run_context, self.final_llm_resp)
except Exception as e:
logger.error(f"Error in on_agent_done hook: {e}", exc_info=True)
# Yield the final result
yield AgentResponse(
type="llm_result",
data=AgentResponseData(chain=chain),
)
async def parse_dify_result(self, chunk: dict | str) -> MessageChain:
"""解析 Dify 的响应结果"""
if isinstance(chunk, str):
# Chat
return MessageChain(chain=[Comp.Plain(chunk)])
async def parse_file(item: dict):
match item["type"]:
case "image":
return Comp.Image(file=item["url"], url=item["url"])
case "audio":
# Only wav is supported; return the downloaded audio as a voice record
temp_dir = os.path.join(get_astrbot_data_path(), "temp")
path = os.path.join(temp_dir, f"{item['filename']}.wav")
await download_file(item["url"], path)
# Assumption: Comp.Record is the voice component; the original returned Comp.Image here, which looks unintended for audio
return Comp.Record(file=path)
case "video":
return Comp.Video(file=item["url"])
case _:
return Comp.File(name=item["filename"], file=item["url"])
output = chunk["data"]["outputs"][self.workflow_output_key]
chains = []
if isinstance(output, str):
# Plain text output
chains.append(Comp.Plain(output))
elif isinstance(output, list):
# Mainly to adapt multimodal output from Dify HTTP request nodes
for item in output:
# handle Array[File]
if (
not isinstance(item, dict)
or item.get("dify_model_identity", "") != "__dify__file__"
):
chains.append(Comp.Plain(str(output)))
break
else:
chains.append(Comp.Plain(str(output)))
# scan file
files = chunk["data"].get("files", [])
for item in files:
comp = await parse_file(item)
chains.append(comp)
return MessageChain(chain=chains)
@override
def done(self) -> bool:
"""检查 Agent 是否已完成工作"""
return self._state in (AgentState.DONE, AgentState.ERROR)
@override
def get_final_llm_resp(self) -> LLMResponse | None:
return self.final_llm_resp
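For reference, a sketch of the provider_config keys read by `reset()` above; all values are placeholders:

dify_provider_config = {
    "dify_api_key": "app-xxx",
    "dify_api_base": "https://api.dify.ai/v1",
    "dify_api_type": "chat",                          # chat | agent | chatflow | workflow
    "dify_workflow_output_key": "astrbot_wf_output",  # only used for workflow apps
    "dify_query_input_key": "astrbot_text_query",     # only used for workflow apps
    "variables": {},
    "timeout": 60,
}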


@@ -0,0 +1,195 @@
import codecs
import json
from collections.abc import AsyncGenerator
from typing import Any
from aiohttp import ClientResponse, ClientSession, FormData
from astrbot.core import logger
async def _stream_sse(resp: ClientResponse) -> AsyncGenerator[dict, None]:
decoder = codecs.getincrementaldecoder("utf-8")()
buffer = ""
async for chunk in resp.content.iter_chunked(8192):
buffer += decoder.decode(chunk)
while "\n\n" in buffer:
block, buffer = buffer.split("\n\n", 1)
if block.strip().startswith("data:"):
try:
yield json.loads(block[5:])
except json.JSONDecodeError:
logger.warning(f"Drop invalid dify json data: {block[5:]}")
continue
# flush any remaining text
buffer += decoder.decode(b"", final=True)
if buffer.strip().startswith("data:"):
try:
yield json.loads(buffer[5:])
except json.JSONDecodeError:
logger.warning(f"Drop invalid dify json data: {buffer[5:]}")
class DifyAPIClient:
def __init__(self, api_key: str, api_base: str = "https://api.dify.ai/v1"):
self.api_key = api_key
self.api_base = api_base
self.session = ClientSession(trust_env=True)
self.headers = {
"Authorization": f"Bearer {self.api_key}",
}
async def chat_messages(
self,
inputs: dict,
query: str,
user: str,
response_mode: str = "streaming",
conversation_id: str = "",
files: list[dict[str, Any]] | None = None,
timeout: float = 60,
) -> AsyncGenerator[dict[str, Any], None]:
if files is None:
files = []
url = f"{self.api_base}/chat-messages"
payload = locals()
payload.pop("self")
payload.pop("timeout")
logger.info(f"chat_messages payload: {payload}")
async with self.session.post(
url,
json=payload,
headers=self.headers,
timeout=timeout,
) as resp:
if resp.status != 200:
text = await resp.text()
raise Exception(
f"Dify /chat-messages 接口请求失败:{resp.status}. {text}",
)
async for event in _stream_sse(resp):
yield event
async def workflow_run(
self,
inputs: dict,
user: str,
response_mode: str = "streaming",
files: list[dict[str, Any]] | None = None,
timeout: float = 60,
):
if files is None:
files = []
url = f"{self.api_base}/workflows/run"
payload = locals()
payload.pop("self")
payload.pop("timeout")
logger.info(f"workflow_run payload: {payload}")
async with self.session.post(
url,
json=payload,
headers=self.headers,
timeout=timeout,
) as resp:
if resp.status != 200:
text = await resp.text()
raise Exception(
f"Dify /workflows/run 接口请求失败:{resp.status}. {text}",
)
async for event in _stream_sse(resp):
yield event
async def file_upload(
self,
user: str,
file_path: str | None = None,
file_data: bytes | None = None,
file_name: str | None = None,
mime_type: str | None = None,
) -> dict[str, Any]:
"""Upload a file to Dify. Must provide either file_path or file_data.
Args:
user: The user ID.
file_path: The path to the file to upload.
file_data: The file data in bytes.
file_name: Optional file name when using file_data.
mime_type: Optional MIME type of the uploaded file.
Returns:
A dictionary containing the uploaded file information.
"""
url = f"{self.api_base}/files/upload"
form = FormData()
form.add_field("user", user)
if file_data is not None:
# Use the provided bytes
form.add_field(
"file",
file_data,
filename=file_name or "uploaded_file",
content_type=mime_type or "application/octet-stream",
)
elif file_path is not None:
# Use the file path
import os
with open(file_path, "rb") as f:
file_content = f.read()
form.add_field(
"file",
file_content,
filename=os.path.basename(file_path),
content_type=mime_type or "application/octet-stream",
)
else:
raise ValueError("file_path 和 file_data 不能同时为 None")
async with self.session.post(
url,
data=form,
headers=self.headers,  # Do not set Content-Type; let aiohttp set it automatically
) as resp:
if resp.status != 200 and resp.status != 201:
text = await resp.text()
raise Exception(f"Dify 文件上传失败:{resp.status}. {text}")
return await resp.json() # {"id": "xxx", ...}
async def close(self):
await self.session.close()
async def get_chat_convs(self, user: str, limit: int = 20):
# conversations. GET
url = f"{self.api_base}/conversations"
payload = {
"user": user,
"limit": limit,
}
async with self.session.get(url, params=payload, headers=self.headers) as resp:
return await resp.json()
async def delete_chat_conv(self, user: str, conversation_id: str):
# conversation. DELETE
url = f"{self.api_base}/conversations/{conversation_id}"
payload = {
"user": user,
}
async with self.session.delete(url, json=payload, headers=self.headers) as resp:
return await resp.json()
async def rename(
self,
conversation_id: str,
name: str,
user: str,
auto_generate: bool = False,
):
# /conversations/:conversation_id/name
url = f"{self.api_base}/conversations/{conversation_id}/name"
payload = {
"user": user,
"name": name,
"auto_generate": auto_generate,
}
async with self.session.post(url, json=payload, headers=self.headers) as resp:
return await resp.json()
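`file_upload` accepts either a path on disk or raw bytes, never both omitted. A short usage sketch (the API key and paths are placeholders):

async def upload_examples() -> None:
    client = DifyAPIClient(api_key="app-xxx")
    try:
        # From raw bytes (e.g. an image already held in memory)
        info = await client.file_upload(
            user="user-123",
            file_data=b"...",
            file_name="image.png",
            mime_type="image/png",
        )
        # Or from a file path
        info = await client.file_upload(user="user-123", file_path="/tmp/image.png", mime_type="image/png")
        print(info["id"])
    finally:
        await client.close()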


@@ -0,0 +1,400 @@
import sys
import traceback
import typing as T
from mcp.types import (
BlobResourceContents,
CallToolResult,
EmbeddedResource,
ImageContent,
TextContent,
TextResourceContents,
)
from astrbot import logger
from astrbot.core.message.message_event_result import (
MessageChain,
)
from astrbot.core.provider.entities import (
LLMResponse,
ProviderRequest,
ToolCallsResult,
)
from astrbot.core.provider.provider import Provider
from ..hooks import BaseAgentRunHooks
from ..message import AssistantMessageSegment, Message, ToolCallMessageSegment
from ..response import AgentResponseData
from ..run_context import ContextWrapper, TContext
from ..tool_executor import BaseFunctionToolExecutor
from .base import AgentResponse, AgentState, BaseAgentRunner
if sys.version_info >= (3, 12):
from typing import override
else:
from typing_extensions import override
class ToolLoopAgentRunner(BaseAgentRunner[TContext]):
@override
async def reset(
self,
provider: Provider,
request: ProviderRequest,
run_context: ContextWrapper[TContext],
tool_executor: BaseFunctionToolExecutor[TContext],
agent_hooks: BaseAgentRunHooks[TContext],
**kwargs: T.Any,
) -> None:
self.req = request
self.streaming = kwargs.get("streaming", False)
self.provider = provider
self.final_llm_resp = None
self._state = AgentState.IDLE
self.tool_executor = tool_executor
self.agent_hooks = agent_hooks
self.run_context = run_context
messages = []
# append existing messages in the run context
for msg in request.contexts:
messages.append(Message.model_validate(msg))
if request.prompt is not None:
m = await request.assemble_context()
messages.append(Message.model_validate(m))
if request.system_prompt:
messages.insert(
0,
Message(role="system", content=request.system_prompt),
)
self.run_context.messages = messages
async def _iter_llm_responses(self) -> T.AsyncGenerator[LLMResponse, None]:
"""Yields chunks *and* a final LLMResponse."""
if self.streaming:
stream = self.provider.text_chat_stream(**self.req.__dict__)
async for resp in stream: # type: ignore
yield resp
else:
yield await self.provider.text_chat(**self.req.__dict__)
@override
async def step(self):
"""Process a single step of the agent.
This method should return the result of the step.
"""
if not self.req:
raise ValueError("Request is not set. Please call reset() first.")
if self._state == AgentState.IDLE:
try:
await self.agent_hooks.on_agent_begin(self.run_context)
except Exception as e:
logger.error(f"Error in on_agent_begin hook: {e}", exc_info=True)
# Start processing and transition to the running state
self._transition_state(AgentState.RUNNING)
llm_resp_result = None
async for llm_response in self._iter_llm_responses():
assert isinstance(llm_response, LLMResponse)
if llm_response.is_chunk:
if llm_response.result_chain:
yield AgentResponse(
type="streaming_delta",
data=AgentResponseData(chain=llm_response.result_chain),
)
elif llm_response.completion_text:
yield AgentResponse(
type="streaming_delta",
data=AgentResponseData(
chain=MessageChain().message(llm_response.completion_text),
),
)
elif llm_response.reasoning_content:
yield AgentResponse(
type="streaming_delta",
data=AgentResponseData(
chain=MessageChain(type="reasoning").message(
llm_response.reasoning_content,
),
),
)
continue
llm_resp_result = llm_response
break # got final response
if not llm_resp_result:
return
# Handle the LLM response
llm_resp = llm_resp_result
if llm_resp.role == "err":
# If the LLM response is an error, transition to the error state
self.final_llm_resp = llm_resp
self._transition_state(AgentState.ERROR)
yield AgentResponse(
type="err",
data=AgentResponseData(
chain=MessageChain().message(
f"LLM 响应错误: {llm_resp.completion_text or '未知错误'}",
),
),
)
if not llm_resp.tools_call_name:
# If there are no tool calls, transition to the done state
self.final_llm_resp = llm_resp
self._transition_state(AgentState.DONE)
# record the final assistant message
self.run_context.messages.append(
Message(
role="assistant",
content=llm_resp.completion_text or "",
),
)
try:
await self.agent_hooks.on_agent_done(self.run_context, llm_resp)
except Exception as e:
logger.error(f"Error in on_agent_done hook: {e}", exc_info=True)
# Yield the LLM result
if llm_resp.result_chain:
yield AgentResponse(
type="llm_result",
data=AgentResponseData(chain=llm_resp.result_chain),
)
elif llm_resp.completion_text:
yield AgentResponse(
type="llm_result",
data=AgentResponseData(
chain=MessageChain().message(llm_resp.completion_text),
),
)
# If there are tool calls, they still need to be handled
if llm_resp.tools_call_name:
tool_call_result_blocks = []
for tool_call_name in llm_resp.tools_call_name:
yield AgentResponse(
type="tool_call",
data=AgentResponseData(
chain=MessageChain(type="tool_call").message(
f"🔨 调用工具: {tool_call_name}"
),
),
)
async for result in self._handle_function_tools(self.req, llm_resp):
if isinstance(result, list):
tool_call_result_blocks = result
elif isinstance(result, MessageChain):
result.type = "tool_call_result"
yield AgentResponse(
type="tool_call_result",
data=AgentResponseData(chain=result),
)
# Append the results to the context
tool_calls_result = ToolCallsResult(
tool_calls_info=AssistantMessageSegment(
tool_calls=llm_resp.to_openai_to_calls_model(),
content=llm_resp.completion_text,
),
tool_calls_result=tool_call_result_blocks,
)
# record the assistant message with tool calls
self.run_context.messages.extend(
tool_calls_result.to_openai_messages_model()
)
self.req.append_tool_calls_result(tool_calls_result)
async def step_until_done(
self, max_step: int
) -> T.AsyncGenerator[AgentResponse, None]:
"""Process steps until the agent is done."""
step_count = 0
while not self.done() and step_count < max_step:
step_count += 1
async for resp in self.step():
yield resp
async def _handle_function_tools(
self,
req: ProviderRequest,
llm_response: LLMResponse,
) -> T.AsyncGenerator[MessageChain | list[ToolCallMessageSegment], None]:
"""处理函数工具调用。"""
tool_call_result_blocks: list[ToolCallMessageSegment] = []
logger.info(f"Agent 使用工具: {llm_response.tools_call_name}")
# Execute the function calls
for func_tool_name, func_tool_args, func_tool_id in zip(
llm_response.tools_call_name,
llm_response.tools_call_args,
llm_response.tools_call_ids,
):
try:
if not req.func_tool:
return
func_tool = req.func_tool.get_func(func_tool_name)
logger.info(f"使用工具:{func_tool_name},参数:{func_tool_args}")
if not func_tool:
logger.warning(f"未找到指定的工具: {func_tool_name},将跳过。")
tool_call_result_blocks.append(
ToolCallMessageSegment(
role="tool",
tool_call_id=func_tool_id,
content=f"error: 未找到工具 {func_tool_name}",
),
)
continue
valid_params = {}  # Parameter filtering: only pass the parameters the function actually needs
# Get the actual handler function
if func_tool.handler:
logger.debug(
f"工具 {func_tool_name} 期望的参数: {func_tool.parameters}",
)
if func_tool.parameters and func_tool.parameters.get("properties"):
expected_params = set(func_tool.parameters["properties"].keys())
valid_params = {
k: v
for k, v in func_tool_args.items()
if k in expected_params
}
# 记录被忽略的参数
ignored_params = set(func_tool_args.keys()) - set(
valid_params.keys(),
)
if ignored_params:
logger.warning(
f"工具 {func_tool_name} 忽略非期望参数: {ignored_params}",
)
else:
# 如果没有 handler(如 MCP 工具),使用所有参数
valid_params = func_tool_args
try:
await self.agent_hooks.on_tool_start(
self.run_context,
func_tool,
valid_params,
)
except Exception as e:
logger.error(f"Error in on_tool_start hook: {e}", exc_info=True)
executor = self.tool_executor.execute(
tool=func_tool,
run_context=self.run_context,
**valid_params, # 只传递有效的参数
)
_final_resp: CallToolResult | None = None
async for resp in executor: # type: ignore
if isinstance(resp, CallToolResult):
res = resp
_final_resp = resp
if isinstance(res.content[0], TextContent):
tool_call_result_blocks.append(
ToolCallMessageSegment(
role="tool",
tool_call_id=func_tool_id,
content=res.content[0].text,
),
)
yield MessageChain().message(res.content[0].text)
elif isinstance(res.content[0], ImageContent):
tool_call_result_blocks.append(
ToolCallMessageSegment(
role="tool",
tool_call_id=func_tool_id,
content="返回了图片(已直接发送给用户)",
),
)
yield MessageChain(type="tool_direct_result").base64_image(
res.content[0].data,
)
elif isinstance(res.content[0], EmbeddedResource):
resource = res.content[0].resource
if isinstance(resource, TextResourceContents):
tool_call_result_blocks.append(
ToolCallMessageSegment(
role="tool",
tool_call_id=func_tool_id,
content=resource.text,
),
)
yield MessageChain().message(resource.text)
elif (
isinstance(resource, BlobResourceContents)
and resource.mimeType
and resource.mimeType.startswith("image/")
):
tool_call_result_blocks.append(
ToolCallMessageSegment(
role="tool",
tool_call_id=func_tool_id,
content="返回了图片(已直接发送给用户)",
),
)
yield MessageChain(
type="tool_direct_result",
).base64_image(resource.blob)
else:
tool_call_result_blocks.append(
ToolCallMessageSegment(
role="tool",
tool_call_id=func_tool_id,
content="返回的数据类型不受支持",
),
)
yield MessageChain().message("返回的数据类型不受支持。")
elif resp is None:
# Tool 直接请求发送消息给用户
# 这里我们将直接结束 Agent Loop。
# 发送消息逻辑在 ToolExecutor 中处理了。
logger.warning(
f"{func_tool_name} 没有没有返回值或者将结果直接发送给用户,此工具调用不会被记录到历史中。"
)
self._transition_state(AgentState.DONE)
else:
# 不应该出现其他类型
logger.warning(
f"Tool 返回了不支持的类型: {type(resp)},将忽略。",
)
try:
await self.agent_hooks.on_tool_end(
self.run_context,
func_tool,
func_tool_args,
_final_resp,
)
except Exception as e:
logger.error(f"Error in on_tool_end hook: {e}", exc_info=True)
except Exception as e:
logger.warning(traceback.format_exc())
tool_call_result_blocks.append(
ToolCallMessageSegment(
role="tool",
tool_call_id=func_tool_id,
content=f"error: {e!s}",
),
)
# 处理函数调用响应
if tool_call_result_blocks:
yield tool_call_result_blocks
def done(self) -> bool:
"""检查 Agent 是否已完成工作"""
return self._state in (AgentState.DONE, AgentState.ERROR)
def get_final_llm_resp(self) -> LLMResponse | None:
return self.final_llm_resp
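A minimal sketch of driving the agent loop above (not part of the diff): it assumes a runner instance that has already been constructed and reset elsewhere in this changeset (the ToolLoopAgentRunner that is aliased as AgentRunner further down).

async def drive(runner, max_step: int = 30):
    # Step the agent until done() or the step budget is exhausted.
    async for resp in runner.step_until_done(max_step=max_step):
        if resp.type == "streaming_delta":
            continue  # skip partial deltas in this sketch
        # "llm_result", "tool_call", "tool_call_result" and "err" all carry a MessageChain
        print(resp.type, resp.data["chain"])
    return runner.get_final_llm_resp()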

astrbot/core/agent/tool.py (new file, 285 lines)

@@ -0,0 +1,285 @@
from collections.abc import Awaitable, Callable
from typing import Any, Generic
import jsonschema
import mcp
from deprecated import deprecated
from pydantic import Field, model_validator
from pydantic.dataclasses import dataclass
from .run_context import ContextWrapper, TContext
ParametersType = dict[str, Any]
ToolExecResult = str | mcp.types.CallToolResult
@dataclass
class ToolSchema:
"""A class representing the schema of a tool for function calling."""
name: str
"""The name of the tool."""
description: str
"""The description of the tool."""
parameters: ParametersType
"""The parameters of the tool, in JSON Schema format."""
@model_validator(mode="after")
def validate_parameters(self) -> "ToolSchema":
jsonschema.validate(
self.parameters, jsonschema.Draft202012Validator.META_SCHEMA
)
return self
@dataclass
class FunctionTool(ToolSchema, Generic[TContext]):
"""A callable tool, for function calling."""
handler: Callable[..., Awaitable[Any]] | None = None
"""a callable that implements the tool's functionality. It should be an async function."""
handler_module_path: str | None = None
"""
The module path of the handler function. This is empty when the origin is mcp.
This field must be retained, as the handler will be wrapped in functools.partial during initialization,
causing the handler's __module__ to be functools
"""
active: bool = True
"""
Whether the tool is active. This field is a special field for AstrBot.
You can ignore it when integrating with other frameworks.
"""
def __repr__(self):
return f"FuncTool(name={self.name}, parameters={self.parameters}, description={self.description})"
async def call(self, context: ContextWrapper[TContext], **kwargs) -> ToolExecResult:
"""Run the tool with the given arguments. The handler field has priority."""
raise NotImplementedError(
"FunctionTool.call() must be implemented by subclasses or set a handler."
)
@dataclass
class ToolSet:
"""A set of function tools that can be used in function calling.
This class provides methods to add, remove, and retrieve tools, as well as
convert the tools to different API formats (OpenAI, Anthropic, Google GenAI).
"""
tools: list[FunctionTool] = Field(default_factory=list)
def empty(self) -> bool:
"""Check if the tool set is empty."""
return len(self.tools) == 0
def add_tool(self, tool: FunctionTool):
"""Add a tool to the set."""
# 检查是否已存在同名工具
for i, existing_tool in enumerate(self.tools):
if existing_tool.name == tool.name:
self.tools[i] = tool
return
self.tools.append(tool)
def remove_tool(self, name: str):
"""Remove a tool by its name."""
self.tools = [tool for tool in self.tools if tool.name != name]
def get_tool(self, name: str) -> FunctionTool | None:
"""Get a tool by its name."""
for tool in self.tools:
if tool.name == name:
return tool
return None
@deprecated(reason="Use add_tool() instead", version="4.0.0")
def add_func(
self,
name: str,
func_args: list,
desc: str,
handler: Callable[..., Awaitable[Any]],
):
"""Add a function tool to the set."""
params = {
"type": "object", # hard-coded here
"properties": {},
}
for param in func_args:
params["properties"][param["name"]] = {
"type": param["type"],
"description": param["description"],
}
_func = FunctionTool(
name=name,
parameters=params,
description=desc,
handler=handler,
)
self.add_tool(_func)
@deprecated(reason="Use remove_tool() instead", version="4.0.0")
def remove_func(self, name: str):
"""Remove a function tool by its name."""
self.remove_tool(name)
@deprecated(reason="Use get_tool() instead", version="4.0.0")
def get_func(self, name: str) -> FunctionTool | None:
"""Get all function tools."""
return self.get_tool(name)
@property
def func_list(self) -> list[FunctionTool]:
"""Get the list of function tools."""
return self.tools
def openai_schema(self, omit_empty_parameter_field: bool = False) -> list[dict]:
"""Convert tools to OpenAI API function calling schema format."""
result = []
for tool in self.tools:
func_def = {
"type": "function",
"function": {
"name": tool.name,
"description": tool.description,
},
}
if (
tool.parameters and tool.parameters.get("properties")
) or not omit_empty_parameter_field:
func_def["function"]["parameters"] = tool.parameters
result.append(func_def)
return result
def anthropic_schema(self) -> list[dict]:
"""Convert tools to Anthropic API format."""
result = []
for tool in self.tools:
input_schema = {"type": "object"}
if tool.parameters:
input_schema["properties"] = tool.parameters.get("properties", {})
input_schema["required"] = tool.parameters.get("required", [])
tool_def = {
"name": tool.name,
"description": tool.description,
"input_schema": input_schema,
}
result.append(tool_def)
return result
def google_schema(self) -> dict:
"""Convert tools to Google GenAI API format."""
def convert_schema(schema: dict) -> dict:
"""Convert schema to Gemini API format."""
supported_types = {
"string",
"number",
"integer",
"boolean",
"array",
"object",
"null",
}
supported_formats = {
"string": {"enum", "date-time"},
"integer": {"int32", "int64"},
"number": {"float", "double"},
}
if "anyOf" in schema:
return {"anyOf": [convert_schema(s) for s in schema["anyOf"]]}
result = {}
if "type" in schema and schema["type"] in supported_types:
result["type"] = schema["type"]
if "format" in schema and schema["format"] in supported_formats.get(
result["type"],
set(),
):
result["format"] = schema["format"]
else:
result["type"] = "null"
support_fields = {
"title",
"description",
"enum",
"minimum",
"maximum",
"maxItems",
"minItems",
"nullable",
"required",
}
result.update({k: schema[k] for k in support_fields if k in schema})
if "properties" in schema:
properties = {}
for key, value in schema["properties"].items():
prop_value = convert_schema(value)
if "default" in prop_value:
del prop_value["default"]
properties[key] = prop_value
if properties:
result["properties"] = properties
if "items" in schema:
result["items"] = convert_schema(schema["items"])
return result
tools = []
for tool in self.tools:
d: dict[str, Any] = {
"name": tool.name,
"description": tool.description,
}
if tool.parameters:
d["parameters"] = convert_schema(tool.parameters)
tools.append(d)
declarations = {}
if tools:
declarations["function_declarations"] = tools
return declarations
@deprecated(reason="Use openai_schema() instead", version="4.0.0")
def get_func_desc_openai_style(self, omit_empty_parameter_field: bool = False):
return self.openai_schema(omit_empty_parameter_field)
@deprecated(reason="Use anthropic_schema() instead", version="4.0.0")
def get_func_desc_anthropic_style(self):
return self.anthropic_schema()
@deprecated(reason="Use google_schema() instead", version="4.0.0")
def get_func_desc_google_genai_style(self):
return self.google_schema()
def names(self) -> list[str]:
"""获取所有工具的名称列表"""
return [tool.name for tool in self.tools]
def __len__(self):
return len(self.tools)
def __bool__(self):
return len(self.tools) > 0
def __iter__(self):
return iter(self.tools)
def __repr__(self):
return f"ToolSet(tools={self.tools})"
def __str__(self):
return f"ToolSet(tools={self.tools})"


@@ -0,0 +1,17 @@
from collections.abc import AsyncGenerator
from typing import Any, Generic
import mcp
from .run_context import ContextWrapper, TContext
from .tool import FunctionTool
class BaseFunctionToolExecutor(Generic[TContext]):
@classmethod
async def execute(
cls,
tool: FunctionTool,
run_context: ContextWrapper[TContext],
**tool_args,
) -> AsyncGenerator[Any | mcp.types.CallToolResult, None]: ...
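A minimal concrete executor against this interface, as a sketch (not part of the diff; the real FunctionToolExecutor later in this changeset additionally handles handoff tools, MCP tools, timeouts and direct-send results):

import mcp

from astrbot.core.agent.tool import FunctionTool
from astrbot.core.agent.tool_executor import BaseFunctionToolExecutor
from astrbot.core.astr_agent_context import AstrAgentContext

class EchoToolExecutor(BaseFunctionToolExecutor[AstrAgentContext]):
    @classmethod
    async def execute(cls, tool: FunctionTool, run_context, **tool_args):
        # assumes a plain async handler; wraps its return value in a CallToolResult
        if tool.handler is None:
            raise ValueError(f"tool {tool.name} has no handler")
        result = await tool.handler(run_context.context.event, **tool_args)
        yield mcp.types.CallToolResult(
            content=[mcp.types.TextContent(type="text", text=str(result))]
        )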


@@ -0,0 +1,19 @@
from pydantic import Field
from pydantic.dataclasses import dataclass
from astrbot.core.agent.run_context import ContextWrapper
from astrbot.core.platform.astr_message_event import AstrMessageEvent
from astrbot.core.star.context import Context
@dataclass(config={"arbitrary_types_allowed": True})
class AstrAgentContext:
context: Context
"""The star context instance"""
event: AstrMessageEvent
"""The message event associated with the agent context."""
extra: dict[str, str] = Field(default_factory=dict)
"""Customized extra data."""
AgentContextWrapper = ContextWrapper[AstrAgentContext]


@@ -0,0 +1,36 @@
from typing import Any
from mcp.types import CallToolResult
from astrbot.core.agent.hooks import BaseAgentRunHooks
from astrbot.core.agent.run_context import ContextWrapper
from astrbot.core.agent.tool import FunctionTool
from astrbot.core.astr_agent_context import AstrAgentContext
from astrbot.core.pipeline.context_utils import call_event_hook
from astrbot.core.star.star_handler import EventType
class MainAgentHooks(BaseAgentRunHooks[AstrAgentContext]):
async def on_agent_done(self, run_context, llm_response):
# 执行事件钩子
await call_event_hook(
run_context.context.event,
EventType.OnLLMResponseEvent,
llm_response,
)
async def on_tool_end(
self,
run_context: ContextWrapper[AstrAgentContext],
tool: FunctionTool[Any],
tool_args: dict | None,
tool_result: CallToolResult | None,
):
run_context.context.event.clear_result()
class EmptyAgentHooks(BaseAgentRunHooks[AstrAgentContext]):
pass
MAIN_AGENT_HOOKS = MainAgentHooks()
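Custom hooks follow the same shape; a minimal sketch (not part of the diff, and only the hook names the runner actually calls in this changeset are overridden):

from astrbot.core import logger
from astrbot.core.agent.hooks import BaseAgentRunHooks
from astrbot.core.astr_agent_context import AstrAgentContext

class LoggingAgentHooks(BaseAgentRunHooks[AstrAgentContext]):
    async def on_tool_start(self, run_context, tool, tool_args):
        logger.info(f"tool start: {tool.name} args={tool_args}")

    async def on_tool_end(self, run_context, tool, tool_args, tool_result):
        logger.info(f"tool end: {tool.name}")

    async def on_agent_done(self, run_context, llm_response):
        logger.info("agent loop finished")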


@@ -0,0 +1,80 @@
import traceback
from collections.abc import AsyncGenerator
from astrbot.core import logger
from astrbot.core.agent.runners.tool_loop_agent_runner import ToolLoopAgentRunner
from astrbot.core.astr_agent_context import AstrAgentContext
from astrbot.core.message.message_event_result import (
MessageChain,
MessageEventResult,
ResultContentType,
)
AgentRunner = ToolLoopAgentRunner[AstrAgentContext]
async def run_agent(
agent_runner: AgentRunner,
max_step: int = 30,
show_tool_use: bool = True,
stream_to_general: bool = False,
show_reasoning: bool = False,
) -> AsyncGenerator[MessageChain | None, None]:
step_idx = 0
astr_event = agent_runner.run_context.context.event
while step_idx < max_step:
step_idx += 1
try:
async for resp in agent_runner.step():
if astr_event.is_stopped():
return
if resp.type == "tool_call_result":
msg_chain = resp.data["chain"]
if msg_chain.type == "tool_direct_result":
# tool_direct_result 用于标记 llm tool 需要直接发送给用户的内容
await astr_event.send(resp.data["chain"])
continue
# 对于其他情况,暂时先不处理
continue
elif resp.type == "tool_call":
if agent_runner.streaming:
# 用来标记流式响应需要分节
yield MessageChain(chain=[], type="break")
if show_tool_use:
await astr_event.send(resp.data["chain"])
continue
if stream_to_general and resp.type == "streaming_delta":
continue
if stream_to_general or not agent_runner.streaming:
content_typ = (
ResultContentType.LLM_RESULT
if resp.type == "llm_result"
else ResultContentType.GENERAL_RESULT
)
astr_event.set_result(
MessageEventResult(
chain=resp.data["chain"].chain,
result_content_type=content_typ,
),
)
yield
astr_event.clear_result()
elif resp.type == "streaming_delta":
chain = resp.data["chain"]
if chain.type == "reasoning" and not show_reasoning:
# display the reasoning content only when configured
continue
yield resp.data["chain"] # MessageChain
if agent_runner.done():
break
except Exception as e:
logger.error(traceback.format_exc())
err_msg = f"\n\nAstrBot 请求失败。\n错误类型: {type(e).__name__}\n错误信息: {e!s}\n\n请在控制台查看和分享错误详情。\n"
if agent_runner.streaming:
yield MessageChain().message(err_msg)
else:
astr_event.set_result(MessageEventResult().message(err_msg))
return
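A sketch of how a caller might consume run_agent (not part of the diff; the pipeline stage that normally does this sits outside this excerpt). A bare None yield means the result for that step has already been placed on the event via set_result; MessageChain yields are streaming deltas, "break" section markers, or an error message.

async def consume(agent_runner):
    async for chain in run_agent(agent_runner, max_step=30, show_tool_use=True):
        if chain is None:
            # non-streaming result is available via the event's get_result()
            continue
        if chain.type == "break":
            continue  # marks a section break between streamed segments
        # otherwise: a streaming delta or an error MessageChain to forward
        ...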


@@ -0,0 +1,246 @@
import asyncio
import inspect
import traceback
import typing as T
import mcp
from astrbot import logger
from astrbot.core.agent.handoff import HandoffTool
from astrbot.core.agent.mcp_client import MCPTool
from astrbot.core.agent.run_context import ContextWrapper
from astrbot.core.agent.tool import FunctionTool, ToolSet
from astrbot.core.agent.tool_executor import BaseFunctionToolExecutor
from astrbot.core.astr_agent_context import AstrAgentContext
from astrbot.core.message.message_event_result import (
CommandResult,
MessageChain,
MessageEventResult,
)
from astrbot.core.provider.register import llm_tools
class FunctionToolExecutor(BaseFunctionToolExecutor[AstrAgentContext]):
@classmethod
async def execute(cls, tool, run_context, **tool_args):
"""执行函数调用。
Args:
tool (FunctionTool): 要执行的工具。
run_context (ContextWrapper[AstrAgentContext]): 运行上下文, 本地工具执行时从中获取事件对象。
**tool_args: 函数调用的参数。
Returns:
AsyncGenerator[None | mcp.types.CallToolResult, None]
"""
if isinstance(tool, HandoffTool):
async for r in cls._execute_handoff(tool, run_context, **tool_args):
yield r
return
elif isinstance(tool, MCPTool):
async for r in cls._execute_mcp(tool, run_context, **tool_args):
yield r
return
else:
async for r in cls._execute_local(tool, run_context, **tool_args):
yield r
return
@classmethod
async def _execute_handoff(
cls,
tool: HandoffTool,
run_context: ContextWrapper[AstrAgentContext],
**tool_args,
):
input_ = tool_args.get("input")
# make toolset for the agent
tools = tool.agent.tools
if tools:
toolset = ToolSet()
for t in tools:
if isinstance(t, str):
_t = llm_tools.get_func(t)
if _t:
toolset.add_tool(_t)
elif isinstance(t, FunctionTool):
toolset.add_tool(t)
else:
toolset = None
ctx = run_context.context.context
event = run_context.context.event
umo = event.unified_msg_origin
prov_id = await ctx.get_current_chat_provider_id(umo)
llm_resp = await ctx.tool_loop_agent(
event=event,
chat_provider_id=prov_id,
prompt=input_,
system_prompt=tool.agent.instructions,
tools=toolset,
max_steps=30,
run_hooks=tool.agent.run_hooks,
)
yield mcp.types.CallToolResult(
content=[mcp.types.TextContent(type="text", text=llm_resp.completion_text)]
)
@classmethod
async def _execute_local(
cls,
tool: FunctionTool,
run_context: ContextWrapper[AstrAgentContext],
**tool_args,
):
event = run_context.context.event
if not event:
raise ValueError("Event must be provided for local function tools.")
is_override_call = False
for ty in type(tool).mro():
if "call" in ty.__dict__ and ty.__dict__["call"] is not FunctionTool.call:
is_override_call = True
break
# 检查 tool 下有没有 run 方法
if not tool.handler and not hasattr(tool, "run") and not is_override_call:
raise ValueError("Tool must have a valid handler or override 'run' method.")
awaitable = None
method_name = ""
if tool.handler:
awaitable = tool.handler
method_name = "decorator_handler"
elif is_override_call:
awaitable = tool.call
method_name = "call"
elif hasattr(tool, "run"):
awaitable = getattr(tool, "run")
method_name = "run"
if awaitable is None:
raise ValueError("Tool must have a valid handler or override 'run' method.")
wrapper = call_local_llm_tool(
context=run_context,
handler=awaitable,
method_name=method_name,
**tool_args,
)
while True:
try:
resp = await asyncio.wait_for(
anext(wrapper),
timeout=run_context.tool_call_timeout,
)
if resp is not None:
if isinstance(resp, mcp.types.CallToolResult):
yield resp
else:
text_content = mcp.types.TextContent(
type="text",
text=str(resp),
)
yield mcp.types.CallToolResult(content=[text_content])
else:
# NOTE: Tool 在这里直接请求发送消息给用户
# TODO: 是否需要判断 event.get_result() 是否为空?
# 如果为空,则说明没有发送消息给用户,并且返回值为空,将返回一个特殊的 TextContent,其内容如"工具没有返回内容"
if res := run_context.context.event.get_result():
if res.chain:
try:
await event.send(
MessageChain(
chain=res.chain,
type="tool_direct_result",
)
)
except Exception as e:
logger.error(
f"Tool 直接发送消息失败: {e}",
exc_info=True,
)
yield None
except asyncio.TimeoutError:
raise Exception(
f"tool {tool.name} execution timeout after {run_context.tool_call_timeout} seconds.",
)
except StopAsyncIteration:
break
@classmethod
async def _execute_mcp(
cls,
tool: FunctionTool,
run_context: ContextWrapper[AstrAgentContext],
**tool_args,
):
res = await tool.call(run_context, **tool_args)
if not res:
return
yield res
async def call_local_llm_tool(
context: ContextWrapper[AstrAgentContext],
handler: T.Callable[..., T.Awaitable[T.Any]],
method_name: str,
*args,
**kwargs,
) -> T.AsyncGenerator[T.Any, None]:
"""执行本地 LLM 工具的处理函数并处理其返回结果"""
ready_to_call = None # 一个协程或者异步生成器
trace_ = None
event = context.context.event
try:
if method_name == "run" or method_name == "decorator_handler":
ready_to_call = handler(event, *args, **kwargs)
elif method_name == "call":
ready_to_call = handler(context, *args, **kwargs)
else:
raise ValueError(f"未知的方法名: {method_name}")
except ValueError as e:
logger.error(f"调用本地 LLM 工具时出错: {e}", exc_info=True)
except TypeError:
logger.error("处理函数参数不匹配,请检查 handler 的定义。", exc_info=True)
except Exception as e:
trace_ = traceback.format_exc()
logger.error(f"调用本地 LLM 工具时出错: {e}\n{trace_}")
if not ready_to_call:
return
if inspect.isasyncgen(ready_to_call):
_has_yielded = False
try:
async for ret in ready_to_call:
# 这里逐步执行异步生成器, 对于每个 yield 返回的 ret, 执行下面的代码
# 返回值只能是 MessageEventResult 或者 None无返回值
_has_yielded = True
if isinstance(ret, (MessageEventResult, CommandResult)):
# 如果返回值是 MessageEventResult, 设置结果并继续
event.set_result(ret)
yield
else:
# 如果返回值是 None, 则不设置结果并继续
# 继续执行后续阶段
yield ret
if not _has_yielded:
# 如果这个异步生成器没有执行到 yield 分支
yield
except Exception as e:
logger.error(f"Previous Error: {trace_}")
raise e
elif inspect.iscoroutine(ready_to_call):
# 如果只是一个协程, 直接执行
ret = await ready_to_call
if isinstance(ret, (MessageEventResult, CommandResult)):
event.set_result(ret)
yield
else:
yield ret


@@ -0,0 +1,275 @@
import os
import uuid
from typing import TypedDict, TypeVar
from astrbot.core import AstrBotConfig, logger
from astrbot.core.config.astrbot_config import ASTRBOT_CONFIG_PATH
from astrbot.core.config.default import DEFAULT_CONFIG
from astrbot.core.platform.message_session import MessageSession
from astrbot.core.umop_config_router import UmopConfigRouter
from astrbot.core.utils.astrbot_path import get_astrbot_config_path
from astrbot.core.utils.shared_preferences import SharedPreferences
_VT = TypeVar("_VT")
class ConfInfo(TypedDict):
"""Configuration information for a specific session or platform."""
id: str # UUID of the configuration or "default"
name: str
path: str # File name to the configuration file
DEFAULT_CONFIG_CONF_INFO = ConfInfo(
id="default",
name="default",
path=ASTRBOT_CONFIG_PATH,
)
class AstrBotConfigManager:
"""A class to manage the system configuration of AstrBot, aka ACM"""
def __init__(
self,
default_config: AstrBotConfig,
ucr: UmopConfigRouter,
sp: SharedPreferences,
):
self.sp = sp
self.ucr = ucr
self.confs: dict[str, AstrBotConfig] = {}
"""uuid / "default" -> AstrBotConfig"""
self.confs["default"] = default_config
self.abconf_data = None
self._load_all_configs()
def _get_abconf_data(self) -> dict:
"""获取所有的 abconf 数据"""
if self.abconf_data is None:
self.abconf_data = self.sp.get(
"abconf_mapping",
{},
scope="global",
scope_id="global",
)
return self.abconf_data
def _load_all_configs(self):
"""Load all configurations from the shared preferences."""
abconf_data = self._get_abconf_data()
self.abconf_data = abconf_data
for uuid_, meta in abconf_data.items():
filename = meta["path"]
conf_path = os.path.join(get_astrbot_config_path(), filename)
if os.path.exists(conf_path):
conf = AstrBotConfig(config_path=conf_path)
self.confs[uuid_] = conf
else:
logger.warning(
f"Config file {conf_path} for UUID {uuid_} does not exist, skipping.",
)
continue
def _load_conf_mapping(self, umo: str | MessageSession) -> ConfInfo:
"""获取指定 umo 的配置文件 uuid, 如果不存在则返回默认配置(返回 "default")
Returns:
ConfInfo: 包含配置文件的 uuid, 路径和名称等信息, 是一个 dict 类型
"""
# uuid -> { "path": str, "name": str }
abconf_data = self._get_abconf_data()
if isinstance(umo, MessageSession):
umo = str(umo)
else:
try:
umo = str(MessageSession.from_str(umo)) # validate
except Exception:
return DEFAULT_CONFIG_CONF_INFO
conf_id = self.ucr.get_conf_id_for_umop(umo)
if conf_id:
meta = abconf_data.get(conf_id)
if meta and isinstance(meta, dict):
# the bind relation between umo and conf is defined in ucr now, so we remove "umop" here
meta.pop("umop", None)
return ConfInfo(**meta, id=conf_id)
return DEFAULT_CONFIG_CONF_INFO
def _save_conf_mapping(
self,
abconf_path: str,
abconf_id: str,
abconf_name: str | None = None,
) -> None:
"""保存配置文件的映射关系"""
abconf_data = self.sp.get(
"abconf_mapping",
{},
scope="global",
scope_id="global",
)
random_word = abconf_name or uuid.uuid4().hex[:8]
abconf_data[abconf_id] = {
"path": abconf_path,
"name": random_word,
}
self.sp.put("abconf_mapping", abconf_data, scope="global", scope_id="global")
self.abconf_data = abconf_data
def get_conf(self, umo: str | MessageSession | None) -> AstrBotConfig:
"""获取指定 umo 的配置文件。如果不存在,则 fallback 到默认配置文件。"""
if not umo:
return self.confs["default"]
if isinstance(umo, MessageSession):
umo = f"{umo.platform_id}:{umo.message_type}:{umo.session_id}"
uuid_ = self._load_conf_mapping(umo)["id"]
conf = self.confs.get(uuid_)
if not conf:
conf = self.confs["default"] # default MUST exists
return conf
@property
def default_conf(self) -> AstrBotConfig:
"""获取默认配置文件"""
return self.confs["default"]
def get_conf_info(self, umo: str | MessageSession) -> ConfInfo:
"""获取指定 umo 的配置文件元数据"""
if isinstance(umo, MessageSession):
umo = f"{umo.platform_id}:{umo.message_type}:{umo.session_id}"
return self._load_conf_mapping(umo)
def get_conf_list(self) -> list[ConfInfo]:
"""获取所有配置文件的元数据列表"""
conf_list = []
abconf_mapping = self._get_abconf_data()
for uuid_, meta in abconf_mapping.items():
if not isinstance(meta, dict):
continue
meta.pop("umop", None)
conf_list.append(ConfInfo(**meta, id=uuid_))
conf_list.append(DEFAULT_CONFIG_CONF_INFO)
return conf_list
def create_conf(
self,
config: dict = DEFAULT_CONFIG,
name: str | None = None,
) -> str:
conf_uuid = str(uuid.uuid4())
conf_file_name = f"abconf_{conf_uuid}.json"
conf_path = os.path.join(get_astrbot_config_path(), conf_file_name)
conf = AstrBotConfig(config_path=conf_path, default_config=config)
conf.save_config()
self._save_conf_mapping(conf_file_name, conf_uuid, abconf_name=name)
self.confs[conf_uuid] = conf
return conf_uuid
def delete_conf(self, conf_id: str) -> bool:
"""删除指定配置文件
Args:
conf_id: 配置文件的 UUID
Returns:
bool: 删除是否成功
Raises:
ValueError: 如果试图删除默认配置文件
"""
if conf_id == "default":
raise ValueError("不能删除默认配置文件")
# 从映射中移除
abconf_data = self.sp.get(
"abconf_mapping",
{},
scope="global",
scope_id="global",
)
if conf_id not in abconf_data:
logger.warning(f"配置文件 {conf_id} 不存在于映射中")
return False
# 获取配置文件路径
conf_path = os.path.join(
get_astrbot_config_path(),
abconf_data[conf_id]["path"],
)
# 删除配置文件
try:
if os.path.exists(conf_path):
os.remove(conf_path)
logger.info(f"已删除配置文件: {conf_path}")
except Exception as e:
logger.error(f"删除配置文件 {conf_path} 失败: {e}")
return False
# 从内存中移除
if conf_id in self.confs:
del self.confs[conf_id]
# 从映射中移除
del abconf_data[conf_id]
self.sp.put("abconf_mapping", abconf_data, scope="global", scope_id="global")
self.abconf_data = abconf_data
logger.info(f"成功删除配置文件 {conf_id}")
return True
def update_conf_info(self, conf_id: str, name: str | None = None) -> bool:
"""更新配置文件信息
Args:
conf_id: 配置文件的 UUID
name: 新的配置文件名称 (可选)
Returns:
bool: 更新是否成功
"""
if conf_id == "default":
raise ValueError("不能更新默认配置文件的信息")
abconf_data = self.sp.get(
"abconf_mapping",
{},
scope="global",
scope_id="global",
)
if conf_id not in abconf_data:
logger.warning(f"配置文件 {conf_id} 不存在于映射中")
return False
# 更新名称
if name is not None:
abconf_data[conf_id]["name"] = name
# 保存更新
self.sp.put("abconf_mapping", abconf_data, scope="global", scope_id="global")
self.abconf_data = abconf_data
logger.info(f"成功更新配置文件 {conf_id} 的信息")
return True
def g(
self,
umo: str | None = None,
key: str | None = None,
default: _VT = None,
) -> _VT:
"""获取配置项。umo 为 None 时使用默认配置"""
if umo is None:
return self.confs["default"].get(key, default)
conf = self.get_conf(umo)
return conf.get(key, default)
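Typical calls against the manager above, as a sketch (not part of the diff; the umo string and the "log_level" key are just examples, and acm stands for the AstrBotConfigManager instance constructed in the core lifecycle diff below):

def demo(acm):
    conf_id = acm.create_conf(name="group-profile")   # new per-scope config file
    umo = "aiocqhttp:GroupMessage:123456789"          # example unified_msg_origin
    conf = acm.get_conf(umo)                          # falls back to default if unmapped
    log_level = acm.g(umo, "log_level", "INFO")       # single-key lookup with default
    infos = acm.get_conf_list()                       # ConfInfo dicts, incl. "default"
    return conf_id, conf, log_level, infos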


@@ -1,9 +1,9 @@
from .default import DEFAULT_CONFIG, VERSION, DB_PATH
from .astrbot_config import *
from .default import DB_PATH, DEFAULT_CONFIG, VERSION
__all__ = [
"DB_PATH",
"DEFAULT_CONFIG",
"VERSION",
"DB_PATH",
"AstrBotConfig",
]


@@ -1,11 +1,12 @@
import os
import enum
import json
import logging
import enum
from .default import DEFAULT_CONFIG, DEFAULT_VALUE_MAP
from typing import Dict
import os
from astrbot.core.utils.astrbot_path import get_astrbot_data_path
from .default import DEFAULT_CONFIG, DEFAULT_VALUE_MAP
ASTRBOT_CONFIG_PATH = os.path.join(get_astrbot_data_path(), "cmd_config.json")
logger = logging.getLogger("astrbot")
@@ -27,7 +28,7 @@ class AstrBotConfig(dict):
self,
config_path: str = ASTRBOT_CONFIG_PATH,
default_config: dict = DEFAULT_CONFIG,
schema: dict = None,
schema: dict | None = None,
):
super().__init__()
@@ -45,7 +46,7 @@ class AstrBotConfig(dict):
json.dump(default_config, f, indent=4, ensure_ascii=False)
object.__setattr__(self, "first_deploy", True) # 标记第一次部署
with open(config_path, "r", encoding="utf-8-sig") as f:
with open(config_path, encoding="utf-8-sig") as f:
conf_str = f.read()
conf = json.loads(conf_str)
@@ -65,7 +66,7 @@ class AstrBotConfig(dict):
for k, v in schema.items():
if v["type"] not in DEFAULT_VALUE_MAP:
raise TypeError(
f"不受支持的配置类型 {v['type']}。支持的类型有:{DEFAULT_VALUE_MAP.keys()}"
f"不受支持的配置类型 {v['type']}。支持的类型有:{DEFAULT_VALUE_MAP.keys()}",
)
if "default" in v:
default = v["default"]
@@ -82,7 +83,7 @@ class AstrBotConfig(dict):
return conf
def check_config_integrity(self, refer_conf: Dict, conf: Dict, path=""):
def check_config_integrity(self, refer_conf: dict, conf: dict, path=""):
"""检查配置完整性,如果有新的配置项或顺序不一致则返回 True"""
has_new = False
@@ -97,8 +98,7 @@ class AstrBotConfig(dict):
logger.info(f"检查到配置项 {path_} 不存在,已插入默认值 {value}")
new_conf[key] = value
has_new = True
else:
if conf[key] is None:
elif conf[key] is None:
# 配置项为 None,使用默认值
new_conf[key] = value
has_new = True
@@ -111,7 +111,9 @@ class AstrBotConfig(dict):
else:
# 递归检查并同步顺序
child_has_new = self.check_config_integrity(
value, conf[key], path + "." + key if path else key
value,
conf[key],
path + "." + key if path else key,
)
new_conf[key] = conf[key]
has_new |= child_has_new
@@ -140,7 +142,7 @@ class AstrBotConfig(dict):
return has_new
def save_config(self, replace_config: Dict = None):
def save_config(self, replace_config: dict | None = None):
"""将配置写入文件
如果传入 replace_config,则将配置替换为 replace_config

File diff suppressed because it is too large.


@@ -0,0 +1,110 @@
"""
配置元数据国际化工具
提供配置元数据的国际化键转换功能
"""
from typing import Any
class ConfigMetadataI18n:
"""配置元数据国际化转换器"""
@staticmethod
def _get_i18n_key(group: str, section: str, field: str, attr: str) -> str:
"""
生成国际化键
Args:
group: 配置组,如 'ai_group', 'platform_group'
section: 配置节,如 'agent_runner', 'general'
field: 字段名,如 'enable', 'default_provider'
attr: 属性类型,如 'description', 'hint', 'labels'
Returns:
国际化键,格式如: 'ai_group.agent_runner.enable.description'
"""
if field:
return f"{group}.{section}.{field}.{attr}"
else:
return f"{group}.{section}.{attr}"
@staticmethod
def convert_to_i18n_keys(metadata: dict[str, Any]) -> dict[str, Any]:
"""
将配置元数据转换为使用国际化键
Args:
metadata: 原始配置元数据字典
Returns:
使用国际化键的配置元数据字典
"""
result = {}
for group_key, group_data in metadata.items():
group_result = {
"name": f"{group_key}.name",
"metadata": {},
}
for section_key, section_data in group_data.get("metadata", {}).items():
section_result = {
"description": f"{group_key}.{section_key}.description",
"type": section_data.get("type"),
}
# 复制其他属性
for key in ["items", "condition", "_special", "invisible"]:
if key in section_data:
section_result[key] = section_data[key]
# 处理 hint
if "hint" in section_data:
section_result["hint"] = f"{group_key}.{section_key}.hint"
# 处理 items 中的字段
if "items" in section_data and isinstance(section_data["items"], dict):
items_result = {}
for field_key, field_data in section_data["items"].items():
# 处理嵌套的点号字段名(如 provider_settings.enable)
field_name = field_key
field_result = {}
# 复制基本属性
for attr in [
"type",
"condition",
"_special",
"invisible",
"options",
]:
if attr in field_data:
field_result[attr] = field_data[attr]
# 转换文本属性为国际化键
if "description" in field_data:
field_result["description"] = (
f"{group_key}.{section_key}.{field_name}.description"
)
if "hint" in field_data:
field_result["hint"] = (
f"{group_key}.{section_key}.{field_name}.hint"
)
if "labels" in field_data:
field_result["labels"] = (
f"{group_key}.{section_key}.{field_name}.labels"
)
items_result[field_key] = field_result
section_result["items"] = items_result
group_result["metadata"][section_key] = section_result
result[group_key] = group_result
return result
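A sketch of the resulting key layout on a tiny metadata fragment (not part of the diff; the group/section/field names reuse the examples from the docstrings above):

meta = {
    "ai_group": {
        "name": "AI",
        "metadata": {
            "agent_runner": {
                "description": "Agent runner settings",
                "type": "object",
                "hint": "Advanced options",
                "items": {
                    "enable": {"type": "bool", "description": "Enable", "hint": "Tip"},
                },
            },
        },
    },
}
out = ConfigMetadataI18n.convert_to_i18n_keys(meta)
assert out["ai_group"]["name"] == "ai_group.name"
assert out["ai_group"]["metadata"]["agent_runner"]["description"] == "ai_group.agent_runner.description"
assert out["ai_group"]["metadata"]["agent_runner"]["items"]["enable"]["hint"] == "ai_group.agent_runner.enable.hint"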


@@ -1,56 +1,109 @@
"""
AstrBot 会话-对话管理器, 维护两个本地存储, 其中一个是 json 格式的shared_preferences, 另外一个是数据库
"""AstrBot 会话-对话管理器, 维护两个本地存储, 其中一个是 json 格式的shared_preferences, 另外一个是数据库.
在 AstrBot 中, 会话和对话是独立的, 会话用于标记对话窗口, 例如群聊"123456789"可以建立一个会话,
在一个会话中可以建立多个对话, 并且支持对话的切换和删除
"""
import uuid
import json
import asyncio
from collections.abc import Awaitable, Callable
from astrbot.core import sp
from typing import Dict, List
from astrbot.core.agent.message import AssistantMessageSegment, UserMessageSegment
from astrbot.core.db import BaseDatabase
from astrbot.core.db.po import Conversation
from astrbot.core.db.po import Conversation, ConversationV2
class ConversationManager:
"""负责管理会话与 LLM 的对话,某个会话当前正在用哪个对话。"""
def __init__(self, db_helper: BaseDatabase):
# session_conversations 字典记录会话ID-对话ID 映射关系
self.session_conversations: Dict[str, str] = sp.get("session_conversation", {})
self.session_conversations: dict[str, str] = {}
self.db = db_helper
self.save_interval = 60 # 每 60 秒保存一次
self._start_periodic_save()
def _start_periodic_save(self):
"""启动定时保存任务"""
asyncio.create_task(self._periodic_save())
# 会话删除回调函数列表(用于级联清理,如知识库配置)
self._on_session_deleted_callbacks: list[Callable[[str], Awaitable[None]]] = []
async def _periodic_save(self):
"""定时保存会话对话映射关系到存储中"""
while True:
await asyncio.sleep(self.save_interval)
self._save_to_storage()
def register_on_session_deleted(
self,
callback: Callable[[str], Awaitable[None]],
) -> None:
"""注册会话删除回调函数.
def _save_to_storage(self):
"""保存会话对话映射关系到存储中"""
sp.put("session_conversation", self.session_conversations)
其他模块可以注册回调来响应会话删除事件,实现级联清理。
例如:知识库模块可以注册回调来清理会话的知识库配置。
async def new_conversation(self, unified_msg_origin: str) -> str:
"""新建对话,并将当前会话的对话转移到新对话
Args:
callback: 回调函数接收会话ID (unified_msg_origin) 作为参数
"""
self._on_session_deleted_callbacks.append(callback)
async def _trigger_session_deleted(self, unified_msg_origin: str) -> None:
"""触发会话删除回调.
Args:
unified_msg_origin: 会话ID
"""
for callback in self._on_session_deleted_callbacks:
try:
await callback(unified_msg_origin)
except Exception as e:
from astrbot.core import logger
logger.error(
f"会话删除回调执行失败 (session: {unified_msg_origin}): {e}",
)
def _convert_conv_from_v2_to_v1(self, conv_v2: ConversationV2) -> Conversation:
"""将 ConversationV2 对象转换为 Conversation 对象"""
created_at = int(conv_v2.created_at.timestamp())
updated_at = int(conv_v2.updated_at.timestamp())
return Conversation(
platform_id=conv_v2.platform_id,
user_id=conv_v2.user_id,
cid=conv_v2.conversation_id,
history=json.dumps(conv_v2.content or []),
title=conv_v2.title,
persona_id=conv_v2.persona_id,
created_at=created_at,
updated_at=updated_at,
)
async def new_conversation(
self,
unified_msg_origin: str,
platform_id: str | None = None,
content: list[dict] | None = None,
title: str | None = None,
persona_id: str | None = None,
) -> str:
"""新建对话,并将当前会话的对话转移到新对话.
Args:
unified_msg_origin (str): 统一的消息来源字符串。格式为 platform_name:message_type:session_id
Returns:
conversation_id (str): 对话 ID, 是 uuid 格式的字符串
"""
conversation_id = str(uuid.uuid4())
self.db.new_conversation(user_id=unified_msg_origin, cid=conversation_id)
self.session_conversations[unified_msg_origin] = conversation_id
sp.put("session_conversation", self.session_conversations)
return conversation_id
if not platform_id:
# 如果没有提供 platform_id则从 unified_msg_origin 中解析
parts = unified_msg_origin.split(":")
if len(parts) >= 3:
platform_id = parts[0]
if not platform_id:
platform_id = "unknown"
conv = await self.db.create_conversation(
user_id=unified_msg_origin,
platform_id=platform_id,
content=content,
title=title,
persona_id=persona_id,
)
self.session_conversations[unified_msg_origin] = conv.conversation_id
await sp.session_put(unified_msg_origin, "sel_conv_id", conv.conversation_id)
return conv.conversation_id
async def switch_conversation(self, unified_msg_origin: str, conversation_id: str):
"""切换会话的对话
@@ -58,147 +111,294 @@ class ConversationManager:
Args:
unified_msg_origin (str): 统一的消息来源字符串。格式为 platform_name:message_type:session_id
conversation_id (str): 对话 ID, 是 uuid 格式的字符串
"""
self.session_conversations[unified_msg_origin] = conversation_id
sp.put("session_conversation", self.session_conversations)
await sp.session_put(unified_msg_origin, "sel_conv_id", conversation_id)
async def delete_conversation(
self, unified_msg_origin: str, conversation_id: str = None
self,
unified_msg_origin: str,
conversation_id: str | None = None,
):
"""删除会话的对话,当 conversation_id 为 None 时删除会话当前的对话
Args:
unified_msg_origin (str): 统一的消息来源字符串。格式为 platform_name:message_type:session_id
conversation_id (str): 对话 ID, 是 uuid 格式的字符串
"""
if not conversation_id:
conversation_id = self.session_conversations.get(unified_msg_origin)
if conversation_id:
self.db.delete_conversation(user_id=unified_msg_origin, cid=conversation_id)
del self.session_conversations[unified_msg_origin]
sp.put("session_conversation", self.session_conversations)
await self.db.delete_conversation(cid=conversation_id)
curr_cid = await self.get_curr_conversation_id(unified_msg_origin)
if curr_cid == conversation_id:
self.session_conversations.pop(unified_msg_origin, None)
await sp.session_remove(unified_msg_origin, "sel_conv_id")
async def get_curr_conversation_id(self, unified_msg_origin: str) -> str:
async def delete_conversations_by_user_id(self, unified_msg_origin: str):
"""删除会话的所有对话
Args:
unified_msg_origin (str): 统一的消息来源字符串。格式为 platform_name:message_type:session_id
"""
await self.db.delete_conversations_by_user_id(user_id=unified_msg_origin)
self.session_conversations.pop(unified_msg_origin, None)
await sp.session_remove(unified_msg_origin, "sel_conv_id")
# 触发会话删除回调(级联清理)
await self._trigger_session_deleted(unified_msg_origin)
async def get_curr_conversation_id(self, unified_msg_origin: str) -> str | None:
"""获取会话当前的对话 ID
Args:
unified_msg_origin (str): 统一的消息来源字符串。格式为 platform_name:message_type:session_id
Returns:
conversation_id (str): 对话 ID, 是 uuid 格式的字符串
"""
return self.session_conversations.get(unified_msg_origin, None)
ret = self.session_conversations.get(unified_msg_origin, None)
if not ret:
ret = await sp.session_get(unified_msg_origin, "sel_conv_id", None)
if ret:
self.session_conversations[unified_msg_origin] = ret
return ret
async def get_conversation(
self,
unified_msg_origin: str,
conversation_id: str,
create_if_not_exists: bool = False,
) -> Conversation:
"""获取会话的对话
) -> Conversation | None:
"""获取会话的对话.
Args:
unified_msg_origin (str): 统一的消息来源字符串。格式为 platform_name:message_type:session_id
conversation_id (str): 对话 ID, 是 uuid 格式的字符串
create_if_not_exists (bool): 如果对话不存在,是否创建一个新的对话
Returns:
conversation (Conversation): 对话对象
"""
conv = self.db.get_conversation_by_user_id(unified_msg_origin, conversation_id)
conv = await self.db.get_conversation_by_id(cid=conversation_id)
if not conv and create_if_not_exists:
# 如果对话不存在且需要创建,则新建一个对话
conversation_id = await self.new_conversation(unified_msg_origin)
return self.db.get_conversation_by_user_id(
unified_msg_origin, conversation_id
)
return self.db.get_conversation_by_user_id(unified_msg_origin, conversation_id)
conv = await self.db.get_conversation_by_id(cid=conversation_id)
conv_res = None
if conv:
conv_res = self._convert_conv_from_v2_to_v1(conv)
return conv_res
async def get_conversations(self, unified_msg_origin: str) -> List[Conversation]:
"""获取会话的所有对话
async def get_conversations(
self,
unified_msg_origin: str | None = None,
platform_id: str | None = None,
) -> list[Conversation]:
"""获取对话列表.
Args:
unified_msg_origin (str): 统一的消息来源字符串。格式为 platform_name:message_type:session_id
unified_msg_origin (str): 统一的消息来源字符串。格式为 platform_name:message_type:session_id,可选
platform_id (str): 平台 ID, 可选参数, 用于过滤对话
Returns:
conversations (List[Conversation]): 对话对象列表
"""
return self.db.get_conversations(unified_msg_origin)
convs = await self.db.get_conversations(
user_id=unified_msg_origin,
platform_id=platform_id,
)
convs_res = []
for conv in convs:
conv_res = self._convert_conv_from_v2_to_v1(conv)
convs_res.append(conv_res)
return convs_res
async def get_filtered_conversations(
self,
page: int = 1,
page_size: int = 20,
platform_ids: list[str] | None = None,
search_query: str = "",
**kwargs,
) -> tuple[list[Conversation], int]:
"""获取过滤后的对话列表.
Args:
page (int): 页码, 默认为 1
page_size (int): 每页大小, 默认为 20
platform_ids (list[str]): 平台 ID 列表, 可选
search_query (str): 搜索查询字符串, 可选
Returns:
conversations (list[Conversation]): 对话对象列表
"""
convs, cnt = await self.db.get_filtered_conversations(
page=page,
page_size=page_size,
platform_ids=platform_ids,
search_query=search_query,
**kwargs,
)
convs_res = []
for conv in convs:
conv_res = self._convert_conv_from_v2_to_v1(conv)
convs_res.append(conv_res)
return convs_res, cnt
async def update_conversation(
self, unified_msg_origin: str, conversation_id: str, history: List[Dict]
):
"""更新会话的对话
self,
unified_msg_origin: str,
conversation_id: str | None = None,
history: list[dict] | None = None,
title: str | None = None,
persona_id: str | None = None,
) -> None:
"""更新会话的对话.
Args:
unified_msg_origin (str): 统一的消息来源字符串。格式为 platform_name:message_type:session_id
conversation_id (str): 对话 ID, 是 uuid 格式的字符串
history (List[Dict]): 对话历史记录, 是一个字典列表, 每个字典包含 role 和 content 字段
"""
if not conversation_id:
# 如果没有提供 conversation_id则获取当前的
conversation_id = await self.get_curr_conversation_id(unified_msg_origin)
if conversation_id:
self.db.update_conversation(
user_id=unified_msg_origin,
await self.db.update_conversation(
cid=conversation_id,
history=json.dumps(history),
title=title,
persona_id=persona_id,
content=history,
)
async def update_conversation_title(self, unified_msg_origin: str, title: str):
"""更新会话的对话标题
async def update_conversation_title(
self,
unified_msg_origin: str,
title: str,
conversation_id: str | None = None,
) -> None:
"""更新会话的对话标题.
Args:
unified_msg_origin (str): 统一的消息来源字符串。格式为 platform_name:message_type:session_id
title (str): 对话标题
conversation_id (str): 对话 ID, 是 uuid 格式的字符串
Deprecated:
Use `update_conversation` with `title` parameter instead.
"""
conversation_id = self.session_conversations.get(unified_msg_origin)
if conversation_id:
self.db.update_conversation_title(
user_id=unified_msg_origin, cid=conversation_id, title=title
await self.update_conversation(
unified_msg_origin=unified_msg_origin,
conversation_id=conversation_id,
title=title,
)
async def update_conversation_persona_id(
self, unified_msg_origin: str, persona_id: str
):
"""更新会话的对话 Persona ID
self,
unified_msg_origin: str,
persona_id: str,
conversation_id: str | None = None,
) -> None:
"""更新会话的对话 Persona ID.
Args:
unified_msg_origin (str): 统一的消息来源字符串。格式为 platform_name:message_type:session_id
persona_id (str): 对话 Persona ID
conversation_id (str): 对话 ID, 是 uuid 格式的字符串
Deprecated:
Use `update_conversation` with `persona_id` parameter instead.
"""
conversation_id = self.session_conversations.get(unified_msg_origin)
if conversation_id:
self.db.update_conversation_persona_id(
user_id=unified_msg_origin, cid=conversation_id, persona_id=persona_id
await self.update_conversation(
unified_msg_origin=unified_msg_origin,
conversation_id=conversation_id,
persona_id=persona_id,
)
async def add_message_pair(
self,
cid: str,
user_message: UserMessageSegment | dict,
assistant_message: AssistantMessageSegment | dict,
) -> None:
"""Add a user-assistant message pair to the conversation history.
Args:
cid (str): Conversation ID
user_message (UserMessageSegment | dict): OpenAI-format user message object or dict
assistant_message (AssistantMessageSegment | dict): OpenAI-format assistant message object or dict
Raises:
Exception: If the conversation with the given ID is not found
"""
conv = await self.db.get_conversation_by_id(cid=cid)
if not conv:
raise Exception(f"Conversation with id {cid} not found")
history = conv.content or []
if isinstance(user_message, UserMessageSegment):
user_msg_dict = user_message.model_dump()
else:
user_msg_dict = user_message
if isinstance(assistant_message, AssistantMessageSegment):
assistant_msg_dict = assistant_message.model_dump()
else:
assistant_msg_dict = assistant_message
history.append(user_msg_dict)
history.append(assistant_msg_dict)
await self.db.update_conversation(
cid=cid,
content=history,
)
async def get_human_readable_context(
self, unified_msg_origin, conversation_id, page=1, page_size=10
):
"""获取人类可读的上下文
self,
unified_msg_origin: str,
conversation_id: str,
page: int = 1,
page_size: int = 10,
) -> tuple[list[str], int]:
"""获取人类可读的上下文.
Args:
unified_msg_origin (str): 统一的消息来源字符串。格式为 platform_name:message_type:session_id
conversation_id (str): 对话 ID, 是 uuid 格式的字符串
page (int): 页码
page_size (int): 每页大小
"""
conversation = await self.get_conversation(unified_msg_origin, conversation_id)
if not conversation:
return [], 0
history = json.loads(conversation.history)
contexts = []
temp_contexts = []
# contexts_groups 存放按顺序的段落(每个段落是一个 str 列表),
# 之后会被展平成一个扁平的 str 列表返回。
contexts_groups: list[list[str]] = []
temp_contexts: list[str] = []
for record in history:
if record["role"] == "user":
temp_contexts.append(f"User: {record['content']}")
elif record["role"] == "assistant":
if "content" in record and record["content"]:
if record.get("content"):
temp_contexts.append(f"Assistant: {record['content']}")
elif "tool_calls" in record:
tool_calls_str = json.dumps(
record["tool_calls"], ensure_ascii=False
record["tool_calls"],
ensure_ascii=False,
)
temp_contexts.append(f"Assistant: [函数调用] {tool_calls_str}")
else:
temp_contexts.append("Assistant: [未知的内容]")
contexts.insert(0, temp_contexts)
contexts_groups.insert(0, temp_contexts)
temp_contexts = []
# 展平 contexts 列表
contexts = [item for sublist in contexts for item in sublist]
# 展平分组后的 contexts 列表为单层字符串列表
contexts = [item for sublist in contexts_groups for item in sublist]
# 计算分页
paged_contexts = contexts[(page - 1) * page_size : page * page_size]
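The refactored manager is fully async; a usage sketch (not part of the diff; conv_mgr is an initialized ConversationManager and the umo/message contents are placeholders):

async def demo(conv_mgr):
    umo = "aiocqhttp:GroupMessage:123456789"
    cid = await conv_mgr.new_conversation(umo, title="demo chat")
    await conv_mgr.add_message_pair(
        cid,
        user_message={"role": "user", "content": "hi"},
        assistant_message={"role": "assistant", "content": "hello!"},
    )
    conv = await conv_mgr.get_conversation(umo, cid)   # legacy Conversation view (V2 -> V1)
    texts, total = await conv_mgr.get_human_readable_context(umo, cid, page=1)
    return conv, texts, total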


@@ -1,5 +1,5 @@
"""
Astrbot 核心生命周期管理类, 负责管理 AstrBot 的启动、停止、重启等操作。
"""Astrbot 核心生命周期管理类, 负责管理 AstrBot 的启动、停止、重启等操作.
该类负责初始化各个组件, 包括 ProviderManager、PlatformManager、ConversationManager、PluginManager、PipelineScheduler、EventBus等。
该类还负责加载和执行插件, 以及处理事件总线的分发。
@@ -9,56 +9,73 @@ Astrbot 核心生命周期管理类, 负责管理 AstrBot 的启动、停止、
3. 执行启动完成事件钩子
"""
import traceback
import asyncio
import time
import threading
import os
from .event_bus import EventBus
from . import astrbot_config
import threading
import time
import traceback
from asyncio import Queue
from typing import List
from astrbot.core.pipeline.scheduler import PipelineScheduler, PipelineContext
from astrbot.core.star import PluginManager
from astrbot.core.platform.manager import PlatformManager
from astrbot.core.star.context import Context
from astrbot.core.provider.manager import ProviderManager
from astrbot.api import logger, sp
from astrbot.core import LogBroker
from astrbot.core.db import BaseDatabase
from astrbot.core.updator import AstrBotUpdator
from astrbot.core import logger
from astrbot.core.astrbot_config_mgr import AstrBotConfigManager
from astrbot.core.config.default import VERSION
from astrbot.core.conversation_mgr import ConversationManager
from astrbot.core.star.star_handler import star_handlers_registry, EventType
from astrbot.core.star.star_handler import star_map
from astrbot.core.db import BaseDatabase
from astrbot.core.knowledge_base.kb_mgr import KnowledgeBaseManager
from astrbot.core.persona_mgr import PersonaManager
from astrbot.core.pipeline.scheduler import PipelineContext, PipelineScheduler
from astrbot.core.platform.manager import PlatformManager
from astrbot.core.platform_message_history_mgr import PlatformMessageHistoryManager
from astrbot.core.provider.manager import ProviderManager
from astrbot.core.star import PluginManager
from astrbot.core.star.context import Context
from astrbot.core.star.star_handler import EventType, star_handlers_registry, star_map
from astrbot.core.umop_config_router import UmopConfigRouter
from astrbot.core.updator import AstrBotUpdator
from astrbot.core.utils.migra_helper import migra
from . import astrbot_config, html_renderer
from .event_bus import EventBus
class AstrBotCoreLifecycle:
"""
AstrBot 核心生命周期管理类, 负责管理 AstrBot 的启动、停止、重启等操作。
"""AstrBot 核心生命周期管理类, 负责管理 AstrBot 的启动、停止、重启等操作.
该类负责初始化各个组件, 包括 ProviderManager、PlatformManager、ConversationManager、PluginManager、PipelineScheduler、
EventBus 等。
该类还负责加载和执行插件, 以及处理事件总线的分发。
"""
def __init__(self, log_broker: LogBroker, db: BaseDatabase):
def __init__(self, log_broker: LogBroker, db: BaseDatabase) -> None:
self.log_broker = log_broker # 初始化日志代理
self.astrbot_config = astrbot_config # 初始化配置
self.db = db # 初始化数据库
# 设置代理
if self.astrbot_config.get("http_proxy", ""):
os.environ["https_proxy"] = self.astrbot_config["http_proxy"]
os.environ["http_proxy"] = self.astrbot_config["http_proxy"]
if proxy := os.environ.get("https_proxy"):
logger.debug(f"Using proxy: {proxy}")
os.environ["no_proxy"] = "localhost"
proxy_config = self.astrbot_config.get("http_proxy", "")
if proxy_config != "":
os.environ["https_proxy"] = proxy_config
os.environ["http_proxy"] = proxy_config
logger.debug(f"Using proxy: {proxy_config}")
# 设置 no_proxy
no_proxy_list = self.astrbot_config.get("no_proxy", [])
os.environ["no_proxy"] = ",".join(no_proxy_list)
else:
# 清空代理环境变量
if "https_proxy" in os.environ:
del os.environ["https_proxy"]
if "http_proxy" in os.environ:
del os.environ["http_proxy"]
if "no_proxy" in os.environ:
del os.environ["no_proxy"]
logger.debug("HTTP proxy cleared")
async def initialize(self):
"""
初始化 AstrBot 核心生命周期管理类, 负责初始化各个组件, 包括 ProviderManager、PlatformManager、ConversationManager、PluginManager、PipelineScheduler、EventBus、AstrBotUpdator等。
"""
async def initialize(self) -> None:
"""初始化 AstrBot 核心生命周期管理类.
负责初始化各个组件, 包括 ProviderManager、PlatformManager、ConversationManager、PluginManager、PipelineScheduler、EventBus、AstrBotUpdator等。
"""
# 初始化日志代理
logger.info("AstrBot v" + VERSION)
if os.environ.get("TESTING", ""):
@@ -66,11 +83,45 @@ class AstrBotCoreLifecycle:
else:
logger.setLevel(self.astrbot_config["log_level"]) # 设置日志级别
await self.db.initialize()
await html_renderer.initialize()
# 初始化 UMOP 配置路由器
self.umop_config_router = UmopConfigRouter(sp=sp)
# 初始化 AstrBot 配置管理器
self.astrbot_config_mgr = AstrBotConfigManager(
default_config=self.astrbot_config,
ucr=self.umop_config_router,
sp=sp,
)
# apply migration
try:
await migra(
self.db,
self.astrbot_config_mgr,
self.umop_config_router,
self.astrbot_config_mgr,
)
except Exception as e:
logger.error(f"AstrBot migration failed: {e!s}")
logger.error(traceback.format_exc())
# 初始化事件队列
self.event_queue = Queue()
# 初始化人格管理器
self.persona_mgr = PersonaManager(self.db, self.astrbot_config_mgr)
await self.persona_mgr.initialize()
# 初始化供应商管理器
self.provider_manager = ProviderManager(self.astrbot_config, self.db)
self.provider_manager = ProviderManager(
self.astrbot_config_mgr,
self.db,
self.persona_mgr,
)
# 初始化平台管理器
self.platform_manager = PlatformManager(self.astrbot_config, self.event_queue)
@@ -78,6 +129,12 @@ class AstrBotCoreLifecycle:
# 初始化对话管理器
self.conversation_manager = ConversationManager(self.db)
# 初始化平台消息历史管理器
self.platform_message_history_manager = PlatformMessageHistoryManager(self.db)
# 初始化知识库管理器
self.kb_manager = KnowledgeBaseManager(self.provider_manager)
# 初始化提供给插件的上下文
self.star_context = Context(
self.event_queue,
@@ -86,6 +143,10 @@ class AstrBotCoreLifecycle:
self.provider_manager,
self.platform_manager,
self.conversation_manager,
self.platform_message_history_manager,
self.persona_mgr,
self.astrbot_config_mgr,
self.kb_manager,
)
# 初始化插件管理器
@@ -97,23 +158,26 @@ class AstrBotCoreLifecycle:
# 根据配置实例化各个 Provider
await self.provider_manager.initialize()
await self.kb_manager.initialize()
# 初始化消息事件流水线调度器
self.pipeline_scheduler = PipelineScheduler(
PipelineContext(self.astrbot_config, self.plugin_manager)
)
await self.pipeline_scheduler.initialize()
self.pipeline_scheduler_mapping = await self.load_pipeline_scheduler()
# 初始化更新器
self.astrbot_updator = AstrBotUpdator()
# 初始化事件总线
self.event_bus = EventBus(self.event_queue, self.pipeline_scheduler)
self.event_bus = EventBus(
self.event_queue,
self.pipeline_scheduler_mapping,
self.astrbot_config_mgr,
)
# 记录启动时间
self.start_time = int(time.time())
# 初始化当前任务列表
self.curr_tasks: List[asyncio.Task] = []
self.curr_tasks: list[asyncio.Task] = []
# 根据配置实例化各个平台适配器
await self.platform_manager.initialize()
@@ -121,13 +185,13 @@ class AstrBotCoreLifecycle:
# 初始化关闭控制面板的事件
self.dashboard_shutdown_event = asyncio.Event()
def _load(self):
"""加载事件总线和任务并初始化"""
def _load(self) -> None:
"""加载事件总线和任务并初始化."""
# 创建一个异步任务来执行事件总线的 dispatch() 方法
# dispatch是一个无限循环的协程, 从事件队列中获取事件并处理
event_bus_task = asyncio.create_task(
self.event_bus.dispatch(), name="event_bus"
self.event_bus.dispatch(),
name="event_bus",
)
# 把插件中注册的所有协程函数注册到事件总线中并执行
@@ -138,16 +202,17 @@ class AstrBotCoreLifecycle:
tasks_ = [event_bus_task, *extra_tasks]
for task in tasks_:
self.curr_tasks.append(
asyncio.create_task(self._task_wrapper(task), name=task.get_name())
asyncio.create_task(self._task_wrapper(task), name=task.get_name()),
)
self.start_time = int(time.time())
async def _task_wrapper(self, task: asyncio.Task):
"""异步任务包装器, 用于处理异步任务执行中出现的各种异常
async def _task_wrapper(self, task: asyncio.Task) -> None:
"""异步任务包装器, 用于处理异步任务执行中出现的各种异常.
Args:
task (asyncio.Task): 要执行的异步任务
"""
try:
await task
@@ -160,19 +225,22 @@ class AstrBotCoreLifecycle:
logger.error(f"| {line}")
logger.error("-------")
async def start(self):
"""启动 AstrBot 核心生命周期管理类, 用load加载事件总线和任务并初始化, 执行启动完成事件钩子"""
async def start(self) -> None:
"""启动 AstrBot 核心生命周期管理类.
用load加载事件总线和任务并初始化, 执行启动完成事件钩子
"""
self._load()
logger.info("AstrBot 启动完成。")
# 执行启动完成事件钩子
handlers = star_handlers_registry.get_handlers_by_event_type(
EventType.OnAstrBotLoadedEvent
EventType.OnAstrBotLoadedEvent,
)
for handler in handlers:
try:
logger.info(
f"hook(on_astrbot_loaded) -> {star_map[handler.handler_module_path].name} - {handler.handler_name}"
f"hook(on_astrbot_loaded) -> {star_map[handler.handler_module_path].name} - {handler.handler_name}",
)
await handler.handler()
except BaseException:
@@ -181,8 +249,8 @@ class AstrBotCoreLifecycle:
# 同时运行curr_tasks中的所有任务
await asyncio.gather(*self.curr_tasks, return_exceptions=True)
async def stop(self):
"""停止 AstrBot 核心生命周期管理类, 取消所有当前任务并终止各个管理器"""
async def stop(self) -> None:
"""停止 AstrBot 核心生命周期管理类, 取消所有当前任务并终止各个管理器."""
# 请求停止所有正在运行的异步任务
for task in self.curr_tasks:
task.cancel()
@@ -193,11 +261,12 @@ class AstrBotCoreLifecycle:
except Exception as e:
logger.warning(traceback.format_exc())
logger.warning(
f"插件 {plugin.name} 未被正常终止 {e!s}, 可能会导致资源泄露等问题。"
f"插件 {plugin.name} 未被正常终止 {e!s}, 可能会导致资源泄露等问题。",
)
await self.provider_manager.terminate()
await self.platform_manager.terminate()
await self.kb_manager.terminate()
self.dashboard_shutdown_event.set()
# 再次遍历curr_tasks等待每个任务真正结束
@@ -209,21 +278,59 @@ class AstrBotCoreLifecycle:
except Exception as e:
logger.error(f"任务 {task.get_name()} 发生错误: {e}")
async def restart(self):
async def restart(self) -> None:
"""重启 AstrBot 核心生命周期管理类, 终止各个管理器并重新加载平台实例"""
await self.provider_manager.terminate()
await self.platform_manager.terminate()
await self.kb_manager.terminate()
self.dashboard_shutdown_event.set()
threading.Thread(
target=self.astrbot_updator._reboot, name="restart", daemon=True
target=self.astrbot_updator._reboot,
name="restart",
daemon=True,
).start()
def load_platform(self) -> List[asyncio.Task]:
def load_platform(self) -> list[asyncio.Task]:
"""加载平台实例并返回所有平台实例的异步任务列表"""
tasks = []
platform_insts = self.platform_manager.get_insts()
for platform_inst in platform_insts:
tasks.append(
asyncio.create_task(platform_inst.run(), name=platform_inst.meta().name)
asyncio.create_task(
platform_inst.run(),
name=f"{platform_inst.meta().id}({platform_inst.meta().name})",
),
)
return tasks
async def load_pipeline_scheduler(self) -> dict[str, PipelineScheduler]:
"""加载消息事件流水线调度器.
Returns:
dict[str, PipelineScheduler]: 配置文件 ID 到流水线调度器的映射
"""
mapping = {}
for conf_id, ab_config in self.astrbot_config_mgr.confs.items():
scheduler = PipelineScheduler(
PipelineContext(ab_config, self.plugin_manager, conf_id),
)
await scheduler.initialize()
mapping[conf_id] = scheduler
return mapping
async def reload_pipeline_scheduler(self, conf_id: str) -> None:
"""重新加载消息事件流水线调度器.
Args:
conf_id (str): 要重新加载的配置文件 ID
"""
ab_config = self.astrbot_config_mgr.confs.get(conf_id)
if not ab_config:
raise ValueError(f"配置文件 {conf_id} 不存在")
scheduler = PipelineScheduler(
PipelineContext(ab_config, self.plugin_manager, conf_id),
)
await scheduler.initialize()
self.pipeline_scheduler_mapping[conf_id] = scheduler
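A sketch of applying a config change with the new per-config schedulers (not part of the diff; core stands for an AstrBotCoreLifecycle instance and conf_id comes from AstrBotConfigManager):

async def apply_config_change(core, conf_id: str) -> None:
    conf = core.astrbot_config_mgr.confs[conf_id]   # AstrBotConfig for this profile
    conf.save_config()                              # persist edits to its json file
    await core.reload_pipeline_scheduler(conf_id)   # rebuild only this scheduler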


@@ -1,161 +1,364 @@
import abc
import datetime
import typing as T
from contextlib import asynccontextmanager
from dataclasses import dataclass
from typing import List, Dict, Any, Tuple
from astrbot.core.db.po import Stats, LLMHistory, ATRIVision, Conversation
from deprecated import deprecated
from sqlalchemy.ext.asyncio import AsyncSession, create_async_engine
from sqlalchemy.orm import sessionmaker
from astrbot.core.db.po import (
Attachment,
ConversationV2,
Persona,
PlatformMessageHistory,
PlatformSession,
PlatformStat,
Preference,
Stats,
)
@dataclass
class BaseDatabase(abc.ABC):
"""
数据库基类
"""
"""数据库基类"""
DATABASE_URL = ""
def __init__(self) -> None:
pass
self.engine = create_async_engine(
self.DATABASE_URL,
echo=False,
future=True,
)
self.AsyncSessionLocal = sessionmaker(
self.engine,
class_=AsyncSession,
expire_on_commit=False,
)
def insert_base_metrics(self, metrics: dict):
"""插入基础指标数据"""
self.insert_platform_metrics(metrics["platform_stats"])
self.insert_plugin_metrics(metrics["plugin_stats"])
self.insert_command_metrics(metrics["command_stats"])
self.insert_llm_metrics(metrics["llm_stats"])
async def initialize(self):
"""初始化数据库连接"""
@abc.abstractmethod
def insert_platform_metrics(self, metrics: dict):
"""插入平台指标数据"""
raise NotImplementedError
@abc.abstractmethod
def insert_plugin_metrics(self, metrics: dict):
"""插入插件指标数据"""
raise NotImplementedError
@abc.abstractmethod
def insert_command_metrics(self, metrics: dict):
"""插入指令指标数据"""
raise NotImplementedError
@abc.abstractmethod
def insert_llm_metrics(self, metrics: dict):
"""插入 LLM 指标数据"""
raise NotImplementedError
@abc.abstractmethod
def update_llm_history(self, session_id: str, content: str, provider_type: str):
"""更新 LLM 历史记录。当不存在 session_id 时插入"""
raise NotImplementedError
@abc.abstractmethod
def get_llm_history(
self, session_id: str = None, provider_type: str = None
) -> List[LLMHistory]:
"""获取 LLM 历史记录, 如果 session_id 为 None, 返回所有"""
raise NotImplementedError
@asynccontextmanager
async def get_db(self) -> T.AsyncGenerator[AsyncSession, None]:
"""Get a database session."""
if not self.inited:
await self.initialize()
self.inited = True
async with self.AsyncSessionLocal() as session:
yield session
@deprecated(version="4.0.0", reason="Use get_platform_stats instead")
@abc.abstractmethod
def get_base_stats(self, offset_sec: int = 86400) -> Stats:
"""获取基础统计数据"""
raise NotImplementedError
@deprecated(version="4.0.0", reason="Use get_platform_stats instead")
@abc.abstractmethod
def get_total_message_count(self) -> int:
"""获取总消息数"""
raise NotImplementedError
@deprecated(version="4.0.0", reason="Use get_platform_stats instead")
@abc.abstractmethod
def get_grouped_base_stats(self, offset_sec: int = 86400) -> Stats:
"""获取基础统计数据(合并)"""
raise NotImplementedError
@abc.abstractmethod
def insert_atri_vision_data(self, vision_data: ATRIVision):
"""插入 ATRI 视觉数据"""
raise NotImplementedError
# New methods in v4.0.0
@abc.abstractmethod
def get_atri_vision_data(self) -> List[ATRIVision]:
"""获取 ATRI 视觉数据"""
raise NotImplementedError
async def insert_platform_stats(
self,
platform_id: str,
platform_type: str,
count: int = 1,
timestamp: datetime.datetime | None = None,
) -> None:
"""Insert a new platform statistic record."""
...
@abc.abstractmethod
def get_atri_vision_data_by_path_or_id(
self, url_or_path: str, id: str
) -> ATRIVision:
"""通过 url 或 path 获取 ATRI 视觉数据"""
raise NotImplementedError
async def count_platform_stats(self) -> int:
"""Count the number of platform statistics records."""
...
@abc.abstractmethod
def get_conversation_by_user_id(self, user_id: str, cid: str) -> Conversation:
"""通过 user_id 和 cid 获取 Conversation"""
raise NotImplementedError
async def get_platform_stats(self, offset_sec: int = 86400) -> list[PlatformStat]:
"""Get platform statistics within the specified offset in seconds and group by platform_id."""
...
@abc.abstractmethod
def new_conversation(self, user_id: str, cid: str):
"""新建 Conversation"""
raise NotImplementedError
async def get_conversations(
self,
user_id: str | None = None,
platform_id: str | None = None,
) -> list[ConversationV2]:
"""Get all conversations for a specific user and platform_id(optional).
@abc.abstractmethod
def get_conversations(self, user_id: str) -> List[Conversation]:
raise NotImplementedError
@abc.abstractmethod
def update_conversation(self, user_id: str, cid: str, history: str):
"""更新 Conversation"""
raise NotImplementedError
@abc.abstractmethod
def delete_conversation(self, user_id: str, cid: str):
"""删除 Conversation"""
raise NotImplementedError
@abc.abstractmethod
def update_conversation_title(self, user_id: str, cid: str, title: str):
"""更新 Conversation 标题"""
raise NotImplementedError
@abc.abstractmethod
def update_conversation_persona_id(self, user_id: str, cid: str, persona_id: str):
"""更新 Conversation Persona ID"""
raise NotImplementedError
@abc.abstractmethod
def get_all_conversations(
self, page: int = 1, page_size: int = 20
) -> Tuple[List[Dict[str, Any]], int]:
"""获取所有对话,支持分页
Args:
page: 页码从1开始
page_size: 每页数量
Returns:
Tuple[List[Dict[str, Any]], int]: 返回一个元组,包含对话列表和总对话数
content is not included in the result.
"""
raise NotImplementedError
...
@abc.abstractmethod
def get_filtered_conversations(
async def get_conversation_by_id(self, cid: str) -> ConversationV2:
"""Get a specific conversation by its ID."""
...
@abc.abstractmethod
async def get_all_conversations(
self,
page: int = 1,
page_size: int = 20,
platforms: List[str] = None,
message_types: List[str] = None,
search_query: str = None,
exclude_ids: List[str] = None,
exclude_platforms: List[str] = None,
) -> Tuple[List[Dict[str, Any]], int]:
"""获取筛选后的对话列表
) -> list[ConversationV2]:
"""Get all conversations with pagination."""
...
Args:
page: 页码
page_size: 每页数量
platforms: 平台筛选列表
message_types: 消息类型筛选列表
search_query: 搜索关键词
exclude_ids: 排除的用户ID列表
exclude_platforms: 排除的平台列表
@abc.abstractmethod
async def get_filtered_conversations(
self,
page: int = 1,
page_size: int = 20,
platform_ids: list[str] | None = None,
search_query: str = "",
**kwargs,
) -> tuple[list[ConversationV2], int]:
"""Get conversations filtered by platform IDs and search query."""
...
Returns:
Tuple[List[Dict[str, Any]], int]: 返回一个元组,包含对话列表和总对话数
"""
raise NotImplementedError
@abc.abstractmethod
async def create_conversation(
self,
user_id: str,
platform_id: str,
content: list[dict] | None = None,
title: str | None = None,
persona_id: str | None = None,
cid: str | None = None,
created_at: datetime.datetime | None = None,
updated_at: datetime.datetime | None = None,
) -> ConversationV2:
"""Create a new conversation."""
...
@abc.abstractmethod
async def update_conversation(
self,
cid: str,
title: str | None = None,
persona_id: str | None = None,
content: list[dict] | None = None,
) -> None:
"""Update a conversation's history."""
...
@abc.abstractmethod
async def delete_conversation(self, cid: str) -> None:
"""Delete a conversation by its ID."""
...
@abc.abstractmethod
async def delete_conversations_by_user_id(self, user_id: str) -> None:
"""Delete all conversations for a specific user."""
...
@abc.abstractmethod
async def insert_platform_message_history(
self,
platform_id: str,
user_id: str,
content: dict,
sender_id: str | None = None,
sender_name: str | None = None,
) -> None:
"""Insert a new platform message history record."""
...
@abc.abstractmethod
async def delete_platform_message_offset(
self,
platform_id: str,
user_id: str,
offset_sec: int = 86400,
) -> None:
"""Delete platform message history records newer than the specified offset."""
...
@abc.abstractmethod
async def get_platform_message_history(
self,
platform_id: str,
user_id: str,
page: int = 1,
page_size: int = 20,
) -> list[PlatformMessageHistory]:
"""Get platform message history for a specific user."""
...
@abc.abstractmethod
async def insert_attachment(
self,
path: str,
type: str,
mime_type: str,
):
"""Insert a new attachment record."""
...
@abc.abstractmethod
async def get_attachment_by_id(self, attachment_id: str) -> Attachment:
"""Get an attachment by its ID."""
...
@abc.abstractmethod
async def insert_persona(
self,
persona_id: str,
system_prompt: str,
begin_dialogs: list[str] | None = None,
tools: list[str] | None = None,
) -> Persona:
"""Insert a new persona record."""
...
@abc.abstractmethod
async def get_persona_by_id(self, persona_id: str) -> Persona:
"""Get a persona by its ID."""
...
@abc.abstractmethod
async def get_personas(self) -> list[Persona]:
"""Get all personas for a specific bot."""
...
@abc.abstractmethod
async def update_persona(
self,
persona_id: str,
system_prompt: str | None = None,
begin_dialogs: list[str] | None = None,
tools: list[str] | None = None,
) -> Persona | None:
"""Update a persona's system prompt or begin dialogs."""
...
@abc.abstractmethod
async def delete_persona(self, persona_id: str) -> None:
"""Delete a persona by its ID."""
...
@abc.abstractmethod
async def insert_preference_or_update(
self,
scope: str,
scope_id: str,
key: str,
value: dict,
) -> Preference:
"""Insert a new preference record."""
...
@abc.abstractmethod
async def get_preference(self, scope: str, scope_id: str, key: str) -> Preference:
"""Get a preference by scope ID and key."""
...
@abc.abstractmethod
async def get_preferences(
self,
scope: str,
scope_id: str | None = None,
key: str | None = None,
) -> list[Preference]:
"""Get all preferences for a specific scope ID or key."""
...
@abc.abstractmethod
async def remove_preference(self, scope: str, scope_id: str, key: str) -> None:
"""Remove a preference by scope ID and key."""
...
@abc.abstractmethod
async def clear_preferences(self, scope: str, scope_id: str) -> None:
"""Clear all preferences for a specific scope ID."""
...
# @abc.abstractmethod
# async def insert_llm_message(
# self,
# cid: str,
# role: str,
# content: list,
# tool_calls: list = None,
# tool_call_id: str = None,
# parent_id: str = None,
# ) -> LLMMessage:
# """Insert a new LLM message into the conversation."""
# ...
# @abc.abstractmethod
# async def get_llm_messages(self, cid: str) -> list[LLMMessage]:
# """Get all LLM messages for a specific conversation."""
# ...
@abc.abstractmethod
async def get_session_conversations(
self,
page: int = 1,
page_size: int = 20,
search_query: str | None = None,
platform: str | None = None,
) -> tuple[list[dict], int]:
"""Get paginated session conversations with joined conversation and persona details, support search and platform filter."""
...
# ====
# Platform Session Management
# ====
@abc.abstractmethod
async def create_platform_session(
self,
creator: str,
platform_id: str = "webchat",
session_id: str | None = None,
display_name: str | None = None,
is_group: int = 0,
) -> PlatformSession:
"""Create a new Platform session."""
...
@abc.abstractmethod
async def get_platform_session_by_id(
self, session_id: str
) -> PlatformSession | None:
"""Get a Platform session by its ID."""
...
@abc.abstractmethod
async def get_platform_sessions_by_creator(
self,
creator: str,
platform_id: str | None = None,
page: int = 1,
page_size: int = 20,
) -> list[PlatformSession]:
"""Get all Platform sessions for a specific creator (username) and optionally platform."""
...
@abc.abstractmethod
async def update_platform_session(
self,
session_id: str,
display_name: str | None = None,
) -> None:
"""Update a Platform session's updated_at timestamp and optionally display_name."""
...
@abc.abstractmethod
async def delete_platform_session(self, session_id: str) -> None:
"""Delete a Platform session by its ID."""
...
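For orientation, a minimal sketch of how a concrete backend might implement one of the abstract coroutines above on top of the shared `get_db()` session helper; the class name, database URL, and query are assumptions, and the remaining abstract methods are omitted.

from sqlmodel import select

class SQLiteV4Database(BaseDatabase):  # sketch only; other abstract methods omitted
    DATABASE_URL = "sqlite+aiosqlite:///data/data_v4.db"

    async def get_persona_by_id(self, persona_id: str) -> Persona:
        async with self.get_db() as session:
            result = await session.execute(
                select(Persona).where(Persona.persona_id == persona_id)
            )
            persona = result.scalar_one_or_none()
            if persona is None:
                raise ValueError(f"Persona {persona_id} not found")
            return persona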

View File

@@ -0,0 +1,69 @@
import os
from astrbot.api import logger, sp
from astrbot.core.config import AstrBotConfig
from astrbot.core.db import BaseDatabase
from astrbot.core.utils.astrbot_path import get_astrbot_data_path
from .migra_3_to_4 import (
migration_conversation_table,
migration_persona_data,
migration_platform_table,
migration_preferences,
migration_webchat_data,
)
async def check_migration_needed_v4(db_helper: BaseDatabase) -> bool:
"""检查是否需要进行数据库迁移
如果存在 data_v3.db 并且 preference 中没有 migration_done_v4,则需要进行迁移。
"""
# 仅当 data 目录下存在旧版本数据data_v3.db 文件)时才考虑迁移
data_dir = get_astrbot_data_path()
data_v3_db = os.path.join(data_dir, "data_v3.db")
if not os.path.exists(data_v3_db):
return False
migration_done = await db_helper.get_preference(
"global",
"global",
"migration_done_v4",
)
if migration_done:
return False
return True
async def do_migration_v4(
db_helper: BaseDatabase,
platform_id_map: dict[str, dict[str, str]],
astrbot_config: AstrBotConfig,
) -> None:
"""执行数据库迁移
迁移旧的 webchat_conversation 表到新的 conversation 表。
迁移旧的 platform 到新的 platform_stats 表。
"""
if not await check_migration_needed_v4(db_helper):
return
logger.info("开始执行数据库迁移...")
# 执行会话表迁移
await migration_conversation_table(db_helper, platform_id_map)
# 执行人格数据迁移
await migration_persona_data(db_helper, astrbot_config)
# 执行 WebChat 数据迁移
await migration_webchat_data(db_helper, platform_id_map)
# 执行偏好设置迁移
await migration_preferences(db_helper, platform_id_map)
# 执行平台统计表迁移
await migration_platform_table(db_helper, platform_id_map)
# 标记迁移完成
await sp.put_async("global", "global", "migration_done_v4", True)
logger.info("数据库迁移完成。")

View File

@@ -0,0 +1,357 @@
import datetime
import json
from sqlalchemy import text
from sqlalchemy.ext.asyncio import AsyncSession
from astrbot.api import logger, sp
from astrbot.core.config import AstrBotConfig
from astrbot.core.config.default import DB_PATH
from astrbot.core.db.po import ConversationV2, PlatformMessageHistory
from astrbot.core.platform.astr_message_event import MessageSesion
from .. import BaseDatabase
from .shared_preferences_v3 import sp as sp_v3
from .sqlite_v3 import SQLiteDatabase as SQLiteV3DatabaseV3
"""
1. 迁移旧的 webchat_conversation 表到新的 conversation 表。
2. 迁移旧的 platform 到新的 platform_stats 表。
"""
def get_platform_id(
platform_id_map: dict[str, dict[str, str]],
old_platform_name: str,
) -> str:
return platform_id_map.get(
old_platform_name,
{"platform_id": old_platform_name, "platform_type": old_platform_name},
).get("platform_id", old_platform_name)
def get_platform_type(
platform_id_map: dict[str, dict[str, str]],
old_platform_name: str,
) -> str:
return platform_id_map.get(
old_platform_name,
{"platform_id": old_platform_name, "platform_type": old_platform_name},
).get("platform_type", old_platform_name)
async def migration_conversation_table(
db_helper: BaseDatabase,
platform_id_map: dict[str, dict[str, str]],
):
db_helper_v3 = SQLiteV3DatabaseV3(
db_path=DB_PATH.replace("data_v4.db", "data_v3.db"),
)
conversations, total_cnt = db_helper_v3.get_all_conversations(
page=1,
page_size=10000000,
)
logger.info(f"迁移 {total_cnt} 条旧的会话数据到新的表中...")
async with db_helper.get_db() as dbsession:
dbsession: AsyncSession
async with dbsession.begin():
for idx, conversation in enumerate(conversations):
if total_cnt > 0 and (idx + 1) % max(1, total_cnt // 10) == 0:
progress = int((idx + 1) / total_cnt * 100)
if progress % 10 == 0:
logger.info(f"进度: {progress}% ({idx + 1}/{total_cnt})")
try:
conv = db_helper_v3.get_conversation_by_user_id(
user_id=conversation.get("user_id", "unknown"),
cid=conversation.get("cid", "unknown"),
)
if not conv:
logger.info(
f"未找到该条旧会话对应的具体数据: {conversation}, 跳过。",
)
continue
if ":" not in conv.user_id:
continue
session = MessageSesion.from_str(session_str=conv.user_id)
platform_id = get_platform_id(
platform_id_map,
session.platform_name,
)
session.platform_id = platform_id # 更新平台名称为新的 ID
conv_v2 = ConversationV2(
user_id=str(session),
content=json.loads(conv.history) if conv.history else [],
platform_id=platform_id,
title=conv.title,
persona_id=conv.persona_id,
conversation_id=conv.cid,
created_at=datetime.datetime.fromtimestamp(conv.created_at),
updated_at=datetime.datetime.fromtimestamp(conv.updated_at),
)
dbsession.add(conv_v2)
except Exception as e:
logger.error(
f"迁移旧会话 {conversation.get('cid', 'unknown')} 失败: {e}",
exc_info=True,
)
logger.info(f"成功迁移 {total_cnt} 条旧的会话数据到新表。")
async def migration_platform_table(
db_helper: BaseDatabase,
platform_id_map: dict[str, dict[str, str]],
):
db_helper_v3 = SQLiteV3DatabaseV3(
db_path=DB_PATH.replace("data_v4.db", "data_v3.db"),
)
secs_from_2023_4_10_to_now = (
datetime.datetime.now(datetime.timezone.utc)
- datetime.datetime(2023, 4, 10, tzinfo=datetime.timezone.utc)
).total_seconds()
offset_sec = int(secs_from_2023_4_10_to_now)
logger.info(f"迁移旧平台数据offset_sec: {offset_sec} 秒。")
stats = db_helper_v3.get_base_stats(offset_sec=offset_sec)
logger.info(f"迁移 {len(stats.platform)} 条旧的平台数据到新的表中...")
platform_stats_v3 = stats.platform
if not platform_stats_v3:
logger.info("没有找到旧平台数据,跳过迁移。")
return
first_time_stamp = platform_stats_v3[0].timestamp
end_time_stamp = platform_stats_v3[-1].timestamp
start_time = first_time_stamp - (first_time_stamp % 3600) # 向下取整到小时
end_time = end_time_stamp + (3600 - (end_time_stamp % 3600)) # 向上取整到小时
idx = 0
async with db_helper.get_db() as dbsession:
dbsession: AsyncSession
async with dbsession.begin():
total_buckets = (end_time - start_time) // 3600
for bucket_idx, bucket_end in enumerate(range(start_time, end_time, 3600)):
if bucket_idx % 500 == 0:
progress = int((bucket_idx + 1) / total_buckets * 100)
logger.info(f"进度: {progress}% ({bucket_idx + 1}/{total_buckets})")
cnt = 0
while (
idx < len(platform_stats_v3)
and platform_stats_v3[idx].timestamp < bucket_end
):
cnt += platform_stats_v3[idx].count
idx += 1
if cnt == 0:
continue
# idx 此时已越过该小时桶内最后一条记录,取 idx - 1 对应的平台名,避免越界
platform_id = get_platform_id(
platform_id_map,
platform_stats_v3[idx - 1].name,
)
platform_type = get_platform_type(
platform_id_map,
platform_stats_v3[idx - 1].name,
)
try:
await dbsession.execute(
text("""
INSERT INTO platform_stats (timestamp, platform_id, platform_type, count)
VALUES (:timestamp, :platform_id, :platform_type, :count)
ON CONFLICT(timestamp, platform_id, platform_type) DO UPDATE SET
count = platform_stats.count + EXCLUDED.count
"""),
{
"timestamp": datetime.datetime.fromtimestamp(
bucket_end,
tz=datetime.timezone.utc,
),
"platform_id": platform_id,
"platform_type": platform_type,
"count": cnt,
},
)
except Exception:
logger.error(
f"迁移平台统计数据失败: {platform_id}, {platform_type}, 时间戳: {bucket_end}",
exc_info=True,
)
logger.info(f"成功迁移 {len(platform_stats_v3)} 条旧的平台数据到新表。")
async def migration_webchat_data(
db_helper: BaseDatabase,
platform_id_map: dict[str, dict[str, str]],
):
"""迁移 WebChat 的历史记录到新的 PlatformMessageHistory 表中"""
db_helper_v3 = SQLiteV3DatabaseV3(
db_path=DB_PATH.replace("data_v4.db", "data_v3.db"),
)
conversations, total_cnt = db_helper_v3.get_all_conversations(
page=1,
page_size=10000000,
)
logger.info(f"迁移 {total_cnt} 条旧的 WebChat 会话数据到新的表中...")
async with db_helper.get_db() as dbsession:
dbsession: AsyncSession
async with dbsession.begin():
for idx, conversation in enumerate(conversations):
if total_cnt > 0 and (idx + 1) % max(1, total_cnt // 10) == 0:
progress = int((idx + 1) / total_cnt * 100)
if progress % 10 == 0:
logger.info(f"进度: {progress}% ({idx + 1}/{total_cnt})")
try:
conv = db_helper_v3.get_conversation_by_user_id(
user_id=conversation.get("user_id", "unknown"),
cid=conversation.get("cid", "unknown"),
)
if not conv:
logger.info(
f"未找到该条旧会话对应的具体数据: {conversation}, 跳过。",
)
continue
if ":" in conv.user_id:
continue
platform_id = "webchat"
history = json.loads(conv.history) if conv.history else []
for msg in history:
type_ = msg.get("type") # user type, "bot" or "user"
new_history = PlatformMessageHistory(
platform_id=platform_id,
user_id=conv.cid, # we use conv.cid as user_id for webchat
content=msg,
sender_id=type_,
sender_name=type_,
)
dbsession.add(new_history)
except Exception:
logger.error(
f"迁移旧 WebChat 会话 {conversation.get('cid', 'unknown')} 失败",
exc_info=True,
)
logger.info(f"成功迁移 {total_cnt} 条旧的 WebChat 会话数据到新表。")
async def migration_persona_data(
db_helper: BaseDatabase,
astrbot_config: AstrBotConfig,
):
"""迁移 Persona 数据到新的表中。
旧的 Persona 数据存储在 preference 中,新的 Persona 数据存储在 persona 表中。
"""
v3_persona_config: list[dict] = astrbot_config.get("persona", [])
total_personas = len(v3_persona_config)
logger.info(f"迁移 {total_personas} 个 Persona 配置到新表中...")
for idx, persona in enumerate(v3_persona_config):
if total_personas > 0 and (idx + 1) % max(1, total_personas // 10) == 0:
progress = int((idx + 1) / total_personas * 100)
if progress % 10 == 0:
logger.info(f"进度: {progress}% ({idx + 1}/{total_personas})")
try:
begin_dialogs = persona.get("begin_dialogs", [])
mood_imitation_dialogs = persona.get("mood_imitation_dialogs", [])
parts = []
user_turn = True
for mood_dialog in mood_imitation_dialogs:
if user_turn:
parts.append(f"A: {mood_dialog}\n")
else:
parts.append(f"B: {mood_dialog}\n")
user_turn = not user_turn
mood_prompt = "".join(parts)
system_prompt = persona.get("prompt", "")
if mood_prompt:
system_prompt += f"Here are few shots of dialogs, you need to imitate the tone of 'B' in the following dialogs to respond:\n {mood_prompt}"
persona_new = await db_helper.insert_persona(
persona_id=persona["name"],
system_prompt=system_prompt,
begin_dialogs=begin_dialogs,
)
logger.info(
f"迁移 Persona {persona['name']}({persona_new.system_prompt[:30]}...) 到新表成功。",
)
except Exception as e:
logger.error(f"解析 Persona 配置失败:{e}")
async def migration_preferences(
db_helper: BaseDatabase,
platform_id_map: dict[str, dict[str, str]],
):
# 1. global scope migration
keys = [
"inactivated_llm_tools",
"inactivated_plugins",
"curr_provider",
"curr_provider_tts",
"curr_provider_stt",
"alter_cmd",
]
for key in keys:
value = sp_v3.get(key)
if value is not None:
await sp.put_async("global", "global", key, value)
logger.info(f"迁移全局偏好设置 {key} 成功,值: {value}")
# 2. umo scope migration
session_conversation = sp_v3.get("session_conversation", default={})
for umo, conversation_id in session_conversation.items():
if not umo or not conversation_id:
continue
try:
session = MessageSesion.from_str(session_str=umo)
platform_id = get_platform_id(platform_id_map, session.platform_name)
session.platform_id = platform_id
await sp.put_async("umo", str(session), "sel_conv_id", conversation_id)
logger.info(f"迁移会话 {umo} 的对话数据到新表成功,平台 ID: {platform_id}")
except Exception as e:
logger.error(f"迁移会话 {umo} 的对话数据失败: {e}", exc_info=True)
session_service_config = sp_v3.get("session_service_config", default={})
for umo, config in session_service_config.items():
if not umo or not config:
continue
try:
session = MessageSesion.from_str(session_str=umo)
platform_id = get_platform_id(platform_id_map, session.platform_name)
session.platform_id = platform_id
await sp.put_async("umo", str(session), "session_service_config", config)
logger.info(f"迁移会话 {umo} 的服务配置到新表成功,平台 ID: {platform_id}")
except Exception as e:
logger.error(f"迁移会话 {umo} 的服务配置失败: {e}", exc_info=True)
session_variables = sp_v3.get("session_variables", default={})
for umo, variables in session_variables.items():
if not umo or not variables:
continue
try:
session = MessageSesion.from_str(session_str=umo)
platform_id = get_platform_id(platform_id_map, session.platform_name)
session.platform_id = platform_id
await sp.put_async("umo", str(session), "session_variables", variables)
except Exception as e:
logger.error(f"迁移会话 {umo} 的变量失败: {e}", exc_info=True)
session_provider_perf = sp_v3.get("session_provider_perf", default={})
for umo, perf in session_provider_perf.items():
if not umo or not perf:
continue
try:
session = MessageSesion.from_str(session_str=umo)
platform_id = get_platform_id(platform_id_map, session.platform_name)
session.platform_id = platform_id
for provider_type, provider_id in perf.items():
await sp.put_async(
"umo",
str(session),
f"provider_perf_{provider_type}",
provider_id,
)
logger.info(
f"迁移会话 {umo} 的提供商偏好到新表成功,平台 ID: {platform_id}",
)
except Exception as e:
logger.error(f"迁移会话 {umo} 的提供商偏好失败: {e}", exc_info=True)

View File

@@ -0,0 +1,44 @@
from astrbot.api import logger, sp
from astrbot.core.astrbot_config_mgr import AstrBotConfigManager
from astrbot.core.umop_config_router import UmopConfigRouter
async def migrate_45_to_46(acm: AstrBotConfigManager, ucr: UmopConfigRouter):
abconf_data = acm.abconf_data
if not isinstance(abconf_data, dict):
# should be unreachable
logger.warning(
f"migrate_45_to_46: abconf_data is not a dict (type={type(abconf_data)}). Value: {abconf_data!r}",
)
return
# 如果任何一项带有 umop,则说明需要迁移
need_migration = False
for conf_id, conf_info in abconf_data.items():
if isinstance(conf_info, dict) and "umop" in conf_info:
need_migration = True
break
if not need_migration:
return
logger.info("Starting migration from version 4.5 to 4.6")
# extract umo->conf_id mapping
umo_to_conf_id = {}
for conf_id, conf_info in abconf_data.items():
if isinstance(conf_info, dict) and "umop" in conf_info:
umop_ls = conf_info.pop("umop")
if not isinstance(umop_ls, list):
continue
for umo in umop_ls:
if isinstance(umo, str) and umo not in umo_to_conf_id:
umo_to_conf_id[umo] = conf_id
# update the abconf data
await sp.global_put("abconf_mapping", abconf_data)
# update the umop config router
await ucr.update_routing_data(umo_to_conf_id)
logger.info("Migration from version 45 to 46 completed successfully")

View File

@@ -0,0 +1,131 @@
"""Migration script for WebChat sessions.
This migration creates PlatformSession from existing platform_message_history records.
Changes:
- Creates platform_sessions table
- Adds platform_id field (default: 'webchat')
- Adds display_name field
- Session_id format: {platform_id}_{uuid}
"""
from sqlalchemy import func, select
from sqlmodel import col
from astrbot.api import logger, sp
from astrbot.core.db import BaseDatabase
from astrbot.core.db.po import ConversationV2, PlatformMessageHistory, PlatformSession
async def migrate_webchat_session(db_helper: BaseDatabase):
"""Create PlatformSession records from platform_message_history.
This migration extracts all unique user_ids from platform_message_history
where platform_id='webchat' and creates corresponding PlatformSession records.
"""
# 检查是否已经完成迁移
migration_done = await db_helper.get_preference(
"global", "global", "migration_done_webchat_session_1"
)
if migration_done:
return
logger.info("开始执行数据库迁移WebChat 会话迁移)...")
try:
async with db_helper.get_db() as session:
# 从 platform_message_history 创建 PlatformSession
query = (
select(
col(PlatformMessageHistory.user_id),
col(PlatformMessageHistory.sender_name),
func.min(PlatformMessageHistory.created_at).label("earliest"),
func.max(PlatformMessageHistory.updated_at).label("latest"),
)
.where(col(PlatformMessageHistory.platform_id) == "webchat")
.where(col(PlatformMessageHistory.sender_id) != "bot")
.group_by(col(PlatformMessageHistory.user_id))
)
result = await session.execute(query)
webchat_users = result.all()
if not webchat_users:
logger.info("没有找到需要迁移的 WebChat 数据")
await sp.put_async(
"global", "global", "migration_done_webchat_session_1", True
)
return
logger.info(f"找到 {len(webchat_users)} 个 WebChat 会话需要迁移")
# 检查已存在的会话
existing_query = select(col(PlatformSession.session_id))
existing_result = await session.execute(existing_query)
existing_session_ids = {row[0] for row in existing_result.fetchall()}
# 查询 Conversations 表中的 title用于设置 display_name
# 对于每个 user_id,对应的 conversation user_id 格式为: webchat:FriendMessage:webchat!astrbot!{user_id}
user_ids_to_query = [
f"webchat:FriendMessage:webchat!astrbot!{user_id}"
for user_id, _, _, _ in webchat_users
]
conv_query = select(
col(ConversationV2.user_id), col(ConversationV2.title)
).where(col(ConversationV2.user_id).in_(user_ids_to_query))
conv_result = await session.execute(conv_query)
# 创建 user_id -> title 的映射字典
title_map = {
user_id.replace("webchat:FriendMessage:webchat!astrbot!", ""): title
for user_id, title in conv_result.fetchall()
}
# 批量创建 PlatformSession 记录
sessions_to_add = []
skipped_count = 0
for user_id, sender_name, created_at, updated_at in webchat_users:
# user_id 就是 webchat_conv_id (session_id)
session_id = user_id
# sender_name 通常是 username但可能为 None
creator = sender_name if sender_name else "guest"
# 检查是否已经存在该会话
if session_id in existing_session_ids:
logger.debug(f"会话 {session_id} 已存在,跳过")
skipped_count += 1
continue
# 从 Conversations 表中获取 display_name
display_name = title_map.get(user_id)
# 创建新的 PlatformSession保留原有的时间戳
new_session = PlatformSession(
session_id=session_id,
platform_id="webchat",
creator=creator,
is_group=0,
created_at=created_at,
updated_at=updated_at,
display_name=display_name,
)
sessions_to_add.append(new_session)
# 批量插入
if sessions_to_add:
session.add_all(sessions_to_add)
await session.commit()
logger.info(
f"WebChat 会话迁移完成!成功迁移: {len(sessions_to_add)}, 跳过: {skipped_count}",
)
else:
logger.info("没有新会话需要迁移")
# 标记迁移完成
await sp.put_async("global", "global", "migration_done_webchat_session_1", True)
except Exception as e:
logger.error(f"迁移过程中发生错误: {e}", exc_info=True)
raise
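As a side note, this migration reuses the historical `user_id` as `session_id`, while the module docstring describes newly created sessions as `{platform_id}_{uuid}`. A minimal sketch of that naming (the helper itself is hypothetical):

import uuid

def new_session_id(platform_id: str = "webchat") -> str:
    # Hypothetical helper matching the documented {platform_id}_{uuid} format.
    return f"{platform_id}_{uuid.uuid4()}"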

View File

@@ -0,0 +1,48 @@
import json
import os
from typing import TypeVar
from astrbot.core.utils.astrbot_path import get_astrbot_data_path
_VT = TypeVar("_VT")
class SharedPreferences:
def __init__(self, path=None):
if path is None:
path = os.path.join(get_astrbot_data_path(), "shared_preferences.json")
self.path = path
self._data = self._load_preferences()
def _load_preferences(self):
if os.path.exists(self.path):
try:
with open(self.path) as f:
return json.load(f)
except json.JSONDecodeError:
os.remove(self.path)
return {}
def _save_preferences(self):
with open(self.path, "w") as f:
json.dump(self._data, f, indent=4, ensure_ascii=False)
f.flush()
def get(self, key, default: _VT = None) -> _VT:
return self._data.get(key, default)
def put(self, key, value):
self._data[key] = value
self._save_preferences()
def remove(self, key):
if key in self._data:
del self._data[key]
self._save_preferences()
def clear(self):
self._data.clear()
self._save_preferences()
sp = SharedPreferences()
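A short usage sketch of the legacy v3 store defined above (keys and values are made up); note that every `put`/`remove` rewrites the whole JSON file.

sp.put("alter_cmd", {"help": "on"})
assert sp.get("alter_cmd") == {"help": "on"}
assert sp.get("missing_key", default={}) == {}
sp.remove("alter_cmd")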

View File

@@ -0,0 +1,497 @@
import sqlite3
import time
from dataclasses import dataclass
from typing import Any
from astrbot.core.db.po import Platform, Stats
@dataclass
class Conversation:
"""LLM 对话存储
对于网页聊天history 存储了包括指令、回复、图片等在内的所有消息。
对于其他平台的聊天,不存储非 LLM 的回复(因为考虑到已经存储在各自的平台上)。
"""
user_id: str
cid: str
history: str = ""
"""字符串格式的列表。"""
created_at: int = 0
updated_at: int = 0
title: str = ""
persona_id: str = ""
INIT_SQL = """
CREATE TABLE IF NOT EXISTS platform(
name VARCHAR(32),
count INTEGER,
timestamp INTEGER
);
CREATE TABLE IF NOT EXISTS llm(
name VARCHAR(32),
count INTEGER,
timestamp INTEGER
);
CREATE TABLE IF NOT EXISTS plugin(
name VARCHAR(32),
count INTEGER,
timestamp INTEGER
);
CREATE TABLE IF NOT EXISTS command(
name VARCHAR(32),
count INTEGER,
timestamp INTEGER
);
CREATE TABLE IF NOT EXISTS llm_history(
provider_type VARCHAR(32),
session_id VARCHAR(32),
content TEXT
);
-- ATRI
CREATE TABLE IF NOT EXISTS atri_vision(
id TEXT,
url_or_path TEXT,
caption TEXT,
is_meme BOOLEAN,
keywords TEXT,
platform_name VARCHAR(32),
session_id VARCHAR(32),
sender_nickname VARCHAR(32),
timestamp INTEGER
);
CREATE TABLE IF NOT EXISTS webchat_conversation(
user_id TEXT, -- 会话 id
cid TEXT, -- 对话 id
history TEXT,
created_at INTEGER,
updated_at INTEGER,
title TEXT,
persona_id TEXT
);
PRAGMA encoding = 'UTF-8';
"""
class SQLiteDatabase:
def __init__(self, db_path: str) -> None:
super().__init__()
self.db_path = db_path
sql = INIT_SQL
# 初始化数据库
self.conn = self._get_conn(self.db_path)
c = self.conn.cursor()
c.executescript(sql)
self.conn.commit()
# 检查 webchat_conversation 的 title 字段是否存在
c.execute(
"""
PRAGMA table_info(webchat_conversation)
""",
)
res = c.fetchall()
has_title = False
has_persona_id = False
for row in res:
if row[1] == "title":
has_title = True
if row[1] == "persona_id":
has_persona_id = True
if not has_title:
c.execute(
"""
ALTER TABLE webchat_conversation ADD COLUMN title TEXT;
""",
)
self.conn.commit()
if not has_persona_id:
c.execute(
"""
ALTER TABLE webchat_conversation ADD COLUMN persona_id TEXT;
""",
)
self.conn.commit()
c.close()
def _get_conn(self, db_path: str) -> sqlite3.Connection:
conn = sqlite3.connect(self.db_path)
conn.text_factory = str
return conn
def _exec_sql(self, sql: str, params: tuple = None):
conn = self.conn
try:
c = self.conn.cursor()
except sqlite3.ProgrammingError:
conn = self._get_conn(self.db_path)
c = conn.cursor()
if params:
c.execute(sql, params)
c.close()
else:
c.execute(sql)
c.close()
conn.commit()
def insert_platform_metrics(self, metrics: dict):
for k, v in metrics.items():
self._exec_sql(
"""
INSERT INTO platform(name, count, timestamp) VALUES (?, ?, ?)
""",
(k, v, int(time.time())),
)
def insert_llm_metrics(self, metrics: dict):
for k, v in metrics.items():
self._exec_sql(
"""
INSERT INTO llm(name, count, timestamp) VALUES (?, ?, ?)
""",
(k, v, int(time.time())),
)
def get_base_stats(self, offset_sec: int = 86400) -> Stats:
"""获取 offset_sec 秒前到现在的基础统计数据"""
where_clause = f" WHERE timestamp >= {int(time.time()) - offset_sec}"
try:
c = self.conn.cursor()
except sqlite3.ProgrammingError:
c = self._get_conn(self.db_path).cursor()
c.execute(
"""
SELECT * FROM platform
"""
+ where_clause,
)
platform = []
for row in c.fetchall():
platform.append(Platform(*row))
c.close()
return Stats(platform=platform)
def get_total_message_count(self) -> int:
try:
c = self.conn.cursor()
except sqlite3.ProgrammingError:
c = self._get_conn(self.db_path).cursor()
c.execute(
"""
SELECT SUM(count) FROM platform
""",
)
res = c.fetchone()
c.close()
return res[0]
def get_grouped_base_stats(self, offset_sec: int = 86400) -> Stats:
"""获取 offset_sec 秒前到现在的基础统计数据(合并)"""
where_clause = f" WHERE timestamp >= {int(time.time()) - offset_sec}"
try:
c = self.conn.cursor()
except sqlite3.ProgrammingError:
c = self._get_conn(self.db_path).cursor()
c.execute(
"""
SELECT name, SUM(count), timestamp FROM platform
"""
+ where_clause
+ " GROUP BY name",
)
platform = []
for row in c.fetchall():
platform.append(Platform(*row))
c.close()
return Stats(platform, [], [])
def get_conversation_by_user_id(self, user_id: str, cid: str) -> Conversation:
try:
c = self.conn.cursor()
except sqlite3.ProgrammingError:
c = self._get_conn(self.db_path).cursor()
c.execute(
"""
SELECT * FROM webchat_conversation WHERE user_id = ? AND cid = ?
""",
(user_id, cid),
)
res = c.fetchone()
c.close()
if not res:
return None
return Conversation(*res)
def new_conversation(self, user_id: str, cid: str):
history = "[]"
updated_at = int(time.time())
created_at = updated_at
self._exec_sql(
"""
INSERT INTO webchat_conversation(user_id, cid, history, updated_at, created_at) VALUES (?, ?, ?, ?, ?)
""",
(user_id, cid, history, updated_at, created_at),
)
def get_conversations(self, user_id: str) -> tuple:
try:
c = self.conn.cursor()
except sqlite3.ProgrammingError:
c = self._get_conn(self.db_path).cursor()
c.execute(
"""
SELECT cid, created_at, updated_at, title, persona_id FROM webchat_conversation WHERE user_id = ? ORDER BY updated_at DESC
""",
(user_id,),
)
res = c.fetchall()
c.close()
conversations = []
for row in res:
cid = row[0]
created_at = row[1]
updated_at = row[2]
title = row[3]
persona_id = row[4]
conversations.append(
Conversation("", cid, "[]", created_at, updated_at, title, persona_id),
)
return conversations
def update_conversation(self, user_id: str, cid: str, history: str):
"""更新对话,并且同时更新时间"""
updated_at = int(time.time())
self._exec_sql(
"""
UPDATE webchat_conversation SET history = ?, updated_at = ? WHERE user_id = ? AND cid = ?
""",
(history, updated_at, user_id, cid),
)
def update_conversation_title(self, user_id: str, cid: str, title: str):
self._exec_sql(
"""
UPDATE webchat_conversation SET title = ? WHERE user_id = ? AND cid = ?
""",
(title, user_id, cid),
)
def update_conversation_persona_id(self, user_id: str, cid: str, persona_id: str):
self._exec_sql(
"""
UPDATE webchat_conversation SET persona_id = ? WHERE user_id = ? AND cid = ?
""",
(persona_id, user_id, cid),
)
def delete_conversation(self, user_id: str, cid: str):
self._exec_sql(
"""
DELETE FROM webchat_conversation WHERE user_id = ? AND cid = ?
""",
(user_id, cid),
)
def get_all_conversations(
self,
page: int = 1,
page_size: int = 20,
) -> tuple[list[dict[str, Any]], int]:
"""获取所有对话,支持分页,按更新时间降序排序"""
try:
c = self.conn.cursor()
except sqlite3.ProgrammingError:
c = self._get_conn(self.db_path).cursor()
try:
# 获取总记录数
c.execute("""
SELECT COUNT(*) FROM webchat_conversation
""")
total_count = c.fetchone()[0]
# 计算偏移量
offset = (page - 1) * page_size
# 获取分页数据,按更新时间降序排序
c.execute(
"""
SELECT user_id, cid, created_at, updated_at, title, persona_id
FROM webchat_conversation
ORDER BY updated_at DESC
LIMIT ? OFFSET ?
""",
(page_size, offset),
)
rows = c.fetchall()
conversations = []
for row in rows:
user_id, cid, created_at, updated_at, title, persona_id = row
# 确保 cid 是字符串类型且至少有 8 个字符,否则使用一个默认值
safe_cid = str(cid) if cid else "unknown"
display_cid = safe_cid[:8] if len(safe_cid) >= 8 else safe_cid
conversations.append(
{
"user_id": user_id or "",
"cid": safe_cid,
"title": title or f"对话 {display_cid}",
"persona_id": persona_id or "",
"created_at": created_at or 0,
"updated_at": updated_at or 0,
},
)
return conversations, total_count
except Exception as _:
# 返回空列表和0确保即使出错也有有效的返回值
return [], 0
finally:
c.close()
def get_filtered_conversations(
self,
page: int = 1,
page_size: int = 20,
platforms: list[str] | None = None,
message_types: list[str] | None = None,
search_query: str | None = None,
exclude_ids: list[str] | None = None,
exclude_platforms: list[str] | None = None,
) -> tuple[list[dict[str, Any]], int]:
"""获取筛选后的对话列表"""
try:
c = self.conn.cursor()
except sqlite3.ProgrammingError:
c = self._get_conn(self.db_path).cursor()
try:
# 构建查询条件
where_clauses = []
params = []
# 平台筛选
if platforms and len(platforms) > 0:
platform_conditions = []
for platform in platforms:
platform_conditions.append("user_id LIKE ?")
params.append(f"{platform}:%")
if platform_conditions:
where_clauses.append(f"({' OR '.join(platform_conditions)})")
# 消息类型筛选
if message_types and len(message_types) > 0:
message_type_conditions = []
for msg_type in message_types:
message_type_conditions.append("user_id LIKE ?")
params.append(f"%:{msg_type}:%")
if message_type_conditions:
where_clauses.append(f"({' OR '.join(message_type_conditions)})")
# 搜索关键词
if search_query:
search_query = search_query.encode("unicode_escape").decode("utf-8")
where_clauses.append(
"(title LIKE ? OR user_id LIKE ? OR cid LIKE ? OR history LIKE ?)",
)
search_param = f"%{search_query}%"
params.extend([search_param, search_param, search_param, search_param])
# 排除特定用户ID
if exclude_ids and len(exclude_ids) > 0:
for exclude_id in exclude_ids:
where_clauses.append("user_id NOT LIKE ?")
params.append(f"{exclude_id}%")
# 排除特定平台
if exclude_platforms and len(exclude_platforms) > 0:
for exclude_platform in exclude_platforms:
where_clauses.append("user_id NOT LIKE ?")
params.append(f"{exclude_platform}:%")
# 构建完整的 WHERE 子句
where_sql = " WHERE " + " AND ".join(where_clauses) if where_clauses else ""
# 构建计数查询
count_sql = f"SELECT COUNT(*) FROM webchat_conversation{where_sql}"
# 获取总记录数
c.execute(count_sql, params)
total_count = c.fetchone()[0]
# 计算偏移量
offset = (page - 1) * page_size
# 构建分页数据查询
data_sql = f"""
SELECT user_id, cid, created_at, updated_at, title, persona_id
FROM webchat_conversation
{where_sql}
ORDER BY updated_at DESC
LIMIT ? OFFSET ?
"""
query_params = params + [page_size, offset]
# 获取分页数据
c.execute(data_sql, query_params)
rows = c.fetchall()
conversations = []
for row in rows:
user_id, cid, created_at, updated_at, title, persona_id = row
# 确保 cid 是字符串类型,否则使用一个默认值
safe_cid = str(cid) if cid else "unknown"
display_cid = safe_cid[:8] if len(safe_cid) >= 8 else safe_cid
conversations.append(
{
"user_id": user_id or "",
"cid": safe_cid,
"title": title or f"对话 {display_cid}",
"persona_id": persona_id or "",
"created_at": created_at or 0,
"updated_at": updated_at or 0,
},
)
return conversations, total_count
except Exception as _:
# 返回空列表和0确保即使出错也有有效的返回值
return [], 0
finally:
c.close()
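A hedged usage sketch of the LIKE-based filtering above; the path and filter values are made up, and each filter maps onto the `platform:message_type:session` layout of `user_id`.

db = SQLiteDatabase(db_path="data/data_v3.db")
convs, total = db.get_filtered_conversations(
    page=1,
    page_size=20,
    platforms=["aiocqhttp"],         # user_id LIKE 'aiocqhttp:%'
    message_types=["GroupMessage"],  # user_id LIKE '%:GroupMessage:%'
    exclude_platforms=["webchat"],   # user_id NOT LIKE 'webchat:%'
    search_query="weather",          # matches title / user_id / cid / history
)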

View File

@@ -1,7 +1,282 @@
"""指标数据"""
import uuid
from dataclasses import dataclass, field
from typing import List
from datetime import datetime, timezone
from typing import TypedDict
from sqlmodel import JSON, Field, SQLModel, Text, UniqueConstraint
class PlatformStat(SQLModel, table=True):
"""This class represents the statistics of bot usage across different platforms.
Note: In astrbot v4, we moved `platform` table to here.
"""
__tablename__ = "platform_stats" # type: ignore
id: int = Field(primary_key=True, sa_column_kwargs={"autoincrement": True})
timestamp: datetime = Field(nullable=False)
platform_id: str = Field(nullable=False)
platform_type: str = Field(nullable=False) # such as "aiocqhttp", "slack", etc.
count: int = Field(default=0, nullable=False)
__table_args__ = (
UniqueConstraint(
"timestamp",
"platform_id",
"platform_type",
name="uix_platform_stats",
),
)
class ConversationV2(SQLModel, table=True):
__tablename__ = "conversations" # type: ignore
inner_conversation_id: int = Field(
primary_key=True,
sa_column_kwargs={"autoincrement": True},
)
conversation_id: str = Field(
max_length=36,
nullable=False,
unique=True,
default_factory=lambda: str(uuid.uuid4()),
)
platform_id: str = Field(nullable=False)
user_id: str = Field(nullable=False)
content: list | None = Field(default=None, sa_type=JSON)
created_at: datetime = Field(default_factory=lambda: datetime.now(timezone.utc))
updated_at: datetime = Field(
default_factory=lambda: datetime.now(timezone.utc),
sa_column_kwargs={"onupdate": datetime.now(timezone.utc)},
)
title: str | None = Field(default=None, max_length=255)
persona_id: str | None = Field(default=None)
__table_args__ = (
UniqueConstraint(
"conversation_id",
name="uix_conversation_id",
),
)
class Persona(SQLModel, table=True):
"""Persona is a set of instructions for LLMs to follow.
It can be used to customize the behavior of LLMs.
"""
__tablename__ = "personas" # type: ignore
id: int | None = Field(
primary_key=True,
sa_column_kwargs={"autoincrement": True},
default=None,
)
persona_id: str = Field(max_length=255, nullable=False)
system_prompt: str = Field(sa_type=Text, nullable=False)
begin_dialogs: list | None = Field(default=None, sa_type=JSON)
"""a list of strings, each representing a dialog to start with"""
tools: list | None = Field(default=None, sa_type=JSON)
"""None means use ALL tools for default, empty list means no tools, otherwise a list of tool names."""
created_at: datetime = Field(default_factory=lambda: datetime.now(timezone.utc))
updated_at: datetime = Field(
default_factory=lambda: datetime.now(timezone.utc),
sa_column_kwargs={"onupdate": datetime.now(timezone.utc)},
)
__table_args__ = (
UniqueConstraint(
"persona_id",
name="uix_persona_id",
),
)
class Preference(SQLModel, table=True):
"""This class represents preferences for bots."""
__tablename__ = "preferences" # type: ignore
id: int | None = Field(
default=None,
primary_key=True,
sa_column_kwargs={"autoincrement": True},
)
scope: str = Field(nullable=False)
"""Scope of the preference, such as 'global', 'umo', 'plugin'."""
scope_id: str = Field(nullable=False)
"""ID of the scope, such as 'global', 'umo', 'plugin_name'."""
key: str = Field(nullable=False)
value: dict = Field(sa_type=JSON, nullable=False)
created_at: datetime = Field(default_factory=lambda: datetime.now(timezone.utc))
updated_at: datetime = Field(
default_factory=lambda: datetime.now(timezone.utc),
sa_column_kwargs={"onupdate": datetime.now(timezone.utc)},
)
__table_args__ = (
UniqueConstraint(
"scope",
"scope_id",
"key",
name="uix_preference_scope_scope_id_key",
),
)
class PlatformMessageHistory(SQLModel, table=True):
"""This class represents the message history for a specific platform.
It is used to store messages that are not LLM-generated, such as user messages
or platform-specific messages.
"""
__tablename__ = "platform_message_history" # type: ignore
id: int | None = Field(
primary_key=True,
sa_column_kwargs={"autoincrement": True},
default=None,
)
platform_id: str = Field(nullable=False)
user_id: str = Field(nullable=False) # An id of group, user in platform
sender_id: str | None = Field(default=None) # ID of the sender in the platform
sender_name: str | None = Field(
default=None,
) # Name of the sender in the platform
content: dict = Field(sa_type=JSON, nullable=False) # a message chain list
created_at: datetime = Field(default_factory=lambda: datetime.now(timezone.utc))
updated_at: datetime = Field(
default_factory=lambda: datetime.now(timezone.utc),
sa_column_kwargs={"onupdate": datetime.now(timezone.utc)},
)
class PlatformSession(SQLModel, table=True):
"""Platform session table for managing user sessions across different platforms.
A session represents a chat window for a specific user on a specific platform.
Each session can have multiple conversations (对话) associated with it.
"""
__tablename__ = "platform_sessions" # type: ignore
inner_id: int | None = Field(
primary_key=True,
sa_column_kwargs={"autoincrement": True},
default=None,
)
session_id: str = Field(
max_length=100,
nullable=False,
unique=True,
default_factory=lambda: str(uuid.uuid4()),
)
platform_id: str = Field(default="webchat", nullable=False)
"""Platform identifier (e.g., 'webchat', 'qq', 'discord')"""
creator: str = Field(nullable=False)
"""Username of the session creator"""
display_name: str | None = Field(default=None, max_length=255)
"""Display name for the session"""
is_group: int = Field(default=0, nullable=False)
"""0 for private chat, 1 for group chat (not implemented yet)"""
created_at: datetime = Field(default_factory=lambda: datetime.now(timezone.utc))
updated_at: datetime = Field(
default_factory=lambda: datetime.now(timezone.utc),
sa_column_kwargs={"onupdate": datetime.now(timezone.utc)},
)
__table_args__ = (
UniqueConstraint(
"session_id",
name="uix_platform_session_id",
),
)
class Attachment(SQLModel, table=True):
"""This class represents attachments for messages in AstrBot.
Attachments can be images, files, or other media types.
"""
__tablename__ = "attachments" # type: ignore
inner_attachment_id: int | None = Field(
primary_key=True,
sa_column_kwargs={"autoincrement": True},
default=None,
)
attachment_id: str = Field(
max_length=36,
nullable=False,
unique=True,
default_factory=lambda: str(uuid.uuid4()),
)
path: str = Field(nullable=False) # Path to the file on disk
type: str = Field(nullable=False) # Type of the file (e.g., 'image', 'file')
mime_type: str = Field(nullable=False) # MIME type of the file
created_at: datetime = Field(default_factory=lambda: datetime.now(timezone.utc))
updated_at: datetime = Field(
default_factory=lambda: datetime.now(timezone.utc),
sa_column_kwargs={"onupdate": datetime.now(timezone.utc)},
)
__table_args__ = (
UniqueConstraint(
"attachment_id",
name="uix_attachment_id",
),
)
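# Hedged usage sketch (illustrative, not part of this module): reading recent
# PlatformStat rows through an async session. The engine URL and helper name
# are assumptions.
from datetime import datetime, timedelta, timezone
from sqlalchemy.ext.asyncio import AsyncSession, create_async_engine
from sqlmodel import select

async def recent_platform_stats(offset_sec: int = 86400) -> list["PlatformStat"]:
    engine = create_async_engine("sqlite+aiosqlite:///data/data_v4.db", future=True)
    since = datetime.now(timezone.utc) - timedelta(seconds=offset_sec)
    async with AsyncSession(engine, expire_on_commit=False) as session:
        result = await session.execute(
            select(PlatformStat).where(PlatformStat.timestamp >= since)
        )
        return list(result.scalars().all())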
@dataclass
class Conversation:
"""LLM 对话类
对于 WebChathistory 存储了包括指令、回复、图片等在内的所有消息。
对于其他平台的聊天,不存储非 LLM 的回复(因为考虑到已经存储在各自的平台上)。
在 v4.0.0 版本及之后WebChat 的历史记录被迁移至 `PlatformMessageHistory` 表中,
"""
platform_id: str
user_id: str
cid: str
"""对话 ID, 是 uuid 格式的字符串"""
history: str = ""
"""字符串格式的对话列表。"""
title: str | None = ""
persona_id: str | None = ""
created_at: int = 0
updated_at: int = 0
class Personality(TypedDict):
"""LLM 人格类。
在 v4.0.0 版本及之后,推荐使用上面的 Persona 类。并且, mood_imitation_dialogs 字段已被废弃。
"""
prompt: str = ""
name: str = ""
begin_dialogs: list[str] = []
mood_imitation_dialogs: list[str] = []
"""情感模拟对话预设。在 v4.0.0 版本及之后,已被废弃。"""
tools: list[str] | None = None
"""工具列表。None 表示使用所有工具,空列表表示不使用任何工具"""
# cache
_begin_dialogs_processed: list[dict] = []
_mood_imitation_dialogs_processed: str = ""
# ====
# Deprecated, and will be removed in future versions.
# ====
@dataclass
@@ -13,77 +288,6 @@ class Platform:
timestamp: int
@dataclass
class Provider:
"""供应商使用统计数据"""
name: str
count: int
timestamp: int
@dataclass
class Plugin:
"""插件使用统计数据"""
name: str
count: int
timestamp: int
@dataclass
class Command:
"""命令使用统计数据"""
name: str
count: int
timestamp: int
@dataclass
class Stats:
platform: List[Platform] = field(default_factory=list)
command: List[Command] = field(default_factory=list)
llm: List[Provider] = field(default_factory=list)
@dataclass
class LLMHistory:
"""LLM 聊天时持久化的信息"""
provider_type: str
session_id: str
content: str
@dataclass
class ATRIVision:
"""Deprecated"""
id: str
url_or_path: str
caption: str
is_meme: bool
keywords: List[str]
platform_name: str
session_id: str
sender_nickname: str
timestamp: int = -1
@dataclass
class Conversation:
"""LLM 对话存储
对于网页聊天history 存储了包括指令、回复、图片等在内的所有消息。
对于其他平台的聊天,不存储非 LLM 的回复(因为考虑到已经存储在各自的平台上)。
"""
user_id: str
cid: str
history: str = ""
"""字符串格式的列表。"""
created_at: int = 0
updated_at: int = 0
title: str = ""
persona_id: str = ""
platform: list[Platform] = field(default_factory=list)

File diff suppressed because it is too large

View File

@@ -1,50 +0,0 @@
CREATE TABLE IF NOT EXISTS platform(
name VARCHAR(32),
count INTEGER,
timestamp INTEGER
);
CREATE TABLE IF NOT EXISTS llm(
name VARCHAR(32),
count INTEGER,
timestamp INTEGER
);
CREATE TABLE IF NOT EXISTS plugin(
name VARCHAR(32),
count INTEGER,
timestamp INTEGER
);
CREATE TABLE IF NOT EXISTS command(
name VARCHAR(32),
count INTEGER,
timestamp INTEGER
);
CREATE TABLE IF NOT EXISTS llm_history(
provider_type VARCHAR(32),
session_id VARCHAR(32),
content TEXT
);
-- ATRI
CREATE TABLE IF NOT EXISTS atri_vision(
id TEXT,
url_or_path TEXT,
caption TEXT,
is_meme BOOLEAN,
keywords TEXT,
platform_name VARCHAR(32),
session_id VARCHAR(32),
sender_nickname VARCHAR(32),
timestamp INTEGER
);
CREATE TABLE IF NOT EXISTS webchat_conversation(
user_id TEXT, -- 会话 id
cid TEXT, -- 对话 id
history TEXT,
created_at INTEGER,
updated_at INTEGER,
title TEXT,
persona_id TEXT
);
PRAGMA encoding = 'UTF-8';

View File

@@ -10,22 +10,47 @@ class Result:
class BaseVecDB:
async def initialize(self):
"""
初始化向量数据库
"""
pass
"""初始化向量数据库"""
@abc.abstractmethod
async def insert(self, content: str, metadata: dict = None, id: str = None) -> int:
"""
插入一条文本和其对应向量,自动生成 ID 并保持一致性。
async def insert(
self,
content: str,
metadata: dict | None = None,
id: str | None = None,
) -> int:
"""插入一条文本和其对应向量,自动生成 ID 并保持一致性。"""
...
@abc.abstractmethod
async def insert_batch(
self,
contents: list[str],
metadatas: list[dict] | None = None,
ids: list[str] | None = None,
batch_size: int = 32,
tasks_limit: int = 3,
max_retries: int = 3,
progress_callback=None,
) -> int:
"""批量插入文本和其对应向量,自动生成 ID 并保持一致性。
Args:
progress_callback: 进度回调函数,接收参数 (current, total)
"""
...
@abc.abstractmethod
async def retrieve(self, query: str, top_k: int = 5) -> list[Result]:
"""
搜索最相似的文档。
async def retrieve(
self,
query: str,
top_k: int = 5,
fetch_k: int = 20,
rerank: bool = False,
metadata_filters: dict | None = None,
) -> list[Result]:
"""搜索最相似的文档。
Args:
query (str): 查询文本
top_k (int): 返回的最相似文档的数量
@@ -36,11 +61,13 @@ class BaseVecDB:
@abc.abstractmethod
async def delete(self, doc_id: str) -> bool:
"""
删除指定文档。
"""删除指定文档。
Args:
doc_id (str): 要删除的文档 ID
Returns:
bool: 删除是否成功
"""
...
@abc.abstractmethod
async def close(self): ...
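A hedged usage sketch of the retrieval interface above; the filter key is made up, and the fetch-then-rerank-then-cut reading of the parameters is an assumption based on their names.

async def search_kb(db: BaseVecDB, question: str) -> list[Result]:
    # Presumably fetch_k candidates are pulled first, optionally reranked,
    # then trimmed to the top_k results.
    return await db.retrieve(
        query=question,
        top_k=5,
        fetch_k=20,
        rerank=False,
        metadata_filters={"user_id": "alice"},
    )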

View File

@@ -1,59 +1,232 @@
import aiosqlite
import json
import os
from contextlib import asynccontextmanager
from datetime import datetime
from sqlalchemy import Column, Text
from sqlalchemy.ext.asyncio import AsyncEngine, AsyncSession, create_async_engine
from sqlalchemy.orm import sessionmaker
from sqlmodel import Field, MetaData, SQLModel, col, func, select, text
from astrbot.core import logger
class BaseDocModel(SQLModel, table=False):
metadata = MetaData()
class Document(BaseDocModel, table=True):
"""SQLModel for documents table."""
__tablename__ = "documents" # type: ignore
id: int | None = Field(
default=None,
primary_key=True,
sa_column_kwargs={"autoincrement": True},
)
doc_id: str = Field(nullable=False)
text: str = Field(nullable=False)
metadata_: str | None = Field(default=None, sa_column=Column("metadata", Text))
created_at: datetime | None = Field(default=None)
updated_at: datetime | None = Field(default=None)
class DocumentStorage:
def __init__(self, db_path: str):
self.db_path = db_path
self.connection = None
self.DATABASE_URL = f"sqlite+aiosqlite:///{db_path}"
self.engine: AsyncEngine | None = None
self.async_session_maker: sessionmaker | None = None
self.sqlite_init_path = os.path.join(
os.path.dirname(__file__), "sqlite_init.sql"
os.path.dirname(__file__),
"sqlite_init.sql",
)
async def initialize(self):
"""Initialize the SQLite database and create the documents table if it doesn't exist."""
if not os.path.exists(self.db_path):
await self.connect()
async with self.connection.cursor() as cursor:
with open(self.sqlite_init_path, "r", encoding="utf-8") as f:
sql_script = f.read()
await cursor.executescript(sql_script)
await self.connection.commit()
else:
await self.connect()
async with self.engine.begin() as conn: # type: ignore
# Create tables using SQLModel
await conn.run_sync(BaseDocModel.metadata.create_all)
try:
await conn.execute(
text(
"ALTER TABLE documents ADD COLUMN kb_doc_id TEXT "
"GENERATED ALWAYS AS (json_extract(metadata, '$.kb_doc_id')) STORED",
),
)
await conn.execute(
text(
"ALTER TABLE documents ADD COLUMN user_id TEXT "
"GENERATED ALWAYS AS (json_extract(metadata, '$.user_id')) STORED",
),
)
# Create indexes
await conn.execute(
text(
"CREATE INDEX IF NOT EXISTS idx_documents_kb_doc_id ON documents(kb_doc_id)",
),
)
await conn.execute(
text(
"CREATE INDEX IF NOT EXISTS idx_documents_user_id ON documents(user_id)",
),
)
except BaseException:
pass
await conn.commit()
async def connect(self):
"""Connect to the SQLite database."""
self.connection = await aiosqlite.connect(self.db_path)
if self.engine is None:
self.engine = create_async_engine(
self.DATABASE_URL,
echo=False,
future=True,
)
self.async_session_maker = sessionmaker(
self.engine, # type: ignore
class_=AsyncSession,
expire_on_commit=False,
) # type: ignore
async def get_documents(self, metadata_filters: dict, ids: list = None):
@asynccontextmanager
async def get_session(self):
"""Context manager for database sessions."""
async with self.async_session_maker() as session: # type: ignore
yield session
async def get_documents(
self,
metadata_filters: dict,
ids: list | None = None,
offset: int | None = 0,
limit: int | None = 100,
) -> list[dict]:
"""Retrieve documents by metadata filters and ids.
Args:
metadata_filters (dict): The metadata filters to apply.
ids (list | None): Optional list of document IDs to filter.
offset (int | None): Offset for pagination.
limit (int | None): Limit for pagination.
Returns:
list: The list of document IDs(primary key, not doc_id) that match the filters.
"""
# metadata filter -> SQL WHERE clause
where_clauses = []
values = []
for key, val in metadata_filters.items():
where_clauses.append(f"json_extract(metadata, '$.{key}') = ?")
values.append(val)
if ids is not None and len(ids) > 0:
ids = [str(i) for i in ids if i != -1]
where_clauses.append("id IN ({})".format(",".join("?" * len(ids))))
values.extend(ids)
where_sql = " AND ".join(where_clauses) or "1=1"
list: The list of documents that match the filters.
result = []
async with self.connection.cursor() as cursor:
sql = "SELECT * FROM documents WHERE " + where_sql
await cursor.execute(sql, values)
for row in await cursor.fetchall():
result.append(await self.tuple_to_dict(row))
return result
"""
if self.engine is None:
logger.warning(
"Database connection is not initialized, returning empty result",
)
return []
async with self.get_session() as session:
query = select(Document)
for key, val in metadata_filters.items():
query = query.where(
text(f"json_extract(metadata, '$.{key}') = :filter_{key}"),
).params(**{f"filter_{key}": val})
if ids is not None and len(ids) > 0:
valid_ids = [int(i) for i in ids if i != -1]
if valid_ids:
query = query.where(col(Document.id).in_(valid_ids))
if limit is not None:
query = query.limit(limit)
if offset is not None:
query = query.offset(offset)
result = await session.execute(query)
documents = result.scalars().all()
return [self._document_to_dict(doc) for doc in documents]
async def insert_document(self, doc_id: str, text: str, metadata: dict) -> int:
"""Insert a single document and return its integer ID.
Args:
doc_id (str): The document ID (UUID string).
text (str): The document text.
metadata (dict): The document metadata.
Returns:
int: The integer ID of the inserted document.
"""
assert self.engine is not None, "Database connection is not initialized."
async with self.get_session() as session, session.begin():
document = Document(
doc_id=doc_id,
text=text,
metadata_=json.dumps(metadata),
created_at=datetime.now(),
updated_at=datetime.now(),
)
session.add(document)
await session.flush() # Flush to get the ID
return document.id # type: ignore
async def insert_documents_batch(
self,
doc_ids: list[str],
texts: list[str],
metadatas: list[dict],
) -> list[int]:
"""Batch insert documents and return their integer IDs.
Args:
doc_ids (list[str]): List of document IDs (UUID strings).
texts (list[str]): List of document texts.
metadatas (list[dict]): List of document metadata.
Returns:
list[int]: List of integer IDs of the inserted documents.
"""
assert self.engine is not None, "Database connection is not initialized."
async with self.get_session() as session, session.begin():
import json
documents = []
for doc_id, text, metadata in zip(doc_ids, texts, metadatas):
document = Document(
doc_id=doc_id,
text=text,
metadata_=json.dumps(metadata),
created_at=datetime.now(),
updated_at=datetime.now(),
)
documents.append(document)
session.add(document)
await session.flush() # Flush to get all IDs
return [doc.id for doc in documents] # type: ignore
async def delete_document_by_doc_id(self, doc_id: str):
"""Delete a document by its doc_id.
Args:
doc_id (str): The doc_id of the document to delete.
"""
assert self.engine is not None, "Database connection is not initialized."
async with self.get_session() as session, session.begin():
query = select(Document).where(col(Document.doc_id) == doc_id)
result = await session.execute(query)
document = result.scalar_one_or_none()
if document:
await session.delete(document)
async def get_document_by_doc_id(self, doc_id: str):
"""Retrieve a document by its doc_id.
@@ -62,40 +235,134 @@ class DocumentStorage:
doc_id (str): The doc_id of the document to retrieve.
Returns:
dict: The document data.
dict: The document data or None if not found.
"""
async with self.connection.cursor() as cursor:
await cursor.execute("SELECT * FROM documents WHERE doc_id = ?", (doc_id,))
row = await cursor.fetchone()
if row:
return await self.tuple_to_dict(row)
else:
assert self.engine is not None, "Database connection is not initialized."
async with self.get_session() as session:
query = select(Document).where(col(Document.doc_id) == doc_id)
result = await session.execute(query)
document = result.scalar_one_or_none()
if document:
return self._document_to_dict(document)
return None
async def update_document_by_doc_id(self, doc_id: str, new_text: str):
"""Retrieve a document by its doc_id.
"""Update a document by its doc_id.
Args:
doc_id (str): The doc_id.
new_text (str): The new text to update the document with.
"""
async with self.connection.cursor() as cursor:
await cursor.execute(
"UPDATE documents SET text = ? WHERE doc_id = ?", (new_text, doc_id)
assert self.engine is not None, "Database connection is not initialized."
async with self.get_session() as session, session.begin():
query = select(Document).where(col(Document.doc_id) == doc_id)
result = await session.execute(query)
document = result.scalar_one_or_none()
if document:
document.text = new_text
document.updated_at = datetime.now()
session.add(document)
async def delete_documents(self, metadata_filters: dict):
"""Delete documents by their metadata filters.
Args:
metadata_filters (dict): The metadata filters to apply.
"""
if self.engine is None:
logger.warning(
"Database connection is not initialized, skipping delete operation",
)
await self.connection.commit()
return
async with self.get_session() as session, session.begin():
query = select(Document)
for key, val in metadata_filters.items():
query = query.where(
text(f"json_extract(metadata, '$.{key}') = :filter_{key}"),
).params(**{f"filter_{key}": val})
result = await session.execute(query)
documents = result.scalars().all()
for doc in documents:
await session.delete(doc)
async def count_documents(self, metadata_filters: dict | None = None) -> int:
"""Count documents in the database.
Args:
metadata_filters (dict | None): Metadata filters to apply.
Returns:
int: The count of documents.
"""
if self.engine is None:
logger.warning("Database connection is not initialized, returning 0")
return 0
async with self.get_session() as session:
query = select(func.count(col(Document.id)))
if metadata_filters:
for key, val in metadata_filters.items():
query = query.where(
text(f"json_extract(metadata, '$.{key}') = :filter_{key}"),
).params(**{f"filter_{key}": val})
result = await session.execute(query)
count = result.scalar_one_or_none()
return count if count is not None else 0
async def get_user_ids(self) -> list[str]:
"""Retrieve all user IDs from the documents table.
Returns:
list: A list of user IDs.
"""
async with self.connection.cursor() as cursor:
await cursor.execute("SELECT DISTINCT user_id FROM documents")
rows = await cursor.fetchall()
assert self.engine is not None, "Database connection is not initialized."
async with self.get_session() as session:
query = text(
"SELECT DISTINCT user_id FROM documents WHERE user_id IS NOT NULL",
)
result = await session.execute(query)
rows = result.fetchall()
return [row[0] for row in rows]
def _document_to_dict(self, document: Document) -> dict:
"""Convert a Document model to a dictionary.
Args:
document (Document): The document to convert.
Returns:
dict: The converted dictionary.
"""
return {
"id": document.id,
"doc_id": document.doc_id,
"text": document.text,
"metadata": document.metadata_,
"created_at": document.created_at.isoformat()
if isinstance(document.created_at, datetime)
else document.created_at,
"updated_at": document.updated_at.isoformat()
if isinstance(document.updated_at, datetime)
else document.updated_at,
}
async def tuple_to_dict(self, row):
"""Convert a tuple to a dictionary.
@@ -104,6 +371,9 @@ class DocumentStorage:
Returns:
dict: The converted dictionary.
Note: This method is kept for backward compatibility but is no longer used internally.
"""
return {
"id": row[0],
@@ -116,6 +386,7 @@ class DocumentStorage:
async def close(self):
"""Close the connection to the SQLite database."""
if self.connection:
await self.connection.close()
self.connection = None
if self.engine:
await self.engine.dispose()
self.engine = None
self.async_session_maker = None
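A minimal usage sketch of the DocumentStorage API above (the database path, doc_id and metadata values are illustrative; assumes the class as shown, with initialize() and close() available):

import asyncio
import uuid

async def demo():
    storage = DocumentStorage("data/demo_doc.db")  # illustrative path
    await storage.initialize()
    int_id = await storage.insert_document(
        doc_id=str(uuid.uuid4()),
        text="hello world",
        metadata={"kb_id": "kb-1", "kb_doc_id": "doc-1", "chunk_index": 0},
    )
    # Each metadata filter key becomes a json_extract(metadata, '$.key') = :param clause.
    docs = await storage.get_documents(metadata_filters={"kb_id": "kb-1"}, limit=10)
    print(int_id, [d["doc_id"] for d in docs])
    await storage.close()

asyncio.run(demo())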

View File

@@ -2,14 +2,15 @@ try:
import faiss
except ModuleNotFoundError:
raise ImportError(
"faiss 未安装。请使用 'pip install faiss-cpu''pip install faiss-gpu' 安装。"
"faiss 未安装。请使用 'pip install faiss-cpu''pip install faiss-gpu' 安装。",
)
import os
import numpy as np
class EmbeddingStorage:
def __init__(self, dimension: int, path: str = None):
def __init__(self, dimension: int, path: str | None = None):
self.dimension = dimension
self.path = path
self.index = None
@@ -18,7 +19,6 @@ class EmbeddingStorage:
else:
base_index = faiss.IndexFlatL2(dimension)
self.index = faiss.IndexIDMap(base_index)
self.storage = {}
async def insert(self, vector: np.ndarray, id: int):
"""插入向量
@@ -28,13 +28,32 @@ class EmbeddingStorage:
id (int): 向量的ID
Raises:
ValueError: 如果向量的维度与存储的维度不匹配
"""
assert self.index is not None, "FAISS index is not initialized."
if vector.shape[0] != self.dimension:
raise ValueError(
f"向量维度不匹配, 期望: {self.dimension}, 实际: {vector.shape[0]}"
f"向量维度不匹配, 期望: {self.dimension}, 实际: {vector.shape[0]}",
)
self.index.add_with_ids(vector.reshape(1, -1), np.array([id]))
self.storage[id] = vector
await self.save_index()
async def insert_batch(self, vectors: np.ndarray, ids: list[int]):
"""批量插入向量
Args:
vectors (np.ndarray): 要插入的向量数组
ids (list[int]): 向量的ID列表
Raises:
ValueError: 如果向量的维度与存储的维度不匹配
"""
assert self.index is not None, "FAISS index is not initialized."
if vectors.shape[1] != self.dimension:
raise ValueError(
f"向量维度不匹配, 期望: {self.dimension}, 实际: {vectors.shape[1]}",
)
self.index.add_with_ids(vectors, np.array(ids))
await self.save_index()
async def search(self, vector: np.ndarray, k: int) -> tuple:
@@ -45,15 +64,30 @@ class EmbeddingStorage:
k (int): 返回的最相似向量的数量
Returns:
tuple: (距离, 索引)
"""
assert self.index is not None, "FAISS index is not initialized."
faiss.normalize_L2(vector)
distances, indices = self.index.search(vector, k)
return distances, indices
async def delete(self, ids: list[int]):
"""删除向量
Args:
ids (list[int]): 要删除的向量ID列表
"""
assert self.index is not None, "FAISS index is not initialized."
id_array = np.array(ids, dtype=np.int64)
self.index.remove_ids(id_array)
await self.save_index()
async def save_index(self):
"""保存索引
Args:
path (str): 保存索引的路径
"""
faiss.write_index(self.index, self.path)
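The IndexIDMap wrapper above is what lets vectors be added, searched and removed by the integer IDs assigned in DocumentStorage. A self-contained FAISS sketch of the same add_with_ids / search / remove_ids flow (dimensions and IDs are illustrative, not the project's values):

import faiss
import numpy as np

dim = 4
index = faiss.IndexIDMap(faiss.IndexFlatL2(dim))

vecs = np.random.rand(3, dim).astype("float32")
ids = np.array([101, 102, 103], dtype=np.int64)
index.add_with_ids(vecs, ids)

distances, indices = index.search(vecs[:1], k=2)  # squared L2 distances, mapped IDs
print(distances, indices)                          # indices[0][0] == 101

index.remove_ids(np.array([102], dtype=np.int64))
print(index.ntotal)                                # 2 vectors remain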

View File

@@ -1,89 +1,143 @@
import time
import uuid
import json
import numpy as np
from astrbot import logger
from astrbot.core.provider.provider import EmbeddingProvider, RerankProvider
from ..base import BaseVecDB, Result
from .document_storage import DocumentStorage
from .embedding_storage import EmbeddingStorage
from ..base import Result, BaseVecDB
from astrbot.core.provider.provider import EmbeddingProvider
class FaissVecDB(BaseVecDB):
"""
A class to represent a vector database.
"""
"""A class to represent a vector database."""
def __init__(
self,
doc_store_path: str,
index_store_path: str,
embedding_provider: EmbeddingProvider,
rerank_provider: RerankProvider | None = None,
):
self.doc_store_path = doc_store_path
self.index_store_path = index_store_path
self.embedding_provider = embedding_provider
self.document_storage = DocumentStorage(doc_store_path)
self.embedding_storage = EmbeddingStorage(
embedding_provider.get_dim(), index_store_path
embedding_provider.get_dim(),
index_store_path,
)
self.embedding_provider = embedding_provider
self.rerank_provider = rerank_provider
async def initialize(self):
await self.document_storage.initialize()
async def insert(self, content: str, metadata: dict = None, id: str = None) -> int:
"""
插入一条文本和其对应向量,自动生成 ID 并保持一致性。
"""
async def insert(
self,
content: str,
metadata: dict | None = None,
id: str | None = None,
) -> int:
"""插入一条文本和其对应向量,自动生成 ID 并保持一致性。"""
metadata = metadata or {}
str_id = id or str(uuid.uuid4()) # 使用 UUID 作为原始 ID
vector = await self.embedding_provider.get_embedding(content)
vector = np.array(vector, dtype=np.float32)
async with self.document_storage.connection.cursor() as cursor:
await cursor.execute(
"INSERT INTO documents (doc_id, text, metadata) VALUES (?, ?, ?)",
(str_id, content, json.dumps(metadata)),
)
await self.document_storage.connection.commit()
result = await self.document_storage.get_document_by_doc_id(str_id)
int_id = result["id"]
# 使用 DocumentStorage 的方法插入文档
int_id = await self.document_storage.insert_document(str_id, content, metadata)
# 插入向量到 FAISS
await self.embedding_storage.insert(vector, int_id)
return int_id
async def retrieve(
self, query: str, k: int = 5, fetch_k: int = 20, metadata_filters: dict = None
) -> list[Result]:
async def insert_batch(
self,
contents: list[str],
metadatas: list[dict] | None = None,
ids: list[str] | None = None,
batch_size: int = 32,
tasks_limit: int = 3,
max_retries: int = 3,
progress_callback=None,
) -> list[int]:
"""批量插入文本和其对应向量,自动生成 ID 并保持一致性。
Args:
progress_callback: 进度回调函数,接收参数 (current, total)
"""
搜索最相似的文档。
metadatas = metadatas or [{} for _ in contents]
ids = ids or [str(uuid.uuid4()) for _ in contents]
start = time.time()
logger.debug(f"Generating embeddings for {len(contents)} contents...")
vectors = await self.embedding_provider.get_embeddings_batch(
contents,
batch_size=batch_size,
tasks_limit=tasks_limit,
max_retries=max_retries,
progress_callback=progress_callback,
)
end = time.time()
logger.debug(
f"Generated embeddings for {len(contents)} contents in {end - start:.2f} seconds.",
)
# 使用 DocumentStorage 的批量插入方法
int_ids = await self.document_storage.insert_documents_batch(
ids,
contents,
metadatas,
)
# 批量插入向量到 FAISS
vectors_array = np.array(vectors).astype("float32")
await self.embedding_storage.insert_batch(vectors_array, int_ids)
return int_ids
async def retrieve(
self,
query: str,
k: int = 5,
fetch_k: int = 20,
rerank: bool = False,
metadata_filters: dict | None = None,
) -> list[Result]:
"""搜索最相似的文档。
Args:
query (str): 查询文本
k (int): 返回的最相似文档的数量
fetch_k (int): 在根据 metadata 过滤前从 FAISS 中获取的数量
rerank (bool): 是否使用重排序。这需要在实例化时提供 rerank_provider, 如果未提供并且 rerank 为 True, 不会抛出异常。
metadata_filters (dict): 元数据过滤器
Returns:
List[Result]: 查询结果
"""
embedding = await self.embedding_provider.get_embedding(query)
scores, indices = await self.embedding_storage.search(
vector=np.array([embedding]).astype("float32"),
k=fetch_k if metadata_filters else k,
)
# TODO: rerank
if len(indices[0]) == 0 or indices[0][0] == -1:
return []
# normalize scores
scores[0] = 1.0 - (scores[0] / 2.0)
# NOTE: maybe the size is less than k.
fetched_docs = await self.document_storage.get_documents(
metadata_filters=metadata_filters or {}, ids=indices[0]
metadata_filters=metadata_filters or {},
ids=indices[0],
)
if not fetched_docs:
return []
result_docs = []
result_docs: list[Result] = []
idx_pos = {fetch_doc["id"]: idx for idx, fetch_doc in enumerate(fetched_docs)}
for i, indice_idx in enumerate(indices[0]):
@@ -93,25 +147,58 @@ class FaissVecDB(BaseVecDB):
fetch_doc = fetched_docs[pos]
score = scores[0][i]
result_docs.append(Result(similarity=float(score), data=fetch_doc))
return result_docs[:k]
async def delete(self, doc_id: int):
"""
删除一条文档
"""
await self.document_storage.connection.execute(
"DELETE FROM documents WHERE doc_id = ?", (doc_id,)
top_k_results = result_docs[:k]
if rerank and self.rerank_provider:
documents = [doc.data["text"] for doc in top_k_results]
reranked_results = await self.rerank_provider.rerank(query, documents)
reranked_results = sorted(
reranked_results,
key=lambda x: x.relevance_score,
reverse=True,
)
await self.document_storage.connection.commit()
top_k_results = [
top_k_results[reranked_result.index]
for reranked_result in reranked_results
]
return top_k_results
async def delete(self, doc_id: str):
"""删除一条文档块chunk"""
# 获得对应的 int id
result = await self.document_storage.get_document_by_doc_id(doc_id)
int_id = result["id"] if result else None
if int_id is None:
return
# 使用 DocumentStorage 的删除方法
await self.document_storage.delete_document_by_doc_id(doc_id)
await self.embedding_storage.delete([int_id])
async def close(self):
await self.document_storage.close()
async def count_documents(self) -> int:
async def count_documents(self, metadata_filter: dict | None = None) -> int:
"""计算文档数量
Args:
metadata_filter (dict | None): 元数据过滤器
"""
计算文档数量
"""
async with self.document_storage.connection.cursor() as cursor:
await cursor.execute("SELECT COUNT(*) FROM documents")
count = await cursor.fetchone()
return count[0] if count else 0
count = await self.document_storage.count_documents(
metadata_filters=metadata_filter or {},
)
return count
async def delete_documents(self, metadata_filters: dict):
"""根据元数据过滤器删除文档"""
docs = await self.document_storage.get_documents(
metadata_filters=metadata_filters,
offset=None,
limit=None,
)
doc_ids: list[int] = [doc["id"] for doc in docs]
await self.embedding_storage.delete(doc_ids)
await self.document_storage.delete_documents(metadata_filters=metadata_filters)
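The score normalization in retrieve() (scores[0] = 1.0 - scores[0] / 2.0) converts FAISS's squared L2 distance into a cosine-style similarity: for unit vectors a and b, ||a - b||^2 = 2 - 2·(a·b), so 1 - d/2 equals the dot product. The query is normalized by faiss.normalize_L2 in EmbeddingStorage.search; the mapping assumes the stored embeddings are unit-normalized as well (the insert path adds them as-is). A quick numeric check of the identity:

import numpy as np

a = np.random.rand(8); a /= np.linalg.norm(a)
b = np.random.rand(8); b /= np.linalg.norm(b)

d = float(np.sum((a - b) ** 2))   # what IndexFlatL2 would report
cos = float(a @ b)                # cosine similarity of unit vectors
print(round(1.0 - d / 2.0, 6), round(cos, 6))  # the two values agree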

View File

@@ -1,5 +1,4 @@
"""
事件总线, 用于处理事件的分发和处理
"""事件总线, 用于处理事件的分发和处理
事件总线是一个异步队列, 用于接收各种消息事件, 并将其发送到Scheduler调度器进行处理
其中包含了一个无限循环的调度函数, 用于从事件队列中获取新的事件, 并创建一个新的异步任务来执行管道调度器的处理逻辑
@@ -13,45 +12,50 @@ class:
import asyncio
from asyncio import Queue
from astrbot.core.pipeline.scheduler import PipelineScheduler
from astrbot.core import logger
from astrbot.core.astrbot_config_mgr import AstrBotConfigManager
from astrbot.core.pipeline.scheduler import PipelineScheduler
from .platform import AstrMessageEvent
class EventBus:
"""事件总线: 用于处理事件的分发和处理
"""用于处理事件的分发和处理"""
维护一个异步队列, 来接受各种消息事件
"""
def __init__(self, event_queue: Queue, pipeline_scheduler: PipelineScheduler):
def __init__(
self,
event_queue: Queue,
pipeline_scheduler_mapping: dict[str, PipelineScheduler],
astrbot_config_mgr: AstrBotConfigManager = None,
):
self.event_queue = event_queue # 事件队列
self.pipeline_scheduler = pipeline_scheduler # 管道调度器
# abconf uuid -> scheduler
self.pipeline_scheduler_mapping = pipeline_scheduler_mapping
self.astrbot_config_mgr = astrbot_config_mgr
async def dispatch(self):
"""无限循环的调度函数, 从事件队列中获取新的事件, 打印日志并创建一个新的异步任务来执行管道调度器的处理逻辑"""
while True:
event: AstrMessageEvent = (
await self.event_queue.get()
) # 从事件队列中获取新的事件
self._print_event(event) # 打印日志
asyncio.create_task(
self.pipeline_scheduler.execute(event)
) # 创建新的异步任务来执行管道调度器的处理逻辑
event: AstrMessageEvent = await self.event_queue.get()
conf_info = self.astrbot_config_mgr.get_conf_info(event.unified_msg_origin)
self._print_event(event, conf_info["name"])
scheduler = self.pipeline_scheduler_mapping.get(conf_info["id"])
asyncio.create_task(scheduler.execute(event))
def _print_event(self, event: AstrMessageEvent):
def _print_event(self, event: AstrMessageEvent, conf_name: str):
"""用于记录事件信息
Args:
event (AstrMessageEvent): 事件对象
"""
# 如果有发送者名称: [平台名] 发送者名称/发送者ID: 消息概要
if event.get_sender_name():
logger.info(
f"[{event.get_platform_name()}] {event.get_sender_name()}/{event.get_sender_id()}: {event.get_message_outline()}"
f"[{conf_name}] [{event.get_platform_id()}({event.get_platform_name()})] {event.get_sender_name()}/{event.get_sender_id()}: {event.get_message_outline()}",
)
# 没有发送者名称: [平台名] 发送者ID: 消息概要
else:
logger.info(
f"[{event.get_platform_name()}] {event.get_sender_id()}: {event.get_message_outline()}"
f"[{conf_name}] [{event.get_platform_id()}({event.get_platform_name()})] {event.get_sender_id()}: {event.get_message_outline()}",
)
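The dispatch loop now resolves a per-configuration scheduler from pipeline_scheduler_mapping before spawning a task. A generic asyncio sketch of that consume-and-dispatch pattern (stand-in scheduler class and event shape; not the project's PipelineScheduler):

import asyncio

class EchoScheduler:
    async def execute(self, event):
        print("handled:", event)

async def dispatch(queue: asyncio.Queue, schedulers: dict[str, EchoScheduler]):
    while True:
        event = await queue.get()                      # e.g. {"conf_id": "default", ...}
        scheduler = schedulers.get(event["conf_id"])
        if scheduler:
            asyncio.create_task(scheduler.execute(event))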

View File

@@ -0,0 +1,9 @@
from __future__ import annotations
class AstrBotError(Exception):
"""Base exception for all AstrBot errors."""
class ProviderNotFoundError(AstrBotError):
"""Raised when a specified provider is not found."""

View File

@@ -1,7 +1,9 @@
import asyncio
import os
import uuid
import platform
import time
import uuid
from urllib.parse import unquote, urlparse
class FileTokenService:
@@ -15,11 +17,18 @@ class FileTokenService:
async def _cleanup_expired_tokens(self):
"""清理过期的令牌"""
now = time.time()
expired_tokens = [token for token, (_, expire) in self.staged_files.items() if expire < now]
expired_tokens = [
token for token, (_, expire) in self.staged_files.items() if expire < now
]
for token in expired_tokens:
self.staged_files.pop(token, None)
async def register_file(self, file_path: str, timeout: float = None) -> str:
async def check_token_expired(self, file_token: str) -> bool:
async with self.lock:
await self._cleanup_expired_tokens()
return file_token not in self.staged_files
async def register_file(self, file_path: str, timeout: float | None = None) -> str:
"""向令牌服务注册一个文件。
Args:
@@ -31,16 +40,36 @@ class FileTokenService:
Raises:
FileNotFoundError: 当路径不存在时抛出
"""
# 处理 file:///
try:
parsed_uri = urlparse(file_path)
if parsed_uri.scheme == "file":
local_path = unquote(parsed_uri.path)
if platform.system() == "Windows" and local_path.startswith("/"):
local_path = local_path[1:]
else:
# 如果没有 file:/// 前缀,则认为是普通路径
local_path = file_path
except Exception:
# 解析失败时,按原路径处理
local_path = file_path
async with self.lock:
await self._cleanup_expired_tokens()
if not os.path.exists(file_path):
raise FileNotFoundError(f"文件不存在: {file_path}")
if not os.path.exists(local_path):
raise FileNotFoundError(
f"文件不存在: {local_path} (原始输入: {file_path})",
)
file_token = str(uuid.uuid4())
expire_time = time.time() + (timeout if timeout is not None else self.default_timeout)
self.staged_files[file_token] = (file_path, expire_time)
expire_time = time.time() + (
timeout if timeout is not None else self.default_timeout
)
# 存储转换后的真实路径
self.staged_files[file_token] = (local_path, expire_time)
return file_token
async def handle_file(self, file_token: str) -> str:
@@ -55,6 +84,7 @@ class FileTokenService:
Raises:
KeyError: 当令牌不存在或已过期时抛出
FileNotFoundError: 当文件本身已被删除时抛出
"""
async with self.lock:
await self._cleanup_expired_tokens()
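register_file() now accepts file:/// URIs and converts them to local paths, stripping the leading slash that urlparse leaves in front of Windows drive letters. A small illustration of that conversion (the URI is hypothetical):

import platform
from urllib.parse import unquote, urlparse

uri = "file:///C:/Users/astr/%E6%96%87%E6%A1%A3/report.pdf"
parsed = urlparse(uri)
local_path = unquote(parsed.path)        # "/C:/Users/astr/文档/report.pdf"
if platform.system() == "Windows" and local_path.startswith("/"):
    local_path = local_path[1:]          # "C:/Users/astr/文档/report.pdf"
print(local_path)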

View File

@@ -1,5 +1,4 @@
"""
AstrBot 启动器,负责初始化和启动核心组件和仪表板服务器。
"""AstrBot 启动器,负责初始化和启动核心组件和仪表板服务器。
工作流程:
1. 初始化核心生命周期, 传递数据库和日志代理实例到核心生命周期
@@ -8,10 +7,10 @@ AstrBot 启动器,负责初始化和启动核心组件和仪表板服务器。
import asyncio
import traceback
from astrbot.core import logger
from astrbot.core import LogBroker, logger
from astrbot.core.core_lifecycle import AstrBotCoreLifecycle
from astrbot.core.db import BaseDatabase
from astrbot.core import LogBroker
from astrbot.dashboard.server import AstrBotDashboard
@@ -22,6 +21,7 @@ class InitialLoader:
self.db = db
self.logger = logger
self.log_broker = log_broker
self.webui_dir: str | None = None
async def start(self):
core_lifecycle = AstrBotCoreLifecycle(self.log_broker, self.db)
@@ -35,13 +35,21 @@ class InitialLoader:
core_task = core_lifecycle.start()
self.dashboard_server = AstrBotDashboard(
core_lifecycle, self.db, core_lifecycle.dashboard_shutdown_event
)
task = asyncio.gather(
core_task, self.dashboard_server.run()
) # 启动核心任务和仪表板服务器
webui_dir = self.webui_dir
self.dashboard_server = AstrBotDashboard(
core_lifecycle,
self.db,
core_lifecycle.dashboard_shutdown_event,
webui_dir,
)
coro = self.dashboard_server.run()
if coro:
# 启动核心任务和仪表板服务器
task = asyncio.gather(core_task, coro)
else:
task = core_task
try:
await task # 整个AstrBot在这里运行
except asyncio.CancelledError:

View File

@@ -0,0 +1,9 @@
"""文档分块模块"""
from .base import BaseChunker
from .fixed_size import FixedSizeChunker
__all__ = [
"BaseChunker",
"FixedSizeChunker",
]

View File

@@ -0,0 +1,25 @@
"""文档分块器基类
定义了文档分块处理的抽象接口。
"""
from abc import ABC, abstractmethod
class BaseChunker(ABC):
"""分块器基类
所有分块器都应该继承此类并实现 chunk 方法。
"""
@abstractmethod
async def chunk(self, text: str, **kwargs) -> list[str]:
"""将文本分块
Args:
text: 输入文本
Returns:
list[str]: 分块后的文本列表
"""

View File

@@ -0,0 +1,59 @@
"""固定大小分块器
按照固定的字符数将文本分块,支持重叠区域。
"""
from .base import BaseChunker
class FixedSizeChunker(BaseChunker):
"""固定大小分块器
按照固定的字符数分块,并支持块之间的重叠。
"""
def __init__(self, chunk_size: int = 512, chunk_overlap: int = 50):
"""初始化分块器
Args:
chunk_size: 块的大小(字符数)
chunk_overlap: 块之间的重叠字符数
"""
self.chunk_size = chunk_size
self.chunk_overlap = chunk_overlap
async def chunk(self, text: str, **kwargs) -> list[str]:
"""固定大小分块
Args:
text: 输入文本
chunk_size: 每个文本块的最大大小
chunk_overlap: 每个文本块之间的重叠部分大小
Returns:
list[str]: 分块后的文本列表
"""
chunk_size = kwargs.get("chunk_size", self.chunk_size)
chunk_overlap = kwargs.get("chunk_overlap", self.chunk_overlap)
chunks = []
start = 0
text_len = len(text)
while start < text_len:
end = start + chunk_size
chunk = text[start:end]
if chunk:
chunks.append(chunk)
# 移动窗口,保留重叠部分
start = end - chunk_overlap
# 防止无限循环: 如果重叠过大,直接移到end
if start >= end or chunk_overlap >= chunk_size:
start = end
return chunks
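The window advances by chunk_size - chunk_overlap characters each step. A worked example with illustrative sizes (a 10-character string, chunk_size=5, chunk_overlap=2):

import asyncio

chunker = FixedSizeChunker(chunk_size=5, chunk_overlap=2)
print(asyncio.run(chunker.chunk("abcdefghij")))  # ['abcde', 'defgh', 'ghij', 'j']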

View File

@@ -0,0 +1,161 @@
from collections.abc import Callable
from .base import BaseChunker
class RecursiveCharacterChunker(BaseChunker):
def __init__(
self,
chunk_size: int = 500,
chunk_overlap: int = 100,
length_function: Callable[[str], int] = len,
is_separator_regex: bool = False,
separators: list[str] | None = None,
):
"""初始化递归字符文本分割器
Args:
chunk_size: 每个文本块的最大大小
chunk_overlap: 每个文本块之间的重叠部分大小
length_function: 计算文本长度的函数
is_separator_regex: 分隔符是否为正则表达式
separators: 用于分割文本的分隔符列表,按优先级排序
"""
self.chunk_size = chunk_size
self.chunk_overlap = chunk_overlap
self.length_function = length_function
self.is_separator_regex = is_separator_regex
# 默认分隔符列表,按优先级从高到低
self.separators = separators or [
"\n\n", # 段落
"\n", # 换行
"", # 中文句子
"", # 中文逗号
". ", # 句子
", ", # 逗号分隔
" ", # 单词
"", # 字符
]
async def chunk(self, text: str, **kwargs) -> list[str]:
"""递归地将文本分割成块
Args:
text: 要分割的文本
chunk_size: 每个文本块的最大大小
chunk_overlap: 每个文本块之间的重叠部分大小
Returns:
分割后的文本块列表
"""
if not text:
return []
overlap = kwargs.get("chunk_overlap", self.chunk_overlap)
chunk_size = kwargs.get("chunk_size", self.chunk_size)
text_length = self.length_function(text)
if text_length <= chunk_size:
return [text]
for separator in self.separators:
if separator == "":
return self._split_by_character(text, chunk_size, overlap)
if separator in text:
splits = text.split(separator)
# 重新添加分隔符(除了最后一个片段)
splits = [s + separator for s in splits[:-1]] + [splits[-1]]
splits = [s for s in splits if s]
if len(splits) == 1:
continue
# 递归合并分割后的文本块
final_chunks = []
current_chunk = []
current_chunk_length = 0
for split in splits:
split_length = self.length_function(split)
# 如果单个分割部分已经超过了chunk_size需要递归分割
if split_length > chunk_size:
# 先处理当前积累的块
if current_chunk:
combined_text = "".join(current_chunk)
final_chunks.extend(
await self.chunk(
combined_text,
chunk_size=chunk_size,
chunk_overlap=overlap,
),
)
current_chunk = []
current_chunk_length = 0
# 递归分割过大的部分
final_chunks.extend(
await self.chunk(
split,
chunk_size=chunk_size,
chunk_overlap=overlap,
),
)
# 如果添加这部分会使当前块超过chunk_size
elif current_chunk_length + split_length > chunk_size:
# 合并当前块并添加到结果中
combined_text = "".join(current_chunk)
final_chunks.append(combined_text)
# 处理重叠部分
overlap_start = max(0, len(combined_text) - overlap)
if overlap_start > 0:
overlap_text = combined_text[overlap_start:]
current_chunk = [overlap_text, split]
current_chunk_length = (
self.length_function(overlap_text) + split_length
)
else:
current_chunk = [split]
current_chunk_length = split_length
else:
# 添加到当前块
current_chunk.append(split)
current_chunk_length += split_length
# 处理剩余的块
if current_chunk:
final_chunks.append("".join(current_chunk))
return final_chunks
return [text]
def _split_by_character(
self,
text: str,
chunk_size: int | None = None,
overlap: int | None = None,
) -> list[str]:
"""按字符级别分割文本
Args:
text: 要分割的文本
Returns:
分割后的文本块列表
"""
chunk_size = chunk_size or self.chunk_size
overlap = overlap or self.chunk_overlap
result = []
for i in range(0, len(text), chunk_size - overlap):
end = min(i + chunk_size, len(text))
result.append(text[i:end])
if end == len(text):
break
return result
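The recursive splitter tries separators in priority order (paragraphs, newlines, sentences, words) and only falls back to character-level splitting when no separator matches. A hedged usage sketch (text and sizes are illustrative; exact chunk boundaries depend on which separators the text contains):

import asyncio

text = (
    "AstrBot is a chat bot framework.\n\n"
    "It supports multiple platforms and providers.\n\n"
    "The knowledge base module chunks documents before embedding them."
)
chunker = RecursiveCharacterChunker(chunk_size=60, chunk_overlap=10)
for c in asyncio.run(chunker.chunk(text)):
    print(len(c), repr(c[:30]))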

View File

@@ -0,0 +1,301 @@
from contextlib import asynccontextmanager
from pathlib import Path
from sqlalchemy import delete, func, select, text, update
from sqlalchemy.ext.asyncio import AsyncSession, async_sessionmaker, create_async_engine
from sqlmodel import col, desc
from astrbot.core import logger
from astrbot.core.db.vec_db.faiss_impl import FaissVecDB
from astrbot.core.knowledge_base.models import (
BaseKBModel,
KBDocument,
KBMedia,
KnowledgeBase,
)
class KBSQLiteDatabase:
def __init__(self, db_path: str = "data/knowledge_base/kb.db") -> None:
"""初始化知识库数据库
Args:
db_path: 数据库文件路径, 默认为 data/knowledge_base/kb.db
"""
self.db_path = db_path
self.DATABASE_URL = f"sqlite+aiosqlite:///{db_path}"
self.inited = False
# 确保目录存在
Path(db_path).parent.mkdir(parents=True, exist_ok=True)
# 创建异步引擎
self.engine = create_async_engine(
self.DATABASE_URL,
echo=False,
pool_pre_ping=True,
pool_recycle=3600,
)
# 创建会话工厂
self.async_session = async_sessionmaker(
self.engine,
class_=AsyncSession,
expire_on_commit=False,
)
@asynccontextmanager
async def get_db(self):
"""获取数据库会话
用法:
async with kb_db.get_db() as session:
# 执行数据库操作
result = await session.execute(stmt)
"""
async with self.async_session() as session:
yield session
async def initialize(self) -> None:
"""初始化数据库,创建表并配置 SQLite 参数"""
async with self.engine.begin() as conn:
# 创建所有知识库相关表
await conn.run_sync(BaseKBModel.metadata.create_all)
# 配置 SQLite 性能优化参数
await conn.execute(text("PRAGMA journal_mode=WAL"))
await conn.execute(text("PRAGMA synchronous=NORMAL"))
await conn.execute(text("PRAGMA cache_size=20000"))
await conn.execute(text("PRAGMA temp_store=MEMORY"))
await conn.execute(text("PRAGMA mmap_size=134217728"))
await conn.execute(text("PRAGMA optimize"))
await conn.commit()
self.inited = True
async def migrate_to_v1(self) -> None:
"""执行知识库数据库 v1 迁移
创建所有必要的索引以优化查询性能
"""
async with self.get_db() as session:
session: AsyncSession
async with session.begin():
# 创建知识库表索引
await session.execute(
text(
"CREATE INDEX IF NOT EXISTS idx_kb_kb_id "
"ON knowledge_bases(kb_id)",
),
)
await session.execute(
text(
"CREATE INDEX IF NOT EXISTS idx_kb_name "
"ON knowledge_bases(kb_name)",
),
)
await session.execute(
text(
"CREATE INDEX IF NOT EXISTS idx_kb_created_at "
"ON knowledge_bases(created_at)",
),
)
# 创建文档表索引
await session.execute(
text(
"CREATE INDEX IF NOT EXISTS idx_doc_doc_id "
"ON kb_documents(doc_id)",
),
)
await session.execute(
text(
"CREATE INDEX IF NOT EXISTS idx_doc_kb_id "
"ON kb_documents(kb_id)",
),
)
await session.execute(
text(
"CREATE INDEX IF NOT EXISTS idx_doc_name "
"ON kb_documents(doc_name)",
),
)
await session.execute(
text(
"CREATE INDEX IF NOT EXISTS idx_doc_type "
"ON kb_documents(file_type)",
),
)
await session.execute(
text(
"CREATE INDEX IF NOT EXISTS idx_doc_created_at "
"ON kb_documents(created_at)",
),
)
# 创建多媒体表索引
await session.execute(
text(
"CREATE INDEX IF NOT EXISTS idx_media_media_id "
"ON kb_media(media_id)",
),
)
await session.execute(
text(
"CREATE INDEX IF NOT EXISTS idx_media_doc_id "
"ON kb_media(doc_id)",
),
)
await session.execute(
text(
"CREATE INDEX IF NOT EXISTS idx_media_kb_id ON kb_media(kb_id)",
),
)
await session.execute(
text(
"CREATE INDEX IF NOT EXISTS idx_media_type "
"ON kb_media(media_type)",
),
)
await session.commit()
async def close(self) -> None:
"""关闭数据库连接"""
await self.engine.dispose()
logger.info(f"知识库数据库已关闭: {self.db_path}")
async def get_kb_by_id(self, kb_id: str) -> KnowledgeBase | None:
"""根据 ID 获取知识库"""
async with self.get_db() as session:
stmt = select(KnowledgeBase).where(col(KnowledgeBase.kb_id) == kb_id)
result = await session.execute(stmt)
return result.scalar_one_or_none()
async def get_kb_by_name(self, kb_name: str) -> KnowledgeBase | None:
"""根据名称获取知识库"""
async with self.get_db() as session:
stmt = select(KnowledgeBase).where(col(KnowledgeBase.kb_name) == kb_name)
result = await session.execute(stmt)
return result.scalar_one_or_none()
async def list_kbs(self, offset: int = 0, limit: int = 100) -> list[KnowledgeBase]:
"""列出所有知识库"""
async with self.get_db() as session:
stmt = (
select(KnowledgeBase)
.offset(offset)
.limit(limit)
.order_by(desc(KnowledgeBase.created_at))
)
result = await session.execute(stmt)
return list(result.scalars().all())
async def count_kbs(self) -> int:
"""统计知识库数量"""
async with self.get_db() as session:
stmt = select(func.count(col(KnowledgeBase.id)))
result = await session.execute(stmt)
return result.scalar() or 0
# ===== 文档查询 =====
async def get_document_by_id(self, doc_id: str) -> KBDocument | None:
"""根据 ID 获取文档"""
async with self.get_db() as session:
stmt = select(KBDocument).where(col(KBDocument.doc_id) == doc_id)
result = await session.execute(stmt)
return result.scalar_one_or_none()
async def list_documents_by_kb(
self,
kb_id: str,
offset: int = 0,
limit: int = 100,
) -> list[KBDocument]:
"""列出知识库的所有文档"""
async with self.get_db() as session:
stmt = (
select(KBDocument)
.where(col(KBDocument.kb_id) == kb_id)
.offset(offset)
.limit(limit)
.order_by(desc(KBDocument.created_at))
)
result = await session.execute(stmt)
return list(result.scalars().all())
async def count_documents_by_kb(self, kb_id: str) -> int:
"""统计知识库的文档数量"""
async with self.get_db() as session:
stmt = select(func.count(col(KBDocument.id))).where(
col(KBDocument.kb_id) == kb_id,
)
result = await session.execute(stmt)
return result.scalar() or 0
async def get_document_with_metadata(self, doc_id: str) -> dict | None:
async with self.get_db() as session:
stmt = (
select(KBDocument, KnowledgeBase)
.join(KnowledgeBase, col(KBDocument.kb_id) == col(KnowledgeBase.kb_id))
.where(col(KBDocument.doc_id) == doc_id)
)
result = await session.execute(stmt)
row = result.first()
if not row:
return None
return {
"document": row[0],
"knowledge_base": row[1],
}
async def delete_document_by_id(self, doc_id: str, vec_db: FaissVecDB):
"""删除单个文档及其相关数据"""
# 在知识库表中删除
async with self.get_db() as session, session.begin():
# 删除文档记录
delete_stmt = delete(KBDocument).where(col(KBDocument.doc_id) == doc_id)
await session.execute(delete_stmt)
await session.commit()
# 在 vec db 中删除相关向量
await vec_db.delete_documents(metadata_filters={"kb_doc_id": doc_id})
# ===== 多媒体查询 =====
async def list_media_by_doc(self, doc_id: str) -> list[KBMedia]:
"""列出文档的所有多媒体资源"""
async with self.get_db() as session:
stmt = select(KBMedia).where(col(KBMedia.doc_id) == doc_id)
result = await session.execute(stmt)
return list(result.scalars().all())
async def get_media_by_id(self, media_id: str) -> KBMedia | None:
"""根据 ID 获取多媒体资源"""
async with self.get_db() as session:
stmt = select(KBMedia).where(col(KBMedia.media_id) == media_id)
result = await session.execute(stmt)
return result.scalar_one_or_none()
async def update_kb_stats(self, kb_id: str, vec_db: FaissVecDB) -> None:
"""更新知识库统计信息"""
chunk_cnt = await vec_db.count_documents()
async with self.get_db() as session, session.begin():
update_stmt = (
update(KnowledgeBase)
.where(col(KnowledgeBase.kb_id) == kb_id)
.values(
doc_count=select(func.count(col(KBDocument.id)))
.where(col(KBDocument.kb_id) == kb_id)
.scalar_subquery(),
chunk_count=chunk_cnt,
)
)
await session.execute(update_stmt)
await session.commit()
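A minimal usage sketch for the metadata database above (the path mirrors the default; everything else is illustrative):

import asyncio

async def main():
    db = KBSQLiteDatabase("data/knowledge_base/kb.db")
    await db.initialize()
    await db.migrate_to_v1()           # creates the indexes shown above
    kbs = await db.list_kbs(offset=0, limit=20)
    print([kb.kb_name for kb in kbs])
    await db.close()

asyncio.run(main())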

View File

@@ -0,0 +1,642 @@
import asyncio
import json
import re
import time
import uuid
from pathlib import Path
import aiofiles
from astrbot.core import logger
from astrbot.core.db.vec_db.base import BaseVecDB
from astrbot.core.db.vec_db.faiss_impl.vec_db import FaissVecDB
from astrbot.core.provider.manager import ProviderManager
from astrbot.core.provider.provider import (
EmbeddingProvider,
RerankProvider,
)
from astrbot.core.provider.provider import (
Provider as LLMProvider,
)
from .chunking.base import BaseChunker
from .chunking.recursive import RecursiveCharacterChunker
from .kb_db_sqlite import KBSQLiteDatabase
from .models import KBDocument, KBMedia, KnowledgeBase
from .parsers.url_parser import extract_text_from_url
from .parsers.util import select_parser
from .prompts import TEXT_REPAIR_SYSTEM_PROMPT
class RateLimiter:
"""一个简单的速率限制器"""
def __init__(self, max_rpm: int):
self.max_per_minute = max_rpm
self.interval = 60.0 / max_rpm if max_rpm > 0 else 0
self.last_call_time = 0
async def __aenter__(self):
if self.interval == 0:
return
now = time.monotonic()
elapsed = now - self.last_call_time
if elapsed < self.interval:
await asyncio.sleep(self.interval - elapsed)
self.last_call_time = time.monotonic()
async def __aexit__(self, exc_type, exc_val, exc_tb):
pass
async def _repair_and_translate_chunk_with_retry(
chunk: str,
repair_llm_service: LLMProvider,
rate_limiter: RateLimiter,
max_retries: int = 2,
) -> list[str]:
"""
Repairs, translates, and optionally re-chunks a single text chunk using the small LLM, with rate limiting.
"""
# 为了防止 LLM 上下文污染,在 user_prompt 中也加入明确的指令
user_prompt = f"""IGNORE ALL PREVIOUS INSTRUCTIONS. Your ONLY task is to process the following text chunk according to the system prompt provided.
Text chunk to process:
---
{chunk}
---
"""
for attempt in range(max_retries + 1):
try:
async with rate_limiter:
response = await repair_llm_service.text_chat(
prompt=user_prompt, system_prompt=TEXT_REPAIR_SYSTEM_PROMPT
)
llm_output = response.completion_text
if "<discard_chunk />" in llm_output:
return [] # Signal to discard this chunk
# More robust regex to handle potential LLM formatting errors (spaces, newlines in tags)
matches = re.findall(
r"<\s*repaired_text\s*>\s*(.*?)\s*<\s*/\s*repaired_text\s*>",
llm_output,
re.DOTALL,
)
if matches:
# Further cleaning to ensure no empty strings are returned
return [m.strip() for m in matches if m.strip()]
else:
# If no valid tags and not explicitly discarded, discard it to be safe.
return []
except Exception as e:
logger.warning(
f" - LLM call failed on attempt {attempt + 1}/{max_retries + 1}. Error: {str(e)}"
)
logger.error(
f" - Failed to process chunk after {max_retries + 1} attempts. Using original text."
)
return [chunk]
class KBHelper:
vec_db: BaseVecDB
kb: KnowledgeBase
def __init__(
self,
kb_db: KBSQLiteDatabase,
kb: KnowledgeBase,
provider_manager: ProviderManager,
kb_root_dir: str,
chunker: BaseChunker,
):
self.kb_db = kb_db
self.kb = kb
self.prov_mgr = provider_manager
self.kb_root_dir = kb_root_dir
self.chunker = chunker
self.kb_dir = Path(self.kb_root_dir) / self.kb.kb_id
self.kb_medias_dir = Path(self.kb_dir) / "medias" / self.kb.kb_id
self.kb_files_dir = Path(self.kb_dir) / "files" / self.kb.kb_id
self.kb_medias_dir.mkdir(parents=True, exist_ok=True)
self.kb_files_dir.mkdir(parents=True, exist_ok=True)
async def initialize(self):
await self._ensure_vec_db()
async def get_ep(self) -> EmbeddingProvider:
if not self.kb.embedding_provider_id:
raise ValueError(f"知识库 {self.kb.kb_name} 未配置 Embedding Provider")
ep: EmbeddingProvider = await self.prov_mgr.get_provider_by_id(
self.kb.embedding_provider_id,
) # type: ignore
if not ep:
raise ValueError(
f"无法找到 ID 为 {self.kb.embedding_provider_id} 的 Embedding Provider",
)
return ep
async def get_rp(self) -> RerankProvider | None:
if not self.kb.rerank_provider_id:
return None
rp: RerankProvider = await self.prov_mgr.get_provider_by_id(
self.kb.rerank_provider_id,
) # type: ignore
if not rp:
raise ValueError(
f"无法找到 ID 为 {self.kb.rerank_provider_id} 的 Rerank Provider",
)
return rp
async def _ensure_vec_db(self) -> FaissVecDB:
if not self.kb.embedding_provider_id:
raise ValueError(f"知识库 {self.kb.kb_name} 未配置 Embedding Provider")
ep = await self.get_ep()
rp = await self.get_rp()
vec_db = FaissVecDB(
doc_store_path=str(self.kb_dir / "doc.db"),
index_store_path=str(self.kb_dir / "index.faiss"),
embedding_provider=ep,
rerank_provider=rp,
)
await vec_db.initialize()
self.vec_db = vec_db
return vec_db
async def delete_vec_db(self):
"""删除知识库的向量数据库和所有相关文件"""
import shutil
await self.terminate()
if self.kb_dir.exists():
shutil.rmtree(self.kb_dir)
async def terminate(self):
if self.vec_db:
await self.vec_db.close()
async def upload_document(
self,
file_name: str,
file_content: bytes | None,
file_type: str,
chunk_size: int = 512,
chunk_overlap: int = 50,
batch_size: int = 32,
tasks_limit: int = 3,
max_retries: int = 3,
progress_callback=None,
pre_chunked_text: list[str] | None = None,
) -> KBDocument:
"""上传并处理文档(带原子性保证和失败清理)
流程:
1. 保存原始文件
2. 解析文档内容
3. 提取多媒体资源
4. 分块处理
5. 生成向量并存储
6. 保存元数据(事务)
7. 更新统计
Args:
progress_callback: 进度回调函数,接收参数 (stage, current, total)
- stage: 当前阶段 ('parsing', 'chunking', 'embedding')
- current: 当前进度
- total: 总数
"""
await self._ensure_vec_db()
doc_id = str(uuid.uuid4())
media_paths: list[Path] = []
file_size = 0
# file_path = self.kb_files_dir / f"{doc_id}.{file_type}"
# async with aiofiles.open(file_path, "wb") as f:
# await f.write(file_content)
try:
chunks_text = []
saved_media = []
if pre_chunked_text is not None:
# 如果提供了预分块文本,直接使用
chunks_text = pre_chunked_text
file_size = sum(len(chunk) for chunk in chunks_text)
logger.info(f"使用预分块文本进行上传,共 {len(chunks_text)} 个块。")
else:
# 否则,执行标准的文件解析和分块流程
if file_content is None:
raise ValueError(
"当未提供 pre_chunked_text 时file_content 不能为空。"
)
file_size = len(file_content)
# 阶段1: 解析文档
if progress_callback:
await progress_callback("parsing", 0, 100)
parser = await select_parser(f".{file_type}")
parse_result = await parser.parse(file_content, file_name)
text_content = parse_result.text
media_items = parse_result.media
if progress_callback:
await progress_callback("parsing", 100, 100)
# 保存媒体文件
for media_item in media_items:
media = await self._save_media(
doc_id=doc_id,
media_type=media_item.media_type,
file_name=media_item.file_name,
content=media_item.content,
mime_type=media_item.mime_type,
)
saved_media.append(media)
media_paths.append(Path(media.file_path))
# 阶段2: 分块
if progress_callback:
await progress_callback("chunking", 0, 100)
chunks_text = await self.chunker.chunk(
text_content,
chunk_size=chunk_size,
chunk_overlap=chunk_overlap,
)
contents = []
metadatas = []
for idx, chunk_text in enumerate(chunks_text):
contents.append(chunk_text)
metadatas.append(
{
"kb_id": self.kb.kb_id,
"kb_doc_id": doc_id,
"chunk_index": idx,
},
)
if progress_callback:
await progress_callback("chunking", 100, 100)
# 阶段3: 生成向量(带进度回调)
async def embedding_progress_callback(current, total):
if progress_callback:
await progress_callback("embedding", current, total)
await self.vec_db.insert_batch(
contents=contents,
metadatas=metadatas,
batch_size=batch_size,
tasks_limit=tasks_limit,
max_retries=max_retries,
progress_callback=embedding_progress_callback,
)
# 保存文档的元数据
doc = KBDocument(
doc_id=doc_id,
kb_id=self.kb.kb_id,
doc_name=file_name,
file_type=file_type,
file_size=file_size,
# file_path=str(file_path),
file_path="",
chunk_count=len(chunks_text),
media_count=0,
)
async with self.kb_db.get_db() as session:
async with session.begin():
session.add(doc)
for media in saved_media:
session.add(media)
await session.commit()
await session.refresh(doc)
vec_db: FaissVecDB = self.vec_db # type: ignore
await self.kb_db.update_kb_stats(kb_id=self.kb.kb_id, vec_db=vec_db)
await self.refresh_kb()
await self.refresh_document(doc_id)
return doc
except Exception as e:
logger.error(f"上传文档失败: {e}")
# if file_path.exists():
# file_path.unlink()
for media_path in media_paths:
try:
if media_path.exists():
media_path.unlink()
except Exception as me:
logger.warning(f"清理多媒体文件失败 {media_path}: {me}")
raise e
async def list_documents(
self,
offset: int = 0,
limit: int = 100,
) -> list[KBDocument]:
"""列出知识库的所有文档"""
docs = await self.kb_db.list_documents_by_kb(self.kb.kb_id, offset, limit)
return docs
async def get_document(self, doc_id: str) -> KBDocument | None:
"""获取单个文档"""
doc = await self.kb_db.get_document_by_id(doc_id)
return doc
async def delete_document(self, doc_id: str):
"""删除单个文档及其相关数据"""
await self.kb_db.delete_document_by_id(
doc_id=doc_id,
vec_db=self.vec_db, # type: ignore
)
await self.kb_db.update_kb_stats(
kb_id=self.kb.kb_id,
vec_db=self.vec_db, # type: ignore
)
await self.refresh_kb()
async def delete_chunk(self, chunk_id: str, doc_id: str):
"""删除单个文本块及其相关数据"""
vec_db: FaissVecDB = self.vec_db # type: ignore
await vec_db.delete(chunk_id)
await self.kb_db.update_kb_stats(
kb_id=self.kb.kb_id,
vec_db=self.vec_db, # type: ignore
)
await self.refresh_kb()
await self.refresh_document(doc_id)
async def refresh_kb(self):
if self.kb:
kb = await self.kb_db.get_kb_by_id(self.kb.kb_id)
if kb:
self.kb = kb
async def refresh_document(self, doc_id: str) -> None:
"""更新文档的元数据"""
doc = await self.get_document(doc_id)
if not doc:
raise ValueError(f"无法找到 ID 为 {doc_id} 的文档")
chunk_count = await self.get_chunk_count_by_doc_id(doc_id)
doc.chunk_count = chunk_count
async with self.kb_db.get_db() as session:
async with session.begin():
session.add(doc)
await session.commit()
await session.refresh(doc)
async def get_chunks_by_doc_id(
self,
doc_id: str,
offset: int = 0,
limit: int = 100,
) -> list[dict]:
"""获取文档的所有块及其元数据"""
vec_db: FaissVecDB = self.vec_db # type: ignore
chunks = await vec_db.document_storage.get_documents(
metadata_filters={"kb_doc_id": doc_id},
offset=offset,
limit=limit,
)
result = []
for chunk in chunks:
chunk_md = json.loads(chunk["metadata"])
result.append(
{
"chunk_id": chunk["doc_id"],
"doc_id": chunk_md["kb_doc_id"],
"kb_id": chunk_md["kb_id"],
"chunk_index": chunk_md["chunk_index"],
"content": chunk["text"],
"char_count": len(chunk["text"]),
},
)
return result
async def get_chunk_count_by_doc_id(self, doc_id: str) -> int:
"""获取文档的块数量"""
vec_db: FaissVecDB = self.vec_db # type: ignore
count = await vec_db.count_documents(metadata_filter={"kb_doc_id": doc_id})
return count
async def _save_media(
self,
doc_id: str,
media_type: str,
file_name: str,
content: bytes,
mime_type: str,
) -> KBMedia:
"""保存多媒体资源"""
media_id = str(uuid.uuid4())
ext = Path(file_name).suffix
# 保存文件
file_path = self.kb_medias_dir / doc_id / f"{media_id}{ext}"
file_path.parent.mkdir(parents=True, exist_ok=True)
async with aiofiles.open(file_path, "wb") as f:
await f.write(content)
media = KBMedia(
media_id=media_id,
doc_id=doc_id,
kb_id=self.kb.kb_id,
media_type=media_type,
file_name=file_name,
file_path=str(file_path),
file_size=len(content),
mime_type=mime_type,
)
return media
async def upload_from_url(
self,
url: str,
chunk_size: int = 512,
chunk_overlap: int = 50,
batch_size: int = 32,
tasks_limit: int = 3,
max_retries: int = 3,
progress_callback=None,
enable_cleaning: bool = False,
cleaning_provider_id: str | None = None,
) -> KBDocument:
"""从 URL 上传并处理文档(带原子性保证和失败清理)
Args:
url: 要提取内容的网页 URL
chunk_size: 文本块大小
chunk_overlap: 文本块重叠大小
batch_size: 批处理大小
tasks_limit: 并发任务限制
max_retries: 最大重试次数
progress_callback: 进度回调函数,接收参数 (stage, current, total)
- stage: 当前阶段 ('extracting', 'cleaning', 'parsing', 'chunking', 'embedding')
- current: 当前进度
- total: 总数
Returns:
KBDocument: 上传的文档对象
Raises:
ValueError: 如果 URL 为空或无法提取内容
IOError: 如果网络请求失败
"""
# 获取 Tavily API 密钥
config = self.prov_mgr.acm.default_conf
tavily_keys = config.get("provider_settings", {}).get(
"websearch_tavily_key", []
)
if not tavily_keys:
raise ValueError(
"Error: Tavily API key is not configured in provider_settings."
)
# 阶段1: 从 URL 提取内容
if progress_callback:
await progress_callback("extracting", 0, 100)
try:
text_content = await extract_text_from_url(url, tavily_keys)
except Exception as e:
logger.error(f"Failed to extract content from URL {url}: {e}")
raise OSError(f"Failed to extract content from URL {url}: {e}") from e
if not text_content:
raise ValueError(f"No content extracted from URL: {url}")
if progress_callback:
await progress_callback("extracting", 100, 100)
# 阶段2: (可选)清洗内容并分块
final_chunks = await self._clean_and_rechunk_content(
content=text_content,
url=url,
progress_callback=progress_callback,
enable_cleaning=enable_cleaning,
cleaning_provider_id=cleaning_provider_id,
chunk_size=chunk_size,
chunk_overlap=chunk_overlap,
)
if enable_cleaning and not final_chunks:
raise ValueError(
"内容清洗后未提取到有效文本。请尝试关闭内容清洗功能或更换更高性能的LLM模型后重试。"
)
# 创建一个虚拟文件名
file_name = url.split("/")[-1] or f"document_from_{url}"
if not Path(file_name).suffix:
file_name += ".url"
# 复用现有的 upload_document 方法,但传入预分块文本
return await self.upload_document(
file_name=file_name,
file_content=None,
file_type="url", # 使用 'url' 作为特殊文件类型
chunk_size=chunk_size,
chunk_overlap=chunk_overlap,
batch_size=batch_size,
tasks_limit=tasks_limit,
max_retries=max_retries,
progress_callback=progress_callback,
pre_chunked_text=final_chunks,
)
async def _clean_and_rechunk_content(
self,
content: str,
url: str,
progress_callback=None,
enable_cleaning: bool = False,
cleaning_provider_id: str | None = None,
repair_max_rpm: int = 60,
chunk_size: int = 512,
chunk_overlap: int = 50,
) -> list[str]:
"""
对从 URL 获取的内容进行清洗、修复、翻译和重新分块。
"""
if not enable_cleaning:
# 如果不启用清洗,则使用从前端传递的参数进行分块
logger.info(
f"内容清洗未启用,使用指定参数进行分块: chunk_size={chunk_size}, chunk_overlap={chunk_overlap}"
)
return await self.chunker.chunk(
content, chunk_size=chunk_size, chunk_overlap=chunk_overlap
)
if not cleaning_provider_id:
logger.warning(
"启用了内容清洗,但未提供 cleaning_provider_id跳过清洗并使用默认分块。"
)
return await self.chunker.chunk(content)
if progress_callback:
await progress_callback("cleaning", 0, 100)
try:
# 获取指定的 LLM Provider
llm_provider = await self.prov_mgr.get_provider_by_id(cleaning_provider_id)
if not llm_provider or not isinstance(llm_provider, LLMProvider):
raise ValueError(
f"无法找到 ID 为 {cleaning_provider_id} 的 LLM Provider 或类型不正确"
)
# 初步分块
# 优化分隔符,优先按段落分割,以获得更高质量的文本块
text_splitter = RecursiveCharacterChunker(
chunk_size=chunk_size,
chunk_overlap=chunk_overlap,
separators=["\n\n", "\n", " "], # 优先使用段落分隔符
)
initial_chunks = await text_splitter.chunk(content)
logger.info(f"初步分块完成,生成 {len(initial_chunks)} 个块用于修复。")
# 并发处理所有块
rate_limiter = RateLimiter(repair_max_rpm)
tasks = [
_repair_and_translate_chunk_with_retry(
chunk, llm_provider, rate_limiter
)
for chunk in initial_chunks
]
repaired_results = await asyncio.gather(*tasks, return_exceptions=True)
final_chunks = []
for i, result in enumerate(repaired_results):
if isinstance(result, Exception):
logger.warning(f"{i} 处理异常: {str(result)}. 回退到原始块。")
final_chunks.append(initial_chunks[i])
elif isinstance(result, list):
final_chunks.extend(result)
logger.info(
f"文本修复完成: {len(initial_chunks)} 个原始块 -> {len(final_chunks)} 个最终块。"
)
if progress_callback:
await progress_callback("cleaning", 100, 100)
return final_chunks
except Exception as e:
logger.error(f"使用 Provider '{cleaning_provider_id}' 清洗内容失败: {e}")
# 清洗失败,返回默认分块结果,保证流程不中断
return await self.chunker.chunk(content)
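RateLimiter spaces out the repair calls to at most max_rpm per minute by sleeping in __aenter__. A standalone illustration (the work() body stands in for the real text_chat call; the max_rpm value is illustrative):

import asyncio
import time

async def work(i, limiter):
    async with limiter:
        print(f"call {i} at t={time.monotonic():.2f}")

async def main():
    limiter = RateLimiter(max_rpm=120)   # ~0.5 s between successive calls
    for i in range(3):
        await work(i, limiter)

asyncio.run(main())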

View File

@@ -0,0 +1,330 @@
import traceback
from pathlib import Path
from astrbot.core import logger
from astrbot.core.provider.manager import ProviderManager
# from .chunking.fixed_size import FixedSizeChunker
from .chunking.recursive import RecursiveCharacterChunker
from .kb_db_sqlite import KBSQLiteDatabase
from .kb_helper import KBHelper
from .models import KBDocument, KnowledgeBase
from .retrieval.manager import RetrievalManager, RetrievalResult
from .retrieval.rank_fusion import RankFusion
from .retrieval.sparse_retriever import SparseRetriever
FILES_PATH = "data/knowledge_base"
DB_PATH = Path(FILES_PATH) / "kb.db"
"""Knowledge Base storage root directory"""
CHUNKER = RecursiveCharacterChunker()
class KnowledgeBaseManager:
kb_db: KBSQLiteDatabase
retrieval_manager: RetrievalManager
def __init__(
self,
provider_manager: ProviderManager,
):
Path(DB_PATH).parent.mkdir(parents=True, exist_ok=True)
self.provider_manager = provider_manager
self._session_deleted_callback_registered = False
self.kb_insts: dict[str, KBHelper] = {}
async def initialize(self):
"""初始化知识库模块"""
try:
logger.info("正在初始化知识库模块...")
# 初始化数据库
await self._init_kb_database()
# 初始化检索管理器
sparse_retriever = SparseRetriever(self.kb_db)
rank_fusion = RankFusion(self.kb_db)
self.retrieval_manager = RetrievalManager(
sparse_retriever=sparse_retriever,
rank_fusion=rank_fusion,
kb_db=self.kb_db,
)
await self.load_kbs()
except ImportError as e:
logger.error(f"知识库模块导入失败: {e}")
logger.warning("请确保已安装所需依赖: pypdf, aiofiles, Pillow, rank-bm25")
except Exception as e:
logger.error(f"知识库模块初始化失败: {e}")
logger.error(traceback.format_exc())
async def _init_kb_database(self):
self.kb_db = KBSQLiteDatabase(DB_PATH.as_posix())
await self.kb_db.initialize()
await self.kb_db.migrate_to_v1()
logger.info(f"KnowledgeBase database initialized: {DB_PATH}")
async def load_kbs(self):
"""加载所有知识库实例"""
kb_records = await self.kb_db.list_kbs()
for record in kb_records:
kb_helper = KBHelper(
kb_db=self.kb_db,
kb=record,
provider_manager=self.provider_manager,
kb_root_dir=FILES_PATH,
chunker=CHUNKER,
)
await kb_helper.initialize()
self.kb_insts[record.kb_id] = kb_helper
async def create_kb(
self,
kb_name: str,
description: str | None = None,
emoji: str | None = None,
embedding_provider_id: str | None = None,
rerank_provider_id: str | None = None,
chunk_size: int | None = None,
chunk_overlap: int | None = None,
top_k_dense: int | None = None,
top_k_sparse: int | None = None,
top_m_final: int | None = None,
) -> KBHelper:
"""创建新的知识库实例"""
kb = KnowledgeBase(
kb_name=kb_name,
description=description,
emoji=emoji or "📚",
embedding_provider_id=embedding_provider_id,
rerank_provider_id=rerank_provider_id,
chunk_size=chunk_size if chunk_size is not None else 512,
chunk_overlap=chunk_overlap if chunk_overlap is not None else 50,
top_k_dense=top_k_dense if top_k_dense is not None else 50,
top_k_sparse=top_k_sparse if top_k_sparse is not None else 50,
top_m_final=top_m_final if top_m_final is not None else 5,
)
async with self.kb_db.get_db() as session:
session.add(kb)
await session.commit()
await session.refresh(kb)
kb_helper = KBHelper(
kb_db=self.kb_db,
kb=kb,
provider_manager=self.provider_manager,
kb_root_dir=FILES_PATH,
chunker=CHUNKER,
)
await kb_helper.initialize()
self.kb_insts[kb.kb_id] = kb_helper
return kb_helper
async def get_kb(self, kb_id: str) -> KBHelper | None:
"""获取知识库实例"""
if kb_id in self.kb_insts:
return self.kb_insts[kb_id]
async def get_kb_by_name(self, kb_name: str) -> KBHelper | None:
"""通过名称获取知识库实例"""
for kb_helper in self.kb_insts.values():
if kb_helper.kb.kb_name == kb_name:
return kb_helper
return None
async def delete_kb(self, kb_id: str) -> bool:
"""删除知识库实例"""
kb_helper = await self.get_kb(kb_id)
if not kb_helper:
return False
await kb_helper.delete_vec_db()
async with self.kb_db.get_db() as session:
await session.delete(kb_helper.kb)
await session.commit()
self.kb_insts.pop(kb_id, None)
return True
async def list_kbs(self) -> list[KnowledgeBase]:
"""列出所有知识库实例"""
kbs = [kb_helper.kb for kb_helper in self.kb_insts.values()]
return kbs
async def update_kb(
self,
kb_id: str,
kb_name: str,
description: str | None = None,
emoji: str | None = None,
embedding_provider_id: str | None = None,
rerank_provider_id: str | None = None,
chunk_size: int | None = None,
chunk_overlap: int | None = None,
top_k_dense: int | None = None,
top_k_sparse: int | None = None,
top_m_final: int | None = None,
) -> KBHelper | None:
"""更新知识库实例"""
kb_helper = await self.get_kb(kb_id)
if not kb_helper:
return None
kb = kb_helper.kb
if kb_name is not None:
kb.kb_name = kb_name
if description is not None:
kb.description = description
if emoji is not None:
kb.emoji = emoji
if embedding_provider_id is not None:
kb.embedding_provider_id = embedding_provider_id
kb.rerank_provider_id = rerank_provider_id # 允许设置为 None
if chunk_size is not None:
kb.chunk_size = chunk_size
if chunk_overlap is not None:
kb.chunk_overlap = chunk_overlap
if top_k_dense is not None:
kb.top_k_dense = top_k_dense
if top_k_sparse is not None:
kb.top_k_sparse = top_k_sparse
if top_m_final is not None:
kb.top_m_final = top_m_final
async with self.kb_db.get_db() as session:
session.add(kb)
await session.commit()
await session.refresh(kb)
return kb_helper
async def retrieve(
self,
query: str,
kb_names: list[str],
top_k_fusion: int = 20,
top_m_final: int = 5,
) -> dict | None:
"""从指定知识库中检索相关内容"""
kb_ids = []
kb_id_helper_map = {}
for kb_name in kb_names:
if kb_helper := await self.get_kb_by_name(kb_name):
kb_ids.append(kb_helper.kb.kb_id)
kb_id_helper_map[kb_helper.kb.kb_id] = kb_helper
if not kb_ids:
return {}
results = await self.retrieval_manager.retrieve(
query=query,
kb_ids=kb_ids,
kb_id_helper_map=kb_id_helper_map,
top_k_fusion=top_k_fusion,
top_m_final=top_m_final,
)
if not results:
return None
context_text = self._format_context(results)
results_dict = [
{
"chunk_id": r.chunk_id,
"doc_id": r.doc_id,
"kb_id": r.kb_id,
"kb_name": r.kb_name,
"doc_name": r.doc_name,
"chunk_index": r.metadata.get("chunk_index", 0),
"content": r.content,
"score": r.score,
"char_count": r.metadata.get("char_count", 0),
}
for r in results
]
return {
"context_text": context_text,
"results": results_dict,
}
def _format_context(self, results: list[RetrievalResult]) -> str:
"""格式化知识上下文
Args:
results: 检索结果列表
Returns:
str: 格式化的上下文文本
"""
lines = ["以下是相关的知识库内容,请参考这些信息回答用户的问题:\n"]
for i, result in enumerate(results, 1):
lines.append(f"【知识 {i}")
lines.append(f"来源: {result.kb_name} / {result.doc_name}")
lines.append(f"内容: {result.content}")
lines.append(f"相关度: {result.score:.2f}")
lines.append("")
return "\n".join(lines)
async def terminate(self):
"""终止所有知识库实例,关闭数据库连接"""
for kb_id, kb_helper in self.kb_insts.items():
try:
await kb_helper.terminate()
except Exception as e:
logger.error(f"关闭知识库 {kb_id} 失败: {e}")
self.kb_insts.clear()
# 关闭元数据数据库
if hasattr(self, "kb_db") and self.kb_db:
try:
await self.kb_db.close()
except Exception as e:
logger.error(f"关闭知识库元数据数据库失败: {e}")
async def upload_from_url(
self,
kb_id: str,
url: str,
chunk_size: int = 512,
chunk_overlap: int = 50,
batch_size: int = 32,
tasks_limit: int = 3,
max_retries: int = 3,
progress_callback=None,
) -> KBDocument:
"""从 URL 上传文档到指定的知识库
Args:
kb_id: 知识库 ID
url: 要提取内容的网页 URL
chunk_size: 文本块大小
chunk_overlap: 文本块重叠大小
batch_size: 批处理大小
tasks_limit: 并发任务限制
max_retries: 最大重试次数
progress_callback: 进度回调函数
Returns:
KBDocument: 上传的文档对象
Raises:
ValueError: 如果知识库不存在或 URL 为空
IOError: 如果网络请求失败
"""
kb_helper = await self.get_kb(kb_id)
if not kb_helper:
raise ValueError(f"Knowledge base with id {kb_id} not found.")
return await kb_helper.upload_from_url(
url=url,
chunk_size=chunk_size,
chunk_overlap=chunk_overlap,
batch_size=batch_size,
tasks_limit=tasks_limit,
max_retries=max_retries,
progress_callback=progress_callback,
)
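A hedged sketch of consuming retrieve() once a manager is initialized (the knowledge base name and query handling are illustrative; assumes the knowledge bases already exist):

async def answer_with_kb(kb_manager, query: str):
    hit = await kb_manager.retrieve(query=query, kb_names=["product-docs"])
    if not hit:
        return None
    # hit["context_text"] is the block built by _format_context();
    # hit["results"] carries chunk_id / doc_name / score for each retrieved chunk.
    return hit["context_text"]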

View File

@@ -0,0 +1,120 @@
import uuid
from datetime import datetime, timezone
from sqlmodel import Field, MetaData, SQLModel, Text, UniqueConstraint
class BaseKBModel(SQLModel, table=False):
metadata = MetaData()
class KnowledgeBase(BaseKBModel, table=True):
"""知识库表
存储知识库的基本信息和统计数据。
"""
__tablename__ = "knowledge_bases" # type: ignore
id: int | None = Field(
primary_key=True,
sa_column_kwargs={"autoincrement": True},
default=None,
)
kb_id: str = Field(
max_length=36,
nullable=False,
unique=True,
default_factory=lambda: str(uuid.uuid4()),
index=True,
)
kb_name: str = Field(max_length=100, nullable=False)
description: str | None = Field(default=None, sa_type=Text)
emoji: str | None = Field(default="📚", max_length=10)
embedding_provider_id: str | None = Field(default=None, max_length=100)
rerank_provider_id: str | None = Field(default=None, max_length=100)
# 分块配置参数
chunk_size: int | None = Field(default=512, nullable=True)
chunk_overlap: int | None = Field(default=50, nullable=True)
# 检索配置参数
top_k_dense: int | None = Field(default=50, nullable=True)
top_k_sparse: int | None = Field(default=50, nullable=True)
top_m_final: int | None = Field(default=5, nullable=True)
created_at: datetime = Field(default_factory=lambda: datetime.now(timezone.utc))
updated_at: datetime = Field(
default_factory=lambda: datetime.now(timezone.utc),
sa_column_kwargs={"onupdate": datetime.now(timezone.utc)},
)
doc_count: int = Field(default=0, nullable=False)
chunk_count: int = Field(default=0, nullable=False)
__table_args__ = (
UniqueConstraint(
"kb_name",
name="uix_kb_name",
),
)
class KBDocument(BaseKBModel, table=True):
"""文档表
存储上传到知识库的文档元数据。
"""
__tablename__ = "kb_documents" # type: ignore
id: int | None = Field(
primary_key=True,
sa_column_kwargs={"autoincrement": True},
default=None,
)
doc_id: str = Field(
max_length=36,
nullable=False,
unique=True,
default_factory=lambda: str(uuid.uuid4()),
index=True,
)
kb_id: str = Field(max_length=36, nullable=False, index=True)
doc_name: str = Field(max_length=255, nullable=False)
file_type: str = Field(max_length=20, nullable=False)
file_size: int = Field(nullable=False)
file_path: str = Field(max_length=512, nullable=False)
chunk_count: int = Field(default=0, nullable=False)
media_count: int = Field(default=0, nullable=False)
created_at: datetime = Field(default_factory=lambda: datetime.now(timezone.utc))
updated_at: datetime = Field(
default_factory=lambda: datetime.now(timezone.utc),
sa_column_kwargs={"onupdate": datetime.now(timezone.utc)},
)
class KBMedia(BaseKBModel, table=True):
"""多媒体资源表
存储从文档中提取的图片、视频等多媒体资源。
"""
__tablename__ = "kb_media" # type: ignore
id: int | None = Field(
primary_key=True,
sa_column_kwargs={"autoincrement": True},
default=None,
)
media_id: str = Field(
max_length=36,
nullable=False,
unique=True,
default_factory=lambda: str(uuid.uuid4()),
index=True,
)
doc_id: str = Field(max_length=36, nullable=False, index=True)
kb_id: str = Field(max_length=36, nullable=False, index=True)
media_type: str = Field(max_length=20, nullable=False)
file_name: str = Field(max_length=255, nullable=False)
file_path: str = Field(max_length=512, nullable=False)
file_size: int = Field(nullable=False)
mime_type: str = Field(max_length=100, nullable=False)
created_at: datetime = Field(default_factory=lambda: datetime.now(timezone.utc))

View File

@@ -0,0 +1,13 @@
"""文档解析器模块"""
from .base import BaseParser, MediaItem, ParseResult
from .pdf_parser import PDFParser
from .text_parser import TextParser
__all__ = [
"BaseParser",
"MediaItem",
"PDFParser",
"ParseResult",
"TextParser",
]

View File

@@ -0,0 +1,51 @@
"""文档解析器基类和数据结构
定义了文档解析器的抽象接口和相关数据类。
"""
from abc import ABC, abstractmethod
from dataclasses import dataclass
@dataclass
class MediaItem:
"""多媒体项
表示从文档中提取的多媒体资源。
"""
media_type: str # image, video
file_name: str
content: bytes
mime_type: str
@dataclass
class ParseResult:
"""解析结果
包含解析后的文本内容和提取的多媒体资源。
"""
text: str
media: list[MediaItem]
class BaseParser(ABC):
"""文档解析器基类
所有文档解析器都应该继承此类并实现 parse 方法。
"""
@abstractmethod
async def parse(self, file_content: bytes, file_name: str) -> ParseResult:
"""解析文档
Args:
file_content: 文件内容
file_name: 文件名
Returns:
ParseResult: 解析结果
"""

View File

@@ -0,0 +1,26 @@
import io
import os
from markitdown_no_magika import MarkItDown, StreamInfo
from astrbot.core.knowledge_base.parsers.base import (
BaseParser,
ParseResult,
)
class MarkitdownParser(BaseParser):
"""解析 docx, xls, xlsx 格式"""
async def parse(self, file_content: bytes, file_name: str) -> ParseResult:
md = MarkItDown(enable_plugins=False)
bio = io.BytesIO(file_content)
stream_info = StreamInfo(
extension=os.path.splitext(file_name)[1].lower(),
filename=file_name,
)
result = md.convert(bio, stream_info=stream_info)
return ParseResult(
text=result.markdown,
media=[],
)

View File

@@ -0,0 +1,101 @@
"""PDF 文件解析器
支持解析 PDF 文件中的文本和图片资源。
"""
import io
from pypdf import PdfReader
from astrbot.core.knowledge_base.parsers.base import (
BaseParser,
MediaItem,
ParseResult,
)
class PDFParser(BaseParser):
"""PDF 文档解析器
提取 PDF 中的文本内容和嵌入的图片资源。
"""
async def parse(self, file_content: bytes, file_name: str) -> ParseResult:
"""解析 PDF 文件
Args:
file_content: 文件内容
file_name: 文件名
Returns:
ParseResult: 包含文本和图片的解析结果
"""
pdf_file = io.BytesIO(file_content)
reader = PdfReader(pdf_file)
text_parts = []
media_items = []
# 提取文本
for page in reader.pages:
text = page.extract_text()
if text:
text_parts.append(text)
# 提取图片
image_counter = 0
for page_num, page in enumerate(reader.pages):
try:
# 安全检查 Resources
if "/Resources" not in page:
continue
resources = page["/Resources"]
if not resources or "/XObject" not in resources: # type: ignore
continue
xobjects = resources["/XObject"].get_object() # type: ignore
if not xobjects:
continue
for obj_name in xobjects:
try:
obj = xobjects[obj_name]
if obj.get("/Subtype") != "/Image":
continue
# 提取图片数据
image_data = obj.get_data()
# 确定格式
filter_type = obj.get("/Filter", "")
if filter_type == "/DCTDecode":
ext = "jpg"
mime_type = "image/jpeg"
elif filter_type == "/FlateDecode":
ext = "png"
mime_type = "image/png"
else:
ext = "png"
mime_type = "image/png"
image_counter += 1
media_items.append(
MediaItem(
media_type="image",
file_name=f"page_{page_num}_img_{image_counter}.{ext}",
content=image_data,
mime_type=mime_type,
),
)
except Exception:
# 单个图片提取失败不影响整体
continue
except Exception:
# 页面处理失败不影响其他页面
continue
full_text = "\n\n".join(text_parts)
return ParseResult(text=full_text, media=media_items)
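A minimal usage sketch for the PDF parser above (the file path is illustrative):

import asyncio

async def main():
    with open("manual.pdf", "rb") as f:
        data = f.read()
    result = await PDFParser().parse(data, "manual.pdf")
    print(len(result.text), "characters,", len(result.media), "embedded images")

asyncio.run(main())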

View File

@@ -0,0 +1,42 @@
"""文本文件解析器
支持解析 TXT 和 Markdown 文件。
"""
from astrbot.core.knowledge_base.parsers.base import BaseParser, ParseResult
class TextParser(BaseParser):
"""TXT/MD 文本解析器
支持多种字符编码的自动检测。
"""
async def parse(self, file_content: bytes, file_name: str) -> ParseResult:
"""解析文本文件
尝试使用多种编码解析文件内容。
Args:
file_content: 文件内容
file_name: 文件名
Returns:
ParseResult: 解析结果,不包含多媒体资源
Raises:
ValueError: 如果无法解码文件
"""
# 尝试多种编码
for encoding in ["utf-8", "gbk", "gb2312", "gb18030"]:
try:
text = file_content.decode(encoding)
break
except UnicodeDecodeError:
continue
else:
raise ValueError(f"无法解码文件: {file_name}")
# 文本文件无多媒体资源
return ParseResult(text=text, media=[])

Some files were not shown because too many files have changed in this diff.