Compare commits

...

252 Commits

Author SHA1 Message Date
Soulter
3b77df0556 fix: fix the downloaded update archive not being extracted 2024-09-21 12:37:05 -04:00
Soulter
1fa11062de fix: /plugin u command error 2024-09-21 12:33:00 -04:00
Soulter
6883de0f1c feat: partially test http server api 2024-09-21 12:19:49 -04:00
Soulter
bdde0fe094 refactor: make all HTTP requests asynchronous; remove the baidu_aip and request dependencies 2024-09-21 11:36:02 -04:00
Soulter
ab22b8103e Merge pull request #208 from Soulter/fix-issue-207
fix: fix recursive validation not working when saving config from the dashboard
2024-09-21 22:42:16 +08:00
Soulter
641d5cd67b fix: fix recursive validation not working when saving config from the dashboard 2024-09-21 10:40:32 -04:00
Soulter
9fe941e457 fix(dashboard): fix model configuration not showing on the config page 2024-09-20 05:10:47 -04:00
Soulter
78060c9985 refactor: move plugins and temp folder to data/ 2024-09-20 04:41:44 -04:00
Soulter
5bd6af3400 Merge pull request #202 from Soulter/feat-middleware
Support plugins registering message middleware
2024-09-18 13:29:48 +08:00
Soulter
4ecd78d6a8 perf: remove error raise when command handler return an unexpected value 2024-09-17 04:49:49 -04:00
Soulter
7e9f54ed2c fix: change_password api 2024-09-17 03:33:18 -04:00
Soulter
7dd29c707f perf: improve the display of some config items 2024-09-15 10:28:23 -04:00
Soulter
a1489fb1f9 Merge pull request #203 from Soulter/feat-custom-t2i-tmpl
Custom text-to-image HTML template
2024-09-14 20:38:50 +08:00
Soulter
5f0f5398e8 fix: custom t2i 2024-09-14 08:21:34 -04:00
Soulter
e3b2396f32 feat: custom t2i tmpl 2024-09-14 19:59:30 +08:00
Soulter
6fd70ed26a fix: call middleware 2024-09-11 04:59:49 -04:00
Soulter
a93e6ff01a feat: middleware 2024-09-11 16:47:44 +08:00
Soulter
6db8c38c58 chore: remove agent function of helloworld plugin 2024-09-11 15:38:08 +08:00
Soulter
d3d3ff7970 Update .codecov.yml 2024-09-11 12:34:49 +08:00
Soulter
c5b2b30f79 Merge pull request #200 from Soulter/config-refactor
Update dashboard
2024-09-10 11:43:58 +00:00
Soulter
ac2144d65b chore(dashboard): update dashboard 2024-09-10 07:40:39 -04:00
Soulter
c620b4f919 Merge pull request #184 from Soulter/config-refactor
More readable config format; multi-instance support for platforms and LLMs
2024-09-10 11:01:42 +00:00
Soulter
292a3a43ba perf: improve coverage tests 2024-09-10 03:56:44 -04:00
Soulter
5fc4693b9c remove: .coverage 2024-09-10 01:57:51 -04:00
Soulter
6dfbaf1b88 bugfixes 2024-09-10 01:57:13 -04:00
Soulter
14c6e56287 Merge branch 'master' into config-refactor 2024-09-10 13:17:04 +08:00
Soulter
7e48514f67 Update README.md 2024-09-08 21:06:20 +08:00
Soulter
d8e70c4d7f perf: improve handling of llm tool return values 2024-09-08 08:41:26 -04:00
Soulter
fb52989d62 Merge pull request #199 from Soulter/dev
Decouple LLM tool-use registration and expose a plugin interface
2024-09-08 12:24:34 +00:00
Soulter
5b72ebaad5 delete: remove deprecated files 2024-09-08 08:23:43 -04:00
Soulter
98863ab901 feat: customized tool-use 2024-09-08 08:16:36 -04:00
Soulter
b5cb5eb969 feat: customized tool-use 2024-09-08 19:41:00 +08:00
Soulter
7f4f96f77b Merge branch 'master' into dev 2024-09-08 19:39:26 +08:00
Soulter
3b3f75f03e fix: increase timeout 2024-08-18 04:00:45 -04:00
Soulter
a5db4d4e47 fix: fix error when proactively sending a message containing a local image URL in abnormal cases 2024-08-18 03:55:11 -04:00
Soulter
d3b0f25cfe refactor: Update ProviderOpenAIOfficial to skip test message when TEST_MODE=on
This commit updates the `ProviderOpenAIOfficial` class to skip returning the test message when the environment variable `TEST_MODE` is set to "on". This change ensures that the test message is only returned when both `TEST_LLM` and `TEST_MODE` are set to "on".
2024-08-17 06:19:08 -04:00
Soulter
a9c6a68c5f Update README.md 2024-08-17 17:59:59 +08:00
Soulter
c27f172452 Merge pull request #190 from Soulter/feat-test
[Feature] Add automated tests
2024-08-17 17:56:43 +08:00
Soulter
2eeb5822c1 chore: add codecov.yml 2024-08-17 05:54:38 -04:00
Soulter
743046d48f chore: Create necessary directories for data and temp in coverage test workflow 2024-08-17 05:29:52 -04:00
Soulter
d3a5205bde refactor: Update coverage test workflow to properly create command configuration file 2024-08-17 05:27:33 -04:00
Soulter
ae6dd8929a refactor: Update coverage test workflow to create command configuration file properly 2024-08-17 05:25:45 -04:00
Soulter
dcf96896ef chore: Update coverage test workflow to install dependencies from requirements.txt 2024-08-17 05:10:05 -04:00
Soulter
67792100bb refactor: Fix command configuration file creation in coverage test workflow 2024-08-17 05:08:08 -04:00
Soulter
48c1263417 chore: add coverage test workflow 2024-08-17 05:02:34 -04:00
Soulter
12d37381fe perf: request the llm api only when TEST_LLM=on 2024-08-17 04:49:43 -04:00
Soulter
dcec3f5f84 feat: unit test
perf: func call improvement
2024-08-17 04:46:23 -04:00
Soulter
32e2a7830a feat: Add timeout parameter to QQOfficial bot client initialization 2024-08-17 03:20:08 -04:00
Soulter
6992249e53 refactor: Update image downloading method in ProviderOpenAIOfficial 2024-08-17 15:06:13 +08:00
Soulter
107214ac53 fix: Handle errors in AstrBotBootstrap gracefully 2024-08-17 15:01:55 +08:00
Soulter
8a58772911 perf: fill the missing metric record 2024-08-17 14:58:43 +08:00
Soulter
e21736b470 perf: remove message reply when rate limit occur 2024-08-17 14:54:11 +08:00
Soulter
e8679f8984 Create codeql.yml 2024-08-17 14:34:02 +08:00
Soulter
970fe02027 fix: fix the platform not being found when chatting via the official QQ bot API #189 2024-08-17 14:30:35 +08:00
Soulter
12216853c5 chore: issue and pr template 2024-08-17 11:20:36 +08:00
Soulter
33ec92258d Update config.py 2024-08-13 15:05:16 +08:00
Soulter
a578edf137 fix: metrics
perf: aiocqhttp image url
2024-08-12 02:50:31 -04:00
Soulter
f8949ebead perf: reboot after installing plugin 2024-08-11 23:24:37 -04:00
Soulter
141c91301f perf: Improve sleep time handling in QQOfficial and ProviderOpenAIOfficial 2024-08-11 23:24:37 -04:00
Soulter
8d95e67b5a Update README.md 2024-08-11 17:13:49 +08:00
Soulter
0633e7f25f perf: improve the effects of local function-calling 2024-08-11 03:55:31 -04:00
Soulter
266da0a9d8 fix: fix port occupation caused by aiocqhttp not exiting properly on restart 2024-08-11 02:30:49 -04:00
Soulter
121c40f273 perf: raise an error on bad request 2024-08-11 01:49:33 -04:00
Soulter
a876efb95f fix: wrong destination path when overwriting files after an update 2024-08-10 04:35:07 -04:00
Soulter
95a8cc9498 fix: fix errors caused by some fields not being updated 2024-08-10 04:13:24 -04:00
Soulter
f02731055e fix: fix a possible logic conflict after plugins enable ignoring the command prefix 2024-08-10 03:25:50 -04:00
Soulter
1df83addfc update: add gcc 2024-08-10 14:59:00 +08:00
Soulter
9db43ac5e6 feat: registered commands can ignore the command prefix; quick proactive replies 2024-08-10 02:35:54 -04:00
Soulter
0f470cf96f Update README.md 2024-08-09 12:26:00 +08:00
Soulter
da3fcb7b86 Merge pull request #186 from itgpt-com/master
Optimize docker build
2024-08-08 22:15:48 +08:00
Soulter
73dd4703b9 Update .dockerignore 2024-08-08 22:15:05 +08:00
itgpt
0c679a0151 Add .dockerignore to filter files unnecessary for docker cp and shrink the image 2024-08-08 16:21:30 +08:00
itgpt
1d6ea2dbe6 Expose ports 2024-08-08 16:16:55 +08:00
itgpt
933df57654 Optimize docker build 2024-08-08 15:53:44 +08:00
Soulter
a7c87642b4 refactor: Update configuration format and handling 2024-08-06 23:21:18 -04:00
Soulter
cbe761fc33 Update README.md 2024-08-07 00:49:00 +08:00
Soulter
f8aef78d25 feat: restructure the configuration format
perf: improve config processing and presentation
2024-08-06 04:58:29 -04:00
Soulter
14dbdb2d83 feat: plugins support regex matching 2024-08-05 12:12:00 -04:00
Soulter
abda226d63 Merge pull request #183 from irorange27/master
fix: fix logo syntax warning
2024-08-05 23:37:57 +08:00
niina
a2dc6f0a49 fix: fix logo syntax warning 2024-08-05 22:53:45 +08:00
Soulter
7a94c26333 fix: fix wake words not triggering commands 2024-08-05 05:02:57 -04:00
Soulter
9b1ffb384b perf: improve exception handling in the aiocqhttp adapter 2024-08-05 04:46:12 -04:00
Soulter
9566bfe122 workaround for issue #181 2024-08-03 17:03:38 +08:00
Soulter
89ff103bda chore: Add mimetypes workaround for issue #188 2024-08-03 17:02:45 +08:00
Soulter
6c788db53a Merge remote-tracking branch 'refs/remotes/origin/master' 2024-08-03 16:17:25 +08:00
Soulter
344b5fa419 fix: f-string error 2024-08-03 16:17:04 +08:00
Soulter
c6d161b837 Update README.md 2024-08-03 15:04:20 +08:00
Soulter
2065ba0c60 Update README.md 2024-08-03 01:05:27 +08:00
Soulter
a481fd1a3e fix: Strip leading and trailing whitespace from llm_wake_prefix 2024-08-02 23:17:35 +08:00
Soulter
c50bcdbdb9 fix: Register command only if plugin is found 2024-08-02 22:48:04 +08:00
Soulter
36a2a7632c fix: optimize config reading during initialization and message handling to reduce performance overhead 2024-07-31 23:38:31 +08:00
Soulter
e77b7014e6 fix: fix errors when updating or uninstalling plugins 2024-07-30 09:15:45 +08:00
Soulter
d57fd0f827 fix: metadata is not serializable 2024-07-29 09:47:42 +08:00
Soulter
6a83d2a62a update version 2024-07-28 12:11:07 +08:00
Soulter
2d29726c18 fix: fix restart failure caused by paths containing spaces 2024-07-28 11:55:57 +08:00
Soulter
b241b0f954 update version 2024-07-27 12:31:15 -04:00
Soulter
171dd1dc02 feat: official QQ bot API supports C2C messages 2024-07-27 12:30:09 -04:00
Soulter
af62d969d7 perf: change the send_msg interface 2024-07-27 11:26:02 -04:00
Soulter
c4fd9a66c6 update version to 3.3.3 2024-07-27 11:08:51 -04:00
Soulter
d191997a39 feat: aiocqhttp adapter supports the proactive message sending interface 2024-07-27 11:07:26 -04:00
Soulter
853ac4c104 fix: improve update prompts 2024-07-27 04:58:15 -04:00
Soulter
ed053acad6 update: version 2024-07-27 04:47:57 -04:00
Soulter
f147634e51 fix: fix update error 2024-07-27 04:43:53 -04:00
Soulter
e3b2a68341 Merge pull request #179 from Soulter/refactor-v3.3.0
feat: add a Provider registration interface; add the provider command
2024-07-27 16:31:03 +08:00
Soulter
84c450aef9 feat: add a Provider registration interface; add the provider command 2024-07-27 04:25:27 -04:00
Soulter
f52a0eb43a fix: fix default config migration 2024-07-27 08:58:26 +08:00
Soulter
6ed7559518 Merge pull request #174 from Soulter/refactor-v3.3.0
Rewrite the project to improve stability
2024-07-26 18:24:33 +08:00
Soulter
d977dbe9a7 update version 2024-07-26 06:24:11 -04:00
Soulter
17fc761c61 chore: update default plugin 2024-07-26 05:15:41 -04:00
Soulter
af878f2ed3 fix: fix ctrl+c being unable to exit the bot while aiocqhttp is running
perf: support registering tasks through the context
2024-07-26 05:02:29 -04:00
Soulter
bb2164c324 perf: add message_handler to the context 2024-07-25 12:58:45 -04:00
Soulter
0496becc50 perf: improve the stability of sending images via aiocqhttp 2024-07-25 12:33:31 -04:00
Soulter
618f8aa7d2 fix: fix bugs in some commands 2024-07-25 10:44:17 -04:00
Soulter
c57f711c48 update: metrics refactoring 2024-07-24 09:48:25 -04:00
Soulter
4edd11f2f7 fix: fix some bugs 2024-07-24 09:19:43 -04:00
Soulter
a2cf058951 update: refactor codes 2024-07-24 18:40:08 +08:00
Soulter
d52eb10ddd chore: remove large font files to shrink the source code size 2024-07-07 21:37:44 +08:00
Soulter
4b6dae71fc update: update the default helloworld plugin 2024-07-07 21:00:18 +08:00
Soulter
ddad30c22e feat: support uploading plugins locally 2024-07-07 20:59:12 +08:00
Soulter
77067c545c feat: update via compressed archive files 2024-07-07 18:26:58 +08:00
Soulter
465d283cad Update README.md 2024-06-23 11:23:17 +08:00
Soulter
05071144fb fix: fix text-to-image issues 2024-06-09 08:56:52 -04:00
Soulter
a4e7904953 chore: clean codes 2024-06-03 20:40:18 -04:00
Soulter
986a8c7554 Update README.md 2024-06-03 21:18:53 +08:00
Soulter
9272843b77 Update README.md 2024-06-03 21:18:00 +08:00
Soulter
542d4bc703 typo: fix t2i typo 2024-06-03 08:47:51 -04:00
Soulter
e3640fdac9 perf: improve the output of the update, help and other commands 2024-06-03 08:33:17 -04:00
Soulter
f64ab4b190 chore: remove some deprecated methods 2024-06-03 05:54:40 -04:00
Soulter
bd571e1577 feat: provide a new text-to-image style 2024-06-03 05:51:44 -04:00
Soulter
e4a5cbd893 perf: improve stability when loading plugins 2024-06-03 00:20:56 -04:00
Soulter
7a9fd7fd1e fix: fix "config file not found" error 2024-06-02 23:14:48 -04:00
Soulter
d9b60108db Update README.md 2024-05-30 18:11:57 +08:00
Soulter
8455c8b4ed Update README.md 2024-05-30 18:03:59 +08:00
Soulter
5c2e7099fc Update README.md 2024-05-26 21:38:32 +08:00
Soulter
1fd1d55895 Update config.py 2024-05-26 21:31:26 +08:00
Soulter
5ce4137e75 fix: fix the model command 2024-05-26 21:15:33 +08:00
Soulter
d49179541e feat: pass ctx into plugins' init method 2024-05-26 21:10:19 +08:00
Soulter
676f258981 perf: terminate child processes after restart 2024-05-26 21:09:23 +08:00
Soulter
fa44749240 fix: fix the Windows launcher failing to install dependencies due to relative paths 2024-05-26 18:15:25 +08:00
Soulter
6c856f9da2 fix(typo): fix a typo in the plugin registrar that prevented message platform plugins from registering 2024-05-26 18:07:07 +08:00
Soulter
e8773cea7f fix: fix the config file not being migrated properly 2024-05-25 20:59:37 +08:00
Soulter
4d36ffcb08 fix: improve handling of plugin results 2024-05-25 18:46:38 +08:00
Soulter
c653e492c4 Merge pull request #164 from Soulter/stat-upload-perf
/models command improvements
2024-05-25 18:35:56 +08:00
Soulter
f08de1f404 perf: add the models command to the help text 2024-05-25 18:34:08 +08:00
Soulter
1218691b61 perf: relax restrictions on the model command to allow custom model names; persist the chosen model 2024-05-25 18:29:01 +08:00
Soulter
61fc27ff79 Merge pull request #163 from Soulter/stat-upload-perf
Improve the statistics data structure
2024-05-25 18:28:08 +08:00
Soulter
123ee24f7e fix: stat perf 2024-05-25 18:01:16 +08:00
Soulter
52c9045a28 feat: improve the statistics data structure 2024-05-25 17:47:41 +08:00
Soulter
f00f1e8933 fix: image generation error 2024-05-24 13:33:02 +08:00
Soulter
8da4433e57 chore: change related fields 2024-05-21 08:44:05 +08:00
Soulter
7babb87934 perf: change library loading order 2024-05-21 08:41:46 +08:00
Soulter
f67b171385 perf: move the database into the data directory 2024-05-19 17:10:11 +08:00
Soulter
1780d1355d perf: switch all internal pip usage to the Aliyun mirror; improve plugin dependency update logic 2024-05-19 16:45:08 +08:00
Soulter
5a3390e4f3 fix: force update 2024-05-19 16:06:47 +08:00
Soulter
337d96b41d Merge pull request #160 from Soulter/dev_default_openai_refactor
Improve the built-in OpenAI LLM interaction, personas, and web search
2024-05-19 15:23:19 +08:00
Soulter
38a1dfea98 fix: web content scraper add proxy 2024-05-19 15:08:22 +08:00
Soulter
fbef73aeec fix: websearch encoding set to utf-8 2024-05-19 14:42:28 +08:00
Soulter
d6214c2b7c fix: web search 2024-05-19 12:55:54 +08:00
Soulter
d58c86f6fc perf: websearch improvements; project structure adjustments 2024-05-19 12:46:07 +08:00
Soulter
ea34c20198 perf: improve persona and LVM handling 2024-05-18 10:34:35 +08:00
Soulter
934ca94e62 refactor: rewrite the OpenAI LLM module 2024-05-17 22:56:44 +08:00
Soulter
1775327c2e chore: refactor openai official 2024-05-17 09:07:11 +08:00
Soulter
707fcad8b4 feat: models command to list GPT models 2024-05-17 00:06:49 +08:00
Soulter
f143c5afc6 fix: fix error in the plugin v subcommand 2024-05-16 23:11:07 +08:00
Soulter
99f94b2611 fix: fix some commands not being callable 2024-05-16 23:04:47 +08:00
Soulter
e39c1f9116 remove: remove automatic switching to multimodal models 2024-05-16 22:46:50 +08:00
Soulter
235e0b9b8f fix: gocq logging 2024-05-09 13:24:31 +08:00
Soulter
d5a9bed8a4 fix(updator): IterableList object has no
attribute origin
2024-05-08 19:18:21 +08:00
Soulter
d7dc8a7612 chore: add some logging; bump version 2024-05-08 19:12:23 +08:00
Soulter
08cd3ca40c perf: better log output;
fix: fix 404 when refreshing the dashboard
2024-05-08 19:01:36 +08:00
Soulter
a13562dcea fix: fix "config file missing" prompt when the launcher loads plugins that have configs 2024-05-08 16:28:30 +08:00
Soulter
d7a0c0d1d0 Update requirements.txt 2024-05-07 15:58:51 +08:00
Soulter
c0729b2d29 fix: fix plugin reload issues 2024-04-22 19:04:15 +08:00
Soulter
a80f474290 fix: fix error when updating plugins 2024-04-22 18:36:56 +08:00
Soulter
699207dd54 update: version 2024-04-21 22:41:48 +08:00
Soulter
e7708010c9 fix: fix being unable to reply to messages on the gocq platform 2024-04-21 22:39:09 +08:00
Soulter
f66091e08f 🎨: clean codes 2024-04-21 22:20:23 +08:00
Soulter
03bb932f8f fix: fix dashboard error 2024-04-21 22:16:42 +08:00
Soulter
fbf8b349e0 update: helloworld 2024-04-21 22:13:27 +08:00
Soulter
e9278fce6a !! delete: remove all support for reverse-engineered ChatGPT. 2024-04-21 22:12:09 +08:00
Soulter
9a7db956d5 fix: fix error caused by the readibility dependency on Python 3.10.x 2024-04-21 16:40:02 +08:00
Soulter
13196dd667 perf: change package paths 2024-03-15 14:49:44 +08:00
Soulter
52b80e24d2 Merge remote-tracking branch 'refs/remotes/origin/master' 2024-03-15 14:29:48 +08:00
Soulter
7dff87e65d fix: fix being unable to update to a specified version 2024-03-15 14:29:28 +08:00
Soulter
31ee64d1b2 Update docker-image.yml 2024-03-15 14:11:57 +08:00
Soulter
8e865b6918 fix: fix update not working when no LLM is configured 2024-03-15 14:05:16 +08:00
Soulter
66f91e5832 update: bump version number 2024-03-15 13:50:57 +08:00
Soulter
cd2d368f9c fix: fix updating to a specified version from the dashboard not working 2024-03-15 13:48:14 +08:00
Soulter
7736c1c9bd feat: official QQ bot API can optionally receive QQ group messages 2024-03-15 13:44:18 +08:00
Soulter
6728c0b7b5 chore: rename packages 2024-03-15 13:37:51 +08:00
Soulter
344f92e0e7 perf: unify internal base message objects as AstrBotMessage
feat: support QQ group messages via the official QQ API
2024-03-14 13:56:32 +08:00
Soulter
fdabfef6a7 update: version 2024-03-13 21:28:18 +08:00
Soulter
6c5718f134 fix: fix image generation error 2024-03-13 21:27:48 +08:00
Soulter
edfde51434 fix: fix "instance of platform qqchan not found" error on the guild platform 2024-03-13 19:53:36 +08:00
Soulter
3fc1347bba fix: plugin register management 2024-03-12 20:00:02 +08:00
Soulter
e643eea365 perf: structured plugin representation format; improve the plugin development interface 2024-03-12 18:50:50 +08:00
Soulter
1af481f5f9 fix: function call with newer version 2024-03-07 17:35:21 +08:00
Soulter
317d1c4c41 fix: onebot protocol connection error 2024-03-05 14:03:46 +08:00
Soulter
a703860512 fix: plugin call 2024-03-05 13:52:44 +08:00
Soulter
1cd1c8ea0d feat: async rewrite
perf: improve web search answer formatting
2024-03-03 18:54:50 +08:00
Soulter
53ef3bbf4f fix: fix detection still failing after changing the cqhttp port 2024-02-19 19:04:40 +08:00
Soulter
ab7b8aad7c chore: delete llms 2024-02-12 23:28:12 +08:00
Soulter
c49213282b Merge remote-tracking branch 'refs/remotes/origin/master' 2024-02-12 23:18:11 +08:00
Soulter
3c87fc5b31 perf: clean codes; move the keyword feature into the helloworld plugin 2024-02-12 23:17:55 +08:00
Soulter
9684508e1d Update README.md 2024-02-11 13:47:09 +08:00
Soulter
bb0edae200 Update README.md 2024-02-08 00:40:48 +08:00
Soulter
acb68a4a1e chore: update version identifier 2024-02-08 00:31:08 +08:00
Soulter
46dd6f3243 fix: 1. fix the dashboard failing to save config; fix the help command failing to generate images
feat: support more standard plugin interfaces
2024-02-08 00:29:37 +08:00
Soulter
ecab072890 chore: bump version; clean up some unused variables 2024-02-07 17:41:10 +08:00
Soulter
148534d3c2 Merge remote-tracking branch 'refs/remotes/origin/master' 2024-02-07 16:45:11 +08:00
Soulter
1278f16973 feat: dashboard fully supports plugin configuration 2024-02-07 16:44:38 +08:00
Soulter
7d9b3c6c5c Update docker-image.yml 2024-02-07 13:05:52 +08:00
Soulter
83dcb5165c perf: improve the dashboard config display;
feat: add a configuration interface for plugins
2024-02-07 12:19:52 +08:00
Soulter
30862bb82f perf: improve update speed and the update flow 2024-02-06 19:18:53 +08:00
Soulter
6c0bda8feb Update README.md 2024-02-06 18:30:56 +08:00
Soulter
e14dece206 perf: improve the project update logic 2024-02-06 17:45:02 +08:00
Soulter
680593d636 fix: fix the web command prefix not working 2024-02-06 15:42:29 +08:00
Soulter
144440214f fix: fix image generation error 2024-02-06 12:56:41 +08:00
Soulter
6667b58a3f fix: fix some issues in the container environment 2024-02-06 12:48:57 +08:00
Soulter
b55d9533be chore: clean up some unused code 2024-02-05 23:46:46 +08:00
Soulter
3484fc60e6 fix: dashboard fake dead 2024-02-05 14:51:19 +08:00
Soulter
eac0265522 fix: fix error in guild direct messages with unique session mode 2024-02-05 14:45:32 +08:00
Soulter
ac74431633 fix: fix the console not displaying properly when accessing the dashboard remotely 2024-02-05 14:12:38 +08:00
Soulter
4c098200be fix: fix ws server error in the docker environment 2024-02-05 13:54:45 +08:00
Soulter
2cf18972f3 fix: fix error when saving config from the dashboard; fix error in guild direct messages
perf: improve logging
2024-02-05 13:18:34 +08:00
Soulter
d522d2a6a9 Merge remote-tracking branch 'refs/remotes/origin/master' 2024-02-04 21:29:58 +08:00
Soulter
7079ce096f feat: dashboard supports log display
chore: reduce some log output
2024-02-04 21:28:03 +08:00
Soulter
5e8c5067b1 Update README.md 2024-01-16 00:32:07 +08:00
Soulter
570ff4e8b6 perf: improve Bing web search 2024-01-10 16:48:46 +08:00
Soulter
e2f1362a1f fix: fix the myid command not working on the gocq platform 2024-01-09 22:25:52 +08:00
Soulter
3519e38211 perf: remove the default prompt 2024-01-07 14:51:43 +08:00
Soulter
08734250f7 feat: support text-to-image on guild platforms 2024-01-05 18:59:02 +08:00
Soulter
e8407f6449 feat: add reverse-engineered language model service configs to the dashboard 2024-01-05 17:13:52 +08:00
Soulter
04f3400f83 perf: improve the plugin collection process 2024-01-03 20:19:31 +08:00
Soulter
89c8b3e7fc fix: fix error when @-mentioning the bot in the gocq environment 2024-01-03 16:32:08 +08:00
Soulter
66294100ec fix: typo fix 2024-01-03 16:26:58 +08:00
Soulter
8ed8a23c8b fix: fix some message response issues in the gocq environment 2024-01-03 16:15:37 +08:00
Soulter
449b0b03b5 fix: fix nick_qq error 2024-01-03 16:00:51 +08:00
Soulter
d93754bf1d Update cmd_config.py 2024-01-03 15:46:11 +08:00
Soulter
a007a61ecc Update docker-image.yml 2024-01-02 16:44:58 +08:00
Soulter
e481377317 fix: fix some update issues 2024-01-01 12:46:22 +08:00
Soulter
4c5831c7b4 remove: delete the simhei font asset 2024-01-01 12:03:21 +08:00
Soulter
fc54b5237f feat: support setting an llm wake word 2024-01-01 11:48:55 +08:00
Soulter
f8f42678d1 fix: fix message send() not working properly 2024-01-01 11:34:56 +08:00
Soulter
38b1f4128c Merge pull request #145 from Soulter/dev_platform_refact
Refactor parts of the code related to message platforms
2024-01-01 11:07:13 +08:00
Soulter
04fb4f88ad feat: refactor code 2023-12-30 20:08:28 +08:00
Soulter
4675f5df08 Create stale.yml 2023-12-28 14:12:25 +08:00
Soulter
34ee358d40 Update README.md 2023-12-28 14:01:53 +08:00
Soulter
c4cfd1a3e2 Update README.md 2023-12-28 13:18:47 +08:00
Soulter
f5857aaa0c Merge branch 'master' into dev 2023-12-02 16:26:25 +08:00
Soulter
f4222e0923 bugfixes 2023-11-21 22:37:35 +08:00
Soulter
f0caea9026 feat: initial message compatibility layer for OneBot and NoneBot and preliminary plugin adaptation 2023-11-21 14:23:47 +08:00
156 changed files with 7499 additions and 5961 deletions

.codecov.yml (new file)

@@ -0,0 +1,3 @@
comment:
layout: "condensed_header, condensed_files, condensed_footer"
hide_project_coverage: TRUE

.coveragerc (new file)

@@ -0,0 +1,5 @@
[run]
omit =
*/site-packages/*
*/dist-packages/*
your_package_name/tests/*

.dockerignore (new file)

@@ -0,0 +1,18 @@
# Covers JetBrains IDEs: IntelliJ, RubyMine, PhpStorm, AppCode, PyCharm, CLion, Android Studio and WebStorm
# Reference: https://intellij-support.jetbrains.com/hc/en-us/articles/206544839
# github acions
.github/
.*ignore
.git/
# User-specific stuff
.idea/
# Byte-compiled / optimized / DLL files
__pycache__/
# Environments
.env
.venv
env/
venv*/
ENV/
.conda/
README*.md

.github/ISSUE_TEMPLATE/bug-report.yml (new file)

@@ -0,0 +1,82 @@
name: '🐛 报告 Bug'
title: '[Bug]'
description: 提交报告帮助我们改进。
labels: [ 'bug' ]
body:
- type: markdown
attributes:
value: |
感谢您抽出时间报告问题!请准确解释您的问题。如果可能,请提供一个可复现的片段(这有助于更快地解决问题)。
- type: textarea
attributes:
label: 发生了什么
description: 描述你遇到的异常
placeholder: >
一个清晰且具体的描述这个异常是什么。
validations:
required: true
- type: textarea
attributes:
label: 如何复现?
description: >
复现该问题的步骤
placeholder: >
如: 1. 打开 '...'
validations:
required: true
- type: textarea
attributes:
label: AstrBot 版本与部署方式
description: >
请提供您的 AstrBot 版本和部署方式。
placeholder: >
如: 3.1.8 Docker, 3.1.7 Windows启动器
validations:
required: true
- type: dropdown
attributes:
label: 操作系统
description: |
你在哪个操作系统上遇到了这个问题?
multiple: false
options:
- 'Windows'
- 'macOS'
- 'Linux'
- 'Other'
- 'Not sure'
validations:
required: true
- type: textarea
attributes:
label: 额外信息
description: >
任何额外信息,如报错日志、截图等。
placeholder: >
请提供完整的报错日志或截图。
validations:
required: true
- type: checkboxes
attributes:
label: 你愿意提交 PR 吗?
description: >
这绝对不是必需的,但我们很乐意在贡献过程中为您提供指导特别是如果你已经很好地理解了如何实现修复。
options:
- label: 是的,我愿意提交 PR!
- type: checkboxes
attributes:
label: Code of Conduct
options:
- label: >
我已阅读并同意遵守该项目的 [行为准则](https://docs.github.com/zh/site-policy/github-terms/github-community-code-of-conduct)。
required: true
- type: markdown
attributes:
value: "感谢您填写我们的表单!"


@@ -0,0 +1,42 @@
name: '🎉 功能建议'
title: "[Feature]"
description: 提交建议帮助我们改进。
labels: [ "enhancement" ]
body:
- type: markdown
attributes:
value: |
感谢您抽出时间提出新功能建议,请准确解释您的想法。
- type: textarea
attributes:
label: 描述
description: 简短描述您的功能建议。
- type: textarea
attributes:
label: 使用场景
description: 你想要发生什么?
placeholder: >
一个清晰且具体的描述这个功能的使用场景。
- type: checkboxes
attributes:
label: 你愿意提交PR吗?
description: >
这不是必须的,但我们欢迎您的贡献。
options:
- label: 是的, 我愿意提交PR!
- type: checkboxes
attributes:
label: Code of Conduct
options:
- label: >
我已阅读并同意遵守该项目的 [行为准则](https://docs.github.com/zh/site-policy/github-terms/github-community-code-of-conduct)。
required: true
- type: markdown
attributes:
value: "感谢您填写我们的表单!"

.github/PULL_REQUEST_TEMPLATE.md (new file)

@@ -0,0 +1,10 @@
<!-- 如果有的话,指定这个 PR 要解决的 ISSUE -->
修复了 #XYZ
### Motivation
<!--解释为什么要改动-->
### Modifications
<!--简单解释你的改动-->

.github/workflows/codeql.yml (new file)

@@ -0,0 +1,93 @@
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: "CodeQL"
on:
push:
branches: [ "master" ]
pull_request:
branches: [ "master" ]
schedule:
- cron: '21 15 * * 5'
jobs:
analyze:
name: Analyze (${{ matrix.language }})
# Runner size impacts CodeQL analysis time. To learn more, please see:
# - https://gh.io/recommended-hardware-resources-for-running-codeql
# - https://gh.io/supported-runners-and-hardware-resources
# - https://gh.io/using-larger-runners (GitHub.com only)
# Consider using larger runners or machines with greater resources for possible analysis time improvements.
runs-on: ${{ (matrix.language == 'swift' && 'macos-latest') || 'ubuntu-latest' }}
timeout-minutes: ${{ (matrix.language == 'swift' && 120) || 360 }}
permissions:
# required for all workflows
security-events: write
# required to fetch internal or private CodeQL packs
packages: read
# only required for workflows in private repositories
actions: read
contents: read
strategy:
fail-fast: false
matrix:
include:
- language: python
build-mode: none
# CodeQL supports the following values keywords for 'language': 'c-cpp', 'csharp', 'go', 'java-kotlin', 'javascript-typescript', 'python', 'ruby', 'swift'
# Use `c-cpp` to analyze code written in C, C++ or both
# Use 'java-kotlin' to analyze code written in Java, Kotlin or both
# Use 'javascript-typescript' to analyze code written in JavaScript, TypeScript or both
# To learn more about changing the languages that are analyzed or customizing the build mode for your analysis,
# see https://docs.github.com/en/code-security/code-scanning/creating-an-advanced-setup-for-code-scanning/customizing-your-advanced-setup-for-code-scanning.
# If you are analyzing a compiled language, you can modify the 'build-mode' for that language to customize how
# your codebase is analyzed, see https://docs.github.com/en/code-security/code-scanning/creating-an-advanced-setup-for-code-scanning/codeql-code-scanning-for-compiled-languages
steps:
- name: Checkout repository
uses: actions/checkout@v4
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@v3
with:
languages: ${{ matrix.language }}
build-mode: ${{ matrix.build-mode }}
# If you wish to specify custom queries, you can do so here or in a config file.
# By default, queries listed here will override any specified in a config file.
# Prefix the list here with "+" to use these queries and those in the config file.
# For more details on CodeQL's query packs, refer to: https://docs.github.com/en/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/configuring-code-scanning#using-queries-in-ql-packs
# queries: security-extended,security-and-quality
# If the analyze step fails for one of the languages you are analyzing with
# "We were unable to automatically build your code", modify the matrix above
# to set the build mode to "manual" for that language. Then modify this step
# to build your code.
# Command-line programs to run using the OS shell.
# 📚 See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsrun
- if: matrix.build-mode == 'manual'
shell: bash
run: |
echo 'If you are using a "manual" build mode for one or more of the' \
'languages you are analyzing, replace this with the commands to build' \
'your code, for example:'
echo ' make bootstrap'
echo ' make release'
exit 1
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v3
with:
category: "/language:${{matrix.language}}"

.github/workflows/coverage_test.yml (new file)

@@ -0,0 +1,39 @@
name: Run tests and upload coverage
on:
push
jobs:
test:
name: Run tests and collect coverage
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Set up Python
uses: actions/setup-python@v4
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.txt
pip install pytest pytest-cov pytest-asyncio
mkdir data
mkdir data/plugins
mkdir data/config
mkdir temp
- name: Run tests
run: |
export LLM_MODEL=${{ secrets.LLM_MODEL }}
export OPENAI_API_BASE=${{ secrets.OPENAI_API_BASE }}
export OPENAI_API_KEY=${{ secrets.OPENAI_API_KEY }}
PYTHONPATH=./ pytest --cov=. tests/ -v
- name: Upload results to Codecov
uses: codecov/codecov-action@v4
with:
token: ${{ secrets.CODECOV_TOKEN }}

.github/workflows/docker-image.yml

@@ -1,25 +1,42 @@
name: Docker Image CI/CD
on:
push:
branches:
- master
- dev_dashboard
paths-ignore:
- '**/*.md'
release:
types: [published]
workflow_dispatch:
jobs:
publish-latest-docker-image:
publish-docker:
runs-on: ubuntu-latest
name: Build and publish docker image
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Build image
run: |
docker build -t ${{ secrets.DOCKER_HUB_USERNAME }}/astrbot:v1 .
- name: Publish image
run: |
docker login -u ${{ secrets.DOCKER_HUB_USERNAME }} -p ${{ secrets.DOCKER_HUB_PASSWORD }}
docker push ${{ secrets.DOCKER_HUB_USERNAME }}/astrbot:v1
- name: 拉取源码
uses: actions/checkout@v3
with:
fetch-depth: 1
- name: 设置 QEMU
uses: docker/setup-qemu-action@v3
- name: 设置 Docker Buildx
uses: docker/setup-buildx-action@v3
- name: 登录到 DockerHub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKER_HUB_USERNAME }}
password: ${{ secrets.DOCKER_HUB_PASSWORD }}
- name: 构建和推送 Docker hub
uses: docker/build-push-action@v6
with:
context: .
platforms: linux/amd64,linux/arm64
push: true
tags: |
${{ secrets.DOCKER_HUB_USERNAME }}/astrbot:latest
${{ secrets.DOCKER_HUB_USERNAME }}/astrbot:${{ github.event.release.tag_name }}
- name: Post build notifications
run: echo "Docker image has been built and pushed successfully"

.github/workflows/stale.yml (new file)

@@ -0,0 +1,27 @@
# This workflow warns and then closes issues and PRs that have had no activity for a specified amount of time.
#
# You can adjust the behavior by modifying this file.
# For more information, see:
# https://github.com/actions/stale
name: Mark stale issues and pull requests
on:
schedule:
- cron: '21 23 * * *'
jobs:
stale:
runs-on: ubuntu-latest
permissions:
issues: write
pull-requests: write
steps:
- uses: actions/stale@v5
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
stale-issue-message: 'Stale issue message'
stale-pr-message: 'Stale pull request message'
stale-issue-label: 'no-issue-activity'
stale-pr-label: 'no-pr-activity'

.gitignore

@@ -6,3 +6,9 @@ configs/session
configs/config.yaml
**/.DS_Store
temp
cmd_config.json
data/*
cookies.json
logs/
addons/plugins
.coverage

Dockerfile

@@ -1,8 +1,20 @@
FROM python:3.10.13-bullseye
FROM python:3.10-slim
WORKDIR /AstrBot
COPY . /AstrBot/
RUN apt-get update && apt-get install -y --no-install-recommends \
gcc \
build-essential \
python3-dev \
libffi-dev \
libssl-dev \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
RUN python -m pip install -r requirements.txt
EXPOSE 6185
EXPOSE 6186
CMD [ "python", "main.py" ]

README.md

@@ -1,185 +1,67 @@
<p align="center">
<img src="https://github.com/Soulter/AstrBot/assets/37870767/b1686114-f3aa-4963-b07f-28bf83dc0a10" alt="QQChannelChatGPT" width="200" />
<img width="750" alt="image" src="https://github.com/Soulter/AstrBot/assets/37870767/c6f057d9-46d7-4144-8116-00a962941746">
</p>
<div align="center">
# AstrBot
*✨ 2024 - 希望成为一个跨平台、极易上手、稳定安全的机器人项目。✨*
[![GitHub release (latest by date)](https://img.shields.io/github/v/release/Soulter/AstrBot)](https://github.com/Soulter/AstrBot/releases/latest)
<img src="https://wakatime.com/badge/user/915e5316-99c6-4563-a483-ef186cf000c9/project/34412545-2e37-400f-bedc-42348713ac1f.svg" alt="wakatime">
<img src="https://img.shields.io/badge/python-3.9+-blue.svg" alt="python">
<a href="https://hub.docker.com/r/soulter/astrbot"><img alt="Docker pull" src="https://img.shields.io/docker/pulls/soulter/astrbot.svg"/></a>
[![codecov](https://codecov.io/gh/Soulter/AstrBot/graph/badge.svg?token=FF3P5967B8)](https://codecov.io/gh/Soulter/AstrBot)
<a href="https://qm.qq.com/cgi-bin/qm/qr?k=EYGsuUTfe00_iOu9JTXS7_TEpMkXOvwv&jump_from=webapi&authKey=uUEMKCROfsseS+8IzqPjzV3y1tzy4AkykwTib2jNkOFdzezF9s9XknqnIaf3CDft">
<img alt="Static Badge" src="https://img.shields.io/badge/QQ群-322154837-purple">
</a>
<img alt="Static Badge" src="https://img.shields.io/badge/频道-x42d56aki2-purple">
<a href="https://astrbot.soulter.top/center">项目主页(开发中)</a>
<a href="https://github.com/Soulter/QQChannelChatGPT/wiki">部署文档</a>
<a href="https://github.com/Soulter/QQChannelChatGPT/issues">问题提交</a>
<a href="https://astrbot.soulter.top/center/docs/%E5%BC%80%E5%8F%91/%E6%8F%92%E4%BB%B6%E5%BC%80%E5%8F%91">插件开发(最少只需 25 行,真不难!)</a>
<a href="https://astrbot.soulter.top/docs/main">快速开始</a>
<a href="https://github.com/Soulter/AstrBot/issues">问题提交</a>
<a href="https://astrbot.soulter.top/docs/develop/plugin4p">插件开发</a>
</div>
## 🛠️ 功能
🌍 支持的消息平台
- QQ 群、QQ 频道OneBot、QQ 官方接口)
- Telegram[astrbot_plugin_telegram](https://github.com/Soulter/astrbot_plugin_telegram) 插件)
## 🤔您可能想了解的
- **如何部署?** [帮助文档](https://github.com/Soulter/QQChannelChatGPT/wiki) (部署不成功欢迎进群捞人解决<3)
- **go-cqhttp启动不成功报登录失败** [在这里搜索解决方法](https://github.com/Mrs4s/go-cqhttp/issues)
- **程序闪退/机器人启动不成功** [提交issue或加群反馈](https://github.com/Soulter/QQChannelChatGPT/issues)
- **如何开启ChatGPTBardClaude等语言模型** [查看帮助](https://github.com/Soulter/QQChannelChatGPT/wiki/%E8%A1%A5%E5%85%85%EF%BC%9A%E5%A6%82%E4%BD%95%E5%BC%80%E5%90%AFChatGPT%E3%80%81Bard%E3%80%81Claude%E7%AD%89%E8%AF%AD%E8%A8%80%E6%A8%A1%E5%9E%8B%EF%BC%9F)
🌍 支持的大模型/底座:
## 🧩功能:
- OpenAI GPT、DallE 系列
- Claude由[LLMs插件](https://github.com/Soulter/llms)支持)
- HuggingChat由[LLMs插件](https://github.com/Soulter/llms)支持)
- Gemini由[LLMs插件](https://github.com/Soulter/llms)支持)
- Ollama
- 几乎所有已知模型(可接入 [OneAPI](https://astrbot.soulter.top/docs/docs/adavanced/one-api)
最近功能
1. 支持切换代码分支输入`/update checkout <分支名>`即可切换代码分支
2. 正在测试可视化面板输入`/update checkout dev_dashboard`后根据提示即可体验
🌍 机器人支持的能力一览:
- 大模型对话、人格、网页搜索
- 可视化仪表盘
- 同时处理多平台消息
- 精确到个人的会话隔离
- 插件支持
- 文本转图片回复Markdown
🌍支持的AI语言模型一览
## 🧩 插件
**文字模型/图片理解**
有关插件的使用和列表请移步:[AstrBot 文档 - 插件](https://astrbot.soulter.top/docs/get-started/plugin)
- OpenAI GPT-3原生支持
- OpenAI GPT-3.5原生支持
- OpenAI GPT-4原生支持
- Claude免费[LLMs插件](https://github.com/Soulter/llms)支持
- HuggingChat免费[LLMs插件](https://github.com/Soulter/llms)支持
## 云部署
**图片生成**
[![Run on Repl.it](https://repl.it/badge/github/Soulter/AstrBot)](https://repl.it/github/Soulter/AstrBot)
- NovelAI/Naifu (免费[AIDraw插件](https://github.com/Soulter/aidraw)支持)
## ❤️ 贡献
欢迎任何 Issues/Pull Requests只需要将你的更改提交到此项目 )
🌍机器人支持的能力一览
- 可视化面板beta
- 同时部署机器人到 QQ QQ 频道
- 大模型对话
- 大模型网页搜索能力 **(目前仅支持OpenAI系模型最新版本下使用 web on 指令打开)**
- 插件在QQ或QQ频道聊天框内输入 `plugin` 了解详情
- 回复文字图片渲染以图片markdown格式回复**大幅度降低被风控概率**需手动在`cmd_config.json`内开启qq_pic_mode
- 人格设置
- 关键词回复
- 热更新更新本项目时**仅需**在QQ或QQ频道聊天框内输入`update latest r`
- Windows一键部署 https://github.com/Soulter/QQChatGPTLauncher/releases/latest
对于新功能的添加,请先通过 Issue 进行讨论。
<!--
### 基本功能
<details>
<summary>✅ 回复符合上下文</summary>
## 🔭 展望
- 程序向API发送近多次对话内容模型根据上下文生成回复
- 你可在`configs/config.yaml`中修改`total_token_limit`来近似控制缓存大小。
</details>
<details>
<summary>✅ 超额自动切换</summary>
- 超额时程序自动切换openai的key方便快捷
</details>
<details>
<summary>✅ 支持统计频道、消息数量等信息</summary>
- 实现了简单的统计功能
</details>
<details>
<summary>✅ 多并发处理,回复速度快</summary>
- 使用了协程理论最高可以支持每个子频道每秒回复5条信息
</details>
<details>
<summary>✅ 持久化转储历史记录,重启不丢失</summary>
- 使用内置的sqlite数据库存储历史记录到本地
- 方式为定时转储,可在`config.yaml`下修改`dump_history_interval`来修改间隔时间,单位为分钟。
</details>
<details>
<summary>✅ 支持多种指令控制</summary>
- 详见下方`指令功能`
</details>
<details>
<summary>✅ 官方API稳定</summary>
- 不使用ChatGPT逆向接口而使用官方API接口稳定方便。
- QQ频道机器人框架为QQ官方开源的框架稳定。
</details> -->
<!-- > 关于tokentoken就相当于是AI中的单词数但是不等于单词数`text-davinci-003`模型中最大可以支持`4097`个token。在发送信息时这个机器人会将用户的历史聊天记录打包发送给ChatGPT因此`token`也会相应的累加为了保证聊天的上下文的逻辑性就有了缓存token。 -->
### 🛠️ 插件支持
本项目支持接入插件。
> 使用`plugin i 插件GitHub链接`即可安装。
插件开发教程https://github.com/Soulter/QQChannelChatGPT/wiki/%E5%9B%9B%E3%80%81%E5%BC%80%E5%8F%91%E6%8F%92%E4%BB%B6
部分插件:
- `LLMS`: https://github.com/Soulter/llms | Claude, HuggingChat 大语言模型接入。
- `GoodPlugins`: https://github.com/Soulter/goodplugins | 随机动漫图片、搜番、喜报生成器等等
- `sysstat`: https://github.com/Soulter/sysstatqcbot | 查看系统状态
- `BiliMonitor`: https://github.com/Soulter/BiliMonitor | 订阅B站动态
- `liferestart`: https://github.com/Soulter/liferestart | 人生重开模拟器
- [ ] 更多、更开放的 LLM Agent 能力
## ✨ Demo
<img width="900" alt="image" src="https://github.com/Soulter/AstrBot/assets/37870767/824d1ff3-7b85-481c-b795-8e62dedb9fd7">
<!--
### 指令
#### OpenAI官方API
在频道内需要先`@`机器人之后再输入指令在QQ中暂时需要在消息前加上`ai `,不需要@
- `/reset`重置prompt
- `/his`查看历史记录(每个用户都有独立的会话)
- `/his [页码数]`查看不同页码的历史记录。例如`/his 2`查看第2页
- `/token`查看当前缓存的总token数
- `/count` 查看统计
- `/status` 查看chatGPT的配置
- `/help` 查看帮助
- `/key` 动态添加key
- `/set` 人格设置面板
- `/keyword nihao 你好` 设置关键词回复。nihao->你好
- `/bing` 切换为bing
- `/revgpt` 切换为ChatGPT逆向库
- `/画` 画画
#### 逆向ChatGPT库语言模型
- `/gpt` 切换为OpenAI官方API
- `/bing` 切换为bing
* 切换模型指令支持临时回复。如`/bing 你好`将会临时使用一次bing模型 -->
<!--
## 🙇‍感谢
本项目使用了一下项目:
[ChatGPT by acheong08](https://github.com/acheong08/ChatGPT)
[EdgeGPT by acheong08](https://github.com/acheong08/EdgeGPT)
[go-cqhttp by Mrs4s](https://github.com/Mrs4s/go-cqhttp)
[nakuru-project by Lxns-Network](https://github.com/Lxns-Network/nakuru-project) -->


@@ -1 +0,0 @@
import{x as e,o as a,c as t,w as o,a as s,B as n,X as r,T as c}from"./index-7c8bc001.js";const f=e({__name:"BlankLayout",setup(p){return(u,_)=>(a(),t(c,null,{default:o(()=>[s(n(r))]),_:1}))}});export{f as default};


@@ -1 +0,0 @@
import{_ as m}from"./BaseBreadcrumb.vue_vue_type_style_index_0_lang-89ca5198.js";import{_}from"./UiParentCard.vue_vue_type_script_setup_true_lang-03a5c441.js";import{x as p,D as a,o as r,s,a as e,w as t,f as o,V as i,F as n,u as g,c as h,_ as b,e as x,t as y}from"./index-7c8bc001.js";const P=p({__name:"ColorPage",setup(C){const c=a({title:"Colors Page"}),d=a([{title:"Utilities",disabled:!1,href:"#"},{title:"Colors",disabled:!0,href:"#"}]),u=a(["primary","lightprimary","secondary","lightsecondary","info","success","accent","warning","error","darkText","lightText","borderLight","inputBorder","containerBg"]);return(V,k)=>(r(),s(n,null,[e(m,{title:c.value.title,breadcrumbs:d.value},null,8,["title","breadcrumbs"]),e(i,null,{default:t(()=>[e(o,{cols:"12",md:"12"},{default:t(()=>[e(_,{title:"Color Palette"},{default:t(()=>[e(i,null,{default:t(()=>[(r(!0),s(n,null,g(u.value,(l,f)=>(r(),h(o,{md:"3",cols:"12",key:f},{default:t(()=>[e(b,{rounded:"md",class:"align-center justify-center d-flex",height:"100",width:"100%",color:l},{default:t(()=>[x("class: "+y(l),1)]),_:2},1032,["color"])]),_:2},1024))),128))]),_:1})]),_:1})]),_:1})]),_:1})],64))}});export{P as default};


@@ -1 +0,0 @@
import{_ as h}from"./UiParentCard.vue_vue_type_script_setup_true_lang-03a5c441.js";import{o as a,s as t,a as n,w as i,f as b,F as d,u as g,V as C,d as U,e as x,t as c,a8 as B,R as _,c as r,a9 as w,O as v,b as V,aa as N,i as F,q as P,k as f,A as S}from"./index-7c8bc001.js";const D={name:"ConfigPage",components:{UiParentCard:h},data(){return{config_data:{data:[]},save_message_snack:!1,save_message:"",save_message_success:""}},mounted(){this.getConfig()},methods:{getConfig(){_.get("/api/configs").then(o=>{this.config_data=o.data.data,console.log(this.config_data)})},updateConfig(){_.post("/api/configs",this.config_data).then(o=>{console.log(this.config_data),o.data.status==="success"?(this.save_message=o.data.message,this.save_message_snack=!0,this.save_message_success="success"):(this.save_message=o.data.message,this.save_message_snack=!0,this.save_message_success="error")})}}},$=Object.assign(D,{setup(o){return(s,m)=>(a(),t(d,null,[n(C,null,{default:i(()=>[n(b,{cols:"12",md:"12"},{default:i(()=>[(a(!0),t(d,null,g(s.config_data.data,u=>(a(),r(h,{key:u.name,title:u.name,style:{"margin-bottom":"16px"}},{default:i(()=>[(a(!0),t(d,null,g(u.body,e=>(a(),t(d,null,[e.config_type==="item"?(a(),t(d,{key:0},[e.val_type==="bool"?(a(),r(w,{key:0,modelValue:e.value,"onUpdate:modelValue":l=>e.value=l,label:e.name,hint:e.description,color:"primary",inset:""},null,8,["modelValue","onUpdate:modelValue","label","hint"])):e.val_type==="string"?(a(),r(v,{key:1,modelValue:e.value,"onUpdate:modelValue":l=>e.value=l,label:e.name,hint:e.description,style:{"margin-bottom":"8px"},variant:"outlined"},null,8,["modelValue","onUpdate:modelValue","label","hint"])):e.val_type==="int"?(a(),r(v,{key:2,modelValue:e.value,"onUpdate:modelValue":l=>e.value=l,label:e.name,hint:e.description,style:{"margin-bottom":"8px"},variant:"outlined"},null,8,["modelValue","onUpdate:modelValue","label","hint"])):e.val_type==="list"?(a(),t(d,{key:3},[V("span",null,c(e.name),1),n(N,{modelValue:e.value,"onUpdate:modelValue":l=>e.value=l,chips:"",clearable:"",label:"请添加",multiple:"","prepend-icon":"mdi-tag-multiple-outline"},{selection:i(({attrs:l,item:p,select:k,selected:y})=>[n(F,P(l,{"model-value":y,closable:"",onClick:k,"onClick:close":O=>s.remove(p)}),{default:i(()=>[V("strong",null,c(p),1)]),_:2},1040,["model-value","onClick","onClick:close"])]),_:2},1032,["modelValue","onUpdate:modelValue"])],64)):f("",!0)],64)):e.config_type==="divider"?(a(),r(S,{key:1,style:{"margin-top":"8px","margin-bottom":"8px"}})):f("",!0)],64))),256))]),_:2},1032,["title"]))),128))]),_:1})]),_:1}),n(U,{icon:"mdi-content-save",size:"x-large",style:{position:"fixed",right:"52px",bottom:"52px"},color:"darkprimary",onClick:s.updateConfig},null,8,["onClick"]),n(B,{timeout:2e3,elevation:"24",color:s.save_message_success,modelValue:s.save_message_snack,"onUpdate:modelValue":m[0]||(m[0]=u=>s.save_message_snack=u)},{default:i(()=>[x(c(s.save_message),1)]),_:1},8,["color","modelValue"])],64))}});export{$ as default};

File diff suppressed because one or more lines are too long


@@ -1 +0,0 @@
import{x as b,o as d,c as h,w as e,a,a6 as C,b as i,K as x,e as o,t as u,G as m,d as r,A as E,L as V,a7 as y,J as w,s as p,f as c,F as f,u as $,V as k,q as S,N as B,O as N,P as T,H as j,a8 as D,R as g,j as F}from"./index-7c8bc001.js";const G={class:"d-sm-flex align-center justify-space-between"},v=b({__name:"ExtensionCard",props:{title:String,link:String},setup(n){const s=n,l=t=>{window.open(t,"_blank")};return(t,_)=>(d(),h(w,{variant:"outlined",elevation:"0",class:"withbg"},{default:e(()=>[a(C,{style:{padding:"10px 20px"}},{default:e(()=>[i("div",G,[a(x,null,{default:e(()=>[o(u(s.title),1)]),_:1}),a(m),a(r,{icon:"mdi-link",variant:"plain",onClick:_[0]||(_[0]=z=>l(s.link))})])]),_:1}),a(E),a(V,null,{default:e(()=>[y(t.$slots,"default")]),_:3})]),_:3}))}}),P=i("div",{style:{"background-color":"white",width:"100%",padding:"16px","border-radius":"10px"}},[i("h3",null,"🧩 已安装的插件")],-1),U={style:{"min-height":"180px","max-height":"180px",overflow:"hidden"}},q={class:"d-flex align-center gap-3"},A=i("div",{style:{"background-color":"white",width:"100%",padding:"16px","border-radius":"10px"}},[i("h3",null,"🧩 插件市场 [待开发]")],-1),I=i("span",{class:"text-h5"},"从 Git 仓库链接安装插件",-1),L=i("small",null,"github, gitee, gitlab 等公开的仓库都行。",-1),O=i("br",null,null,-1),R={name:"ExtensionPage",components:{ExtensionCard:v},data(){return{extension_data:{data:[]},save_message_snack:!1,save_message:"",save_message_success:"",extension_url:"",status:"",dialog:!1,snack_message:"",snack_show:!1,snack_success:"success",install_loading:!1,uninstall_loading:!1}},mounted(){this.getExtensions()},methods:{getExtensions(){g.get("/api/extensions").then(n=>{this.extension_data.data=n.data.data,console.log(this.extension_data)})},newExtension(){this.install_loading=!0,console.log(this.install_loading),g.post("/api/extensions/install",{url:this.extension_url}).then(n=>{if(this.install_loading=!1,n.data.status==="error"){this.snack_message=n.data.message,this.snack_show=!0,this.snack_success="error";return}this.extension_data.data=n.data.data,console.log(this.extension_data),this.extension_url="",this.snack_message=n.data.message,this.snack_show=!0,this.snack_success="success",this.dialog=!1,this.getExtensions()}).catch(n=>{this.install_loading=!1,this.snack_message=n,this.snack_show=!0,this.snack_success="error"})},uninstallExtension(n){this.uninstall_loading=!0,g.post("/api/extensions/uninstall",{name:n}).then(s=>{if(this.uninstall_loading=!1,s.data.status==="error"){this.snack_message=s.data.message,this.snack_show=!0,this.snack_success="error";return}this.extension_data.data=s.data.data,console.log(this.extension_data),this.snack_message=s.data.message,this.snack_show=!0,this.snack_success="success",this.dialog=!1,this.getExtensions()}).catch(s=>{this.uninstall_loading=!1,this.snack_message=s,this.snack_show=!0,this.snack_success="error"})}}},J=Object.assign(R,{setup(n){return(s,l)=>(d(),p(f,null,[a(k,null,{default:e(()=>[a(c,{cols:"12",md:"12"},{default:e(()=>[P]),_:1}),(d(!0),p(f,null,$(s.extension_data.data,t=>(d(),h(c,{cols:"12",md:"6",lg:"4"},{default:e(()=>[(d(),h(v,{key:t.name,title:t.name,link:t.repo,style:{"margin-bottom":"16px"}},{default:e(()=>[i("p",U,u(t.desc),1),i("div",q,[a(F,null,{default:e(()=>[o("mdi-account")]),_:1}),i("span",null,u(t.author),1),a(m),a(r,{variant:"plain",onClick:_=>s.uninstallExtension(t.name),loading:s.uninstall_loading},{default:e(()=>[o("卸 
载")]),_:2},1032,["onClick","loading"])])]),_:2},1032,["title","link"]))]),_:2},1024))),256)),a(c,{cols:"12",md:"12"},{default:e(()=>[A]),_:1})]),_:1}),a(j,{modelValue:s.dialog,"onUpdate:modelValue":l[3]||(l[3]=t=>s.dialog=t),persistent:"",width:"700"},{activator:e(({props:t})=>[a(r,S(t,{icon:"mdi-plus",size:"x-large",style:{position:"fixed",right:"52px",bottom:"52px"},color:"darkprimary"}),null,16)]),default:e(()=>[a(w,null,{default:e(()=>[a(x,null,{default:e(()=>[I]),_:1}),a(V,null,{default:e(()=>[a(B,null,{default:e(()=>[a(k,null,{default:e(()=>[a(c,{cols:"12"},{default:e(()=>[a(N,{label:"Git 库链接",modelValue:s.extension_url,"onUpdate:modelValue":l[0]||(l[0]=t=>s.extension_url=t),required:""},null,8,["modelValue"])]),_:1})]),_:1})]),_:1}),L,O,i("small",null,u(s.status),1)]),_:1}),a(T,null,{default:e(()=>[a(m),a(r,{color:"blue-darken-1",variant:"text",onClick:l[1]||(l[1]=t=>s.dialog=!1)},{default:e(()=>[o(" 关闭 ")]),_:1}),a(r,{color:"blue-darken-1",variant:"text",loading:s.install_loading,onClick:l[2]||(l[2]=t=>s.newExtension(s.extension_url))},{default:e(()=>[o(" 安装 ")]),_:1},8,["loading"])]),_:1})]),_:1})]),_:1},8,["modelValue"]),a(D,{timeout:2e3,elevation:"24",color:s.snack_success,modelValue:s.snack_show,"onUpdate:modelValue":l[4]||(l[4]=t=>s.snack_show=t)},{default:e(()=>[o(u(s.snack_message),1)]),_:1},8,["color","modelValue"])],64))}});export{J as default};

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@@ -1 +0,0 @@
import{at as _,x as d,D as n,o as c,s as m,a as f,w as p,au as r,b as a,av as o,B as t,aw as h}from"./index-7c8bc001.js";const s={Sidebar_drawer:!0,Customizer_drawer:!1,mini_sidebar:!1,fontTheme:"Roboto",inputBg:!1},l=_({id:"customizer",state:()=>({Sidebar_drawer:s.Sidebar_drawer,Customizer_drawer:s.Customizer_drawer,mini_sidebar:s.mini_sidebar,fontTheme:"Poppins",inputBg:s.inputBg}),getters:{},actions:{SET_SIDEBAR_DRAWER(){this.Sidebar_drawer=!this.Sidebar_drawer},SET_MINI_SIDEBAR(e){this.mini_sidebar=e},SET_FONT(e){this.fontTheme=e}}}),u={class:"logo",style:{display:"flex","align-items":"center"}},b={style:{"font-size":"24px","font-weight":"1000"}},w={style:{"font-size":"20px","font-weight":"1000"}},S={style:{"font-size":"20px"}},z=d({__name:"LogoDark",setup(e){n("rgb(var(--v-theme-primary))"),n("rgb(var(--v-theme-secondary))");const i=l();return(g,B)=>(c(),m("div",u,[f(t(h),{to:"/",style:{"text-decoration":"none",color:"black"}},{default:p(()=>[r(a("span",b,"AstrBot 仪表盘",512),[[o,!t(i).mini_sidebar]]),r(a("span",w,"Astr",512),[[o,t(i).mini_sidebar]]),r(a("span",S,"Bot",512),[[o,t(i).mini_sidebar]])]),_:1})]))}});export{z as _,l as u};


@@ -1 +0,0 @@
import{_ as B}from"./LogoDark.vue_vue_type_script_setup_true_lang-4faa128a.js";import{x as y,D as o,o as b,s as U,a as e,w as a,b as n,B as $,d as u,f as d,A as _,e as f,V as r,O as m,an as A,as as E,F,c as T,N as q,J as V,L as P}from"./index-7c8bc001.js";const z="/assets/social-google-a359a253.svg",N=["src"],S=n("span",{class:"ml-2"},"Sign up with Google",-1),D=n("h5",{class:"text-h5 text-center my-4 mb-8"},"Sign up with Email address",-1),G={class:"d-sm-inline-flex align-center mt-2 mb-7 mb-sm-0 font-weight-bold"},L=n("a",{href:"#",class:"ml-1 text-lightText"},"Terms and Condition",-1),O={class:"mt-5 text-right"},j=y({__name:"AuthRegister",setup(w){const c=o(!1),i=o(!1),p=o(""),v=o(""),g=o(),h=o(""),x=o(""),k=o([s=>!!s||"Password is required",s=>s&&s.length<=10||"Password must be less than 10 characters"]),C=o([s=>!!s||"E-mail is required",s=>/.+@.+\..+/.test(s)||"E-mail must be valid"]);function R(){g.value.validate()}return(s,l)=>(b(),U(F,null,[e(u,{block:"",color:"primary",variant:"outlined",class:"text-lightText googleBtn"},{default:a(()=>[n("img",{src:$(z),alt:"google"},null,8,N),S]),_:1}),e(r,null,{default:a(()=>[e(d,{class:"d-flex align-center"},{default:a(()=>[e(_,{class:"custom-devider"}),e(u,{variant:"outlined",class:"orbtn",rounded:"md",size:"small"},{default:a(()=>[f("OR")]),_:1}),e(_,{class:"custom-devider"})]),_:1})]),_:1}),D,e(E,{ref_key:"Regform",ref:g,"lazy-validation":"",action:"/dashboards/analytical",class:"mt-7 loginForm"},{default:a(()=>[e(r,null,{default:a(()=>[e(d,{cols:"12",sm:"6"},{default:a(()=>[e(m,{modelValue:h.value,"onUpdate:modelValue":l[0]||(l[0]=t=>h.value=t),density:"comfortable","hide-details":"auto",variant:"outlined",color:"primary",label:"Firstname"},null,8,["modelValue"])]),_:1}),e(d,{cols:"12",sm:"6"},{default:a(()=>[e(m,{modelValue:x.value,"onUpdate:modelValue":l[1]||(l[1]=t=>x.value=t),density:"comfortable","hide-details":"auto",variant:"outlined",color:"primary",label:"Lastname"},null,8,["modelValue"])]),_:1})]),_:1}),e(m,{modelValue:v.value,"onUpdate:modelValue":l[2]||(l[2]=t=>v.value=t),rules:C.value,label:"Email Address / Username",class:"mt-4 mb-4",required:"",density:"comfortable","hide-details":"auto",variant:"outlined",color:"primary"},null,8,["modelValue","rules"]),e(m,{modelValue:p.value,"onUpdate:modelValue":l[3]||(l[3]=t=>p.value=t),rules:k.value,label:"Password",required:"",density:"comfortable",variant:"outlined",color:"primary","hide-details":"auto","append-icon":i.value?"mdi-eye":"mdi-eye-off",type:i.value?"text":"password","onClick:append":l[4]||(l[4]=t=>i.value=!i.value),class:"pwdInput"},null,8,["modelValue","rules","append-icon","type"]),n("div",G,[e(A,{modelValue:c.value,"onUpdate:modelValue":l[5]||(l[5]=t=>c.value=t),rules:[t=>!!t||"You must agree to continue!"],label:"Agree with?",required:"",color:"primary",class:"ms-n2","hide-details":""},null,8,["modelValue","rules"]),L]),e(u,{color:"secondary",block:"",class:"mt-2",variant:"flat",size:"large",onClick:l[6]||(l[6]=t=>R())},{default:a(()=>[f("Sign Up")]),_:1})]),_:1},512),n("div",O,[e(_),e(u,{variant:"plain",to:"/auth/login",class:"mt-2 text-capitalize mr-n2"},{default:a(()=>[f("Already have an account?")]),_:1})])],64))}});const I={class:"pa-7 pa-sm-12"},J=n("h2",{class:"text-secondary text-h2 mt-8"},"Sign up",-1),Y=n("h4",{class:"text-disabled text-h4 mt-3"},"Enter credentials to continue",-1),M=y({__name:"RegisterPage",setup(w){return(c,i)=>(b(),T(r,{class:"h-100vh","no-gutters":""},{default:a(()=>[e(d,{cols:"12",class:"d-flex align-center 
bg-lightprimary"},{default:a(()=>[e(q,null,{default:a(()=>[n("div",I,[e(r,{justify:"center"},{default:a(()=>[e(d,{cols:"12",lg:"10",xl:"6",md:"7"},{default:a(()=>[e(V,{elevation:"0",class:"loginBox"},{default:a(()=>[e(V,{variant:"outlined"},{default:a(()=>[e(P,{class:"pa-9"},{default:a(()=>[e(r,null,{default:a(()=>[e(d,{cols:"12",class:"text-center"},{default:a(()=>[e(B),J,Y]),_:1})]),_:1}),e(j)]),_:1})]),_:1})]),_:1})]),_:1})]),_:1})])]),_:1})]),_:1})]),_:1}))}});export{M as default};


@@ -1,526 +0,0 @@
from addons.dashboard.server import AstrBotDashBoard, DashBoardData
from pydantic import BaseModel
from typing import Union, Optional
import uuid
from util import general_utils as gu
from util.cmd_config import CmdConfig
from dataclasses import dataclass
import sys
import os
import threading
import time
def shutdown_bot(delay_s: int):
time.sleep(delay_s)
py = sys.executable
os.execl(py, py, *sys.argv)
@dataclass
class DashBoardConfig():
config_type: str
name: Optional[str] = None
description: Optional[str] = None
path: Optional[str] = None # 仅 item 才需要
body: Optional[list['DashBoardConfig']] = None # 仅 group 才需要
value: Optional[Union[list, dict, str, int, bool]] = None # 仅 item 才需要
val_type: Optional[str] = None # 仅 item 才需要
class DashBoardHelper():
def __init__(self, dashboard_data: DashBoardData, config: dict):
dashboard_data.configs = {
"data": []
}
self.parse_default_config(dashboard_data, config)
self.dashboard_data: DashBoardData = dashboard_data
self.dashboard = AstrBotDashBoard(self.dashboard_data)
self.key_map = {} # key: uuid, value: config key name
self.cc = CmdConfig()
@self.dashboard.register("post_configs")
def on_post_configs(post_configs: dict):
try:
gu.log(f"收到配置更新请求", gu.LEVEL_INFO, tag="可视化面板")
self.save_config(post_configs)
self.parse_default_config(self.dashboard_data, self.cc.get_all())
# 重启
threading.Thread(target=shutdown_bot, args=(2,), daemon=True).start()
except Exception as e:
gu.log(f"在保存配置时发生错误:{e}", gu.LEVEL_ERROR, tag="可视化面板")
raise e
# 将 config.yaml、 中的配置解析到 dashboard_data.configs 中
def parse_default_config(self, dashboard_data: DashBoardData, config: dict):
try:
bot_platform_group = DashBoardConfig(
config_type="group",
name="机器人平台配置",
description="机器人平台配置描述",
body=[
DashBoardConfig(
config_type="item",
val_type="bool",
name="启用 QQ 频道平台",
description="就是你想到的那个 QQ 频道平台。详见 q.qq.com",
value=config['qqbot']['enable'],
path="qqbot.enable",
),
DashBoardConfig(
config_type="item",
val_type="string",
name="QQ机器人APPID",
description="详见 q.qq.com",
value=config['qqbot']['appid'],
path="qqbot.appid",
),
DashBoardConfig(
config_type="item",
val_type="string",
name="QQ机器人令牌",
description="详见 q.qq.com",
value=config['qqbot']['token'],
path="qqbot.token",
),
DashBoardConfig(
config_type="item",
val_type="string",
name="QQ机器人 Secret",
description="详见 q.qq.com",
value=config['qqbot_secret'],
path="qqbot_secret",
),
DashBoardConfig(
config_type="divider"
),
DashBoardConfig(
config_type="item",
val_type="bool",
name="启用 GO-CQHTTP 平台",
description="gocq 是一个基于 HTTP 协议的 CQHTTP 协议的实现。详见 github.com/Mrs4s/go-cqhttp",
value=config['gocqbot']['enable'],
path="gocqbot.enable",
)
]
)
proxy_group = DashBoardConfig(
config_type="group",
name="代理配置",
description="代理配置描述",
body=[
DashBoardConfig(
config_type="item",
val_type="string",
name="HTTP 代理地址",
description="建议上下一致",
value=config['http_proxy'],
path="proxy",
),
DashBoardConfig(
config_type="item",
val_type="string",
name="HTTPS 代理地址",
description="建议上下一致",
value=config['https_proxy'],
path="proxy",
)
]
)
general_platform_detail_group = DashBoardConfig(
config_type="group",
name="通用平台配置",
description="",
body=[
DashBoardConfig(
config_type="item",
val_type="bool",
name="启动消息文字转图片",
description="启动后,机器人会将消息转换为图片发送,以降低风控风险。",
value=config['qq_pic_mode'],
path="qq_pic_mode",
),
DashBoardConfig(
config_type="item",
val_type="int",
name="消息限制时间",
description="在此时间内,机器人不会回复同一个用户的消息。单位:秒",
value=config['limit']['time'],
path="limit.time",
),
DashBoardConfig(
config_type="item",
val_type="int",
name="消息限制次数",
description="在上面的时间内,如果用户发送消息超过此次数,则机器人不会回复。单位:次",
value=config['limit']['count'],
path="limit.count",
),
DashBoardConfig(
config_type="item",
val_type="string",
name="回复前缀",
description="[xxxx] 你好! 其中xxxx是你可以填写的前缀。如果为空则不显示。",
value=config['reply_prefix'],
path="reply_prefix",
),
DashBoardConfig(
config_type="item",
val_type="string",
name="管理员用户 ID",
description="对机器人 !myid 即可获得。如果此功能不可用,请加群 322154837",
value=config['gocq_qqchan_admin'],
path="gocq_qqchan_admin",
),
DashBoardConfig(
config_type="item",
val_type="list",
name="通用管理员用户 ID同上此项支持多个管理员",
description="",
value=config['other_admins'],
path="other_admins",
),
DashBoardConfig(
config_type="item",
val_type="bool",
name="独立会话",
description="是否启用独立会话模式,即 1 个用户自然账号 1 个会话。",
value=config['uniqueSessionMode'],
path="uniqueSessionMode",
),
DashBoardConfig(
config_type="item",
val_type="bool",
name="是否允许 QQ 频道私聊",
description="仅针对 QQ 频道 SDK而非 GO-CQHTTP。如果启用那么机器人会响应私聊消息。",
value=config['direct_message_mode'],
path="direct_message_mode",
),
]
)
gocq_platform_detail_group = DashBoardConfig(
config_type="group",
name="GO-CQHTTP 平台配置",
description="",
body=[
DashBoardConfig(
config_type="item",
val_type="string",
name="HTTP 服务器地址",
description="",
value=config['gocq_host'],
path="gocq_host",
),
DashBoardConfig(
config_type="item",
val_type="int",
name="HTTP 服务器端口",
description="",
value=config['gocq_http_port'],
path="gocq_http_port",
),
DashBoardConfig(
config_type="item",
val_type="int",
name="WebSocket 服务器端口",
description="",
value=config['gocq_websocket_port'],
path="gocq_websocket_port",
),
DashBoardConfig(
config_type="item",
val_type="bool",
name="是否响应群消息",
description="",
value=config['gocq_react_group'],
path="gocq_react_group",
),
DashBoardConfig(
config_type="item",
val_type="bool",
name="是否响应私聊消息",
description="",
value=config['gocq_react_friend'],
path="gocq_react_friend",
),
DashBoardConfig(
config_type="item",
val_type="bool",
name="是否响应群成员增加消息",
description="",
value=config['gocq_react_group_increase'],
path="gocq_react_group_increase",
),
DashBoardConfig(
config_type="item",
val_type="bool",
name="是否响应频道消息",
description="",
value=config['gocq_react_guild'],
path="gocq_react_guild",
),
DashBoardConfig(
config_type="item",
val_type="int",
name="转发阈值(字符数)",
description="机器人回复的消息长度超出这个值后,会被折叠成转发卡片发出以减少刷屏。",
value=config['qq_forward_threshold'],
path="qq_forward_threshold",
),
]
)
llm_group = DashBoardConfig(
config_type="group",
name="LLM 配置",
description="",
body=[
DashBoardConfig(
config_type="item",
val_type="list",
name="OpenAI API KEY",
description="OpenAI API 的 KEY。支持使用非官方但是兼容的 API。",
value=config['openai']['key'],
path="openai.key",
),
DashBoardConfig(
config_type="item",
val_type="string",
name="OpenAI API 节点地址",
description="OpenAI API 的节点地址,配合非官方 API 使用。如果不想填写,那么请填写 none",
value=config['openai']['api_base'],
path="openai.api_base",
),
DashBoardConfig(
config_type="item",
val_type="string",
name="OpenAI 模型",
description="OpenAI 模型。详见 https://platform.openai.com/docs/api-reference/chat",
value=config['openai']['chatGPTConfigs']['model'],
path="openai.chatGPTConfigs.model",
),
DashBoardConfig(
config_type="item",
val_type="int",
name="OpenAI 最大生成长度",
description="OpenAI 最大生成长度。详见 https://platform.openai.com/docs/api-reference/chat",
value=config['openai']['chatGPTConfigs']['max_tokens'],
path="openai.chatGPTConfigs.max_tokens",
),
DashBoardConfig(
config_type="item",
val_type="float",
name="OpenAI 温度",
description="OpenAI 温度。详见 https://platform.openai.com/docs/api-reference/chat",
value=config['openai']['chatGPTConfigs']['temperature'],
path="openai.chatGPTConfigs.temperature",
),
DashBoardConfig(
config_type="item",
val_type="float",
name="OpenAI top_p",
description="OpenAI top_p。详见 https://platform.openai.com/docs/api-reference/chat",
value=config['openai']['chatGPTConfigs']['top_p'],
path="openai.chatGPTConfigs.top_p",
),
DashBoardConfig(
config_type="item",
val_type="float",
name="OpenAI frequency_penalty",
description="OpenAI frequency_penalty。详见 https://platform.openai.com/docs/api-reference/chat",
value=config['openai']['chatGPTConfigs']['frequency_penalty'],
path="openai.chatGPTConfigs.frequency_penalty",
),
DashBoardConfig(
config_type="item",
val_type="float",
name="OpenAI presence_penalty",
description="OpenAI presence_penalty。详见 https://platform.openai.com/docs/api-reference/chat",
value=config['openai']['chatGPTConfigs']['presence_penalty'],
path="openai.chatGPTConfigs.presence_penalty",
),
DashBoardConfig(
config_type="item",
val_type="int",
name="OpenAI 总生成长度限制",
description="OpenAI 总生成长度限制。详见 https://platform.openai.com/docs/api-reference/chat",
value=config['openai']['total_tokens_limit'],
path="openai.total_tokens_limit",
),
DashBoardConfig(
config_type="item",
val_type="string",
name="OpenAI 图像生成模型",
description="OpenAI 图像生成模型。",
value=config['openai_image_generate']['model'],
path="openai_image_generate.model",
),
DashBoardConfig(
config_type="item",
val_type="string",
name="OpenAI 图像生成大小",
description="OpenAI 图像生成大小。",
value=config['openai_image_generate']['size'],
path="openai_image_generate.size",
),
DashBoardConfig(
config_type="item",
val_type="string",
name="OpenAI 图像生成风格",
description="OpenAI 图像生成风格。修改前请参考 OpenAI 官方文档",
value=config['openai_image_generate']['style'],
path="openai_image_generate.style",
),
DashBoardConfig(
config_type="item",
val_type="string",
name="OpenAI 图像生成质量",
description="OpenAI 图像生成质量。修改前请参考 OpenAI 官方文档",
value=config['openai_image_generate']['quality'],
path="openai_image_generate.quality",
),
DashBoardConfig(
config_type="item",
val_type="string",
name="大语言模型问题题首提示词",
description="如果填写了此项,在每个对大语言模型的请求中,都会在问题前加上此提示词。",
value=config['llm_env_prompt'],
path="llm_env_prompt",
),
]
)
baidu_aip_group = DashBoardConfig(
config_type="group",
name="百度内容审核",
description="需要去申请",
body=[
DashBoardConfig(
config_type="item",
val_type="bool",
name="启动百度内容审核服务",
description="",
value=config['baidu_aip']['enable'],
path="baidu_aip.enable"
),
# "app_id": null,
# "api_key": null,
# "secret_key": null
DashBoardConfig(
config_type="item",
val_type="string",
name="APP ID",
description="",
value=config['baidu_aip']['app_id'],
path="baidu_aip.app_id"
),
DashBoardConfig(
config_type="item",
val_type="string",
name="API KEY",
description="",
value=config['baidu_aip']['api_key'],
path="baidu_aip.api_key"
),
DashBoardConfig(
config_type="item",
val_type="string",
name="SECRET KEY",
description="",
value=config['baidu_aip']['secret_key'],
path="baidu_aip.secret_key"
)
]
)
other_group = DashBoardConfig(
config_type="group",
name="其他配置",
description="其他配置描述",
body=[
# 人格
DashBoardConfig(
config_type="item",
val_type="string",
name="默认人格文本",
description="默认人格文本",
value=config['default_personality_str'],
path="default_personality_str",
),
DashBoardConfig(
config_type="item",
val_type="string",
name="面板用户名",
description="是的,就是你理解的这个面板的用户名",
value=config['dashboard_username'],
path="dashboard_username",
),
]
)
dashboard_data.configs['data'] = [
bot_platform_group,
general_platform_detail_group,
gocq_platform_detail_group,
proxy_group,
llm_group,
other_group,
baidu_aip_group
]
except Exception as e:
gu.log(f"配置文件解析错误:{e}", gu.LEVEL_ERROR)
raise e
def save_config(self, post_config: dict):
'''
根据 path 解析并保存配置
'''
queue = []
for config in post_config['data']:
queue.append(config)
while len(queue) > 0:
config = queue.pop(0)
if config['config_type'] == "group":
for item in config['body']:
queue.append(item)
elif config['config_type'] == "item":
if config['path'] is None or config['path'] == "":
continue
path = config['path'].split('.')
if len(path) == 0:
continue
if config['val_type'] == "bool":
self.cc.put_by_dot_str(config['path'], config['value'])
elif config['val_type'] == "string":
self.cc.put_by_dot_str(config['path'], config['value'])
elif config['val_type'] == "int":
try:
self.cc.put_by_dot_str(config['path'], int(config['value']))
except:
raise ValueError(f"配置项 {config['name']} 的值必须是整数")
elif config['val_type'] == "float":
try:
self.cc.put_by_dot_str(config['path'], float(config['value']))
except:
raise ValueError(f"配置项 {config['name']} 的值必须是浮点数")
elif config['val_type'] == "list":
if config['value'] is None:
self.cc.put_by_dot_str(config['path'], [])
elif not isinstance(config['value'], list):
raise ValueError(f"配置项 {config['name']} 的值必须是列表")
self.cc.put_by_dot_str(config['path'], config['value'])
else:
raise NotImplementedError(f"未知或者未实现的的配置项类型:{config['val_type']}")
def run(self):
self.dashboard.run()
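For orientation, here is a minimal sketch of the payload shape save_config() expects and the dot-path writes it produces. Only put_by_dot_str and the item fields come from the code above; the concrete items and values are hypothetical examples.

# Hypothetical dashboard payload, shaped like the DashBoardConfig tree built above.
post_config = {
    "data": [{
        "config_type": "group",
        "body": [
            {"config_type": "item", "val_type": "int",
             "name": "消息限制时间", "path": "limit.time", "value": "60"},
            {"config_type": "item", "val_type": "bool",
             "name": "独立会话", "path": "uniqueSessionMode", "value": True},
        ],
    }]
}
# save_config() walks the groups breadth-first and ends up issuing writes such as:
#   self.cc.put_by_dot_str("limit.time", 60)            # "60" coerced with int()
#   self.cc.put_by_dot_str("uniqueSessionMode", True)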

View File

@@ -1,233 +0,0 @@
from flask import Flask, request
from flask.logging import default_handler
from werkzeug.serving import make_server
import datetime
from util import general_utils as gu
from dataclasses import dataclass
import logging
from cores.database.conn import dbConn
from util.cmd_config import CmdConfig
import util.plugin_util as putil
@dataclass
class DashBoardData():
stats: dict
configs: dict
logs: dict
plugins: list[dict]
@dataclass
class Response():
status: str
message: str
data: dict
class AstrBotDashBoard():
def __init__(self, dashboard_data: DashBoardData):
self.dashboard_data = dashboard_data
self.dashboard_be = Flask(__name__, static_folder="dist", static_url_path="/")
log = logging.getLogger('werkzeug')
log.setLevel(logging.ERROR)
self.funcs = {}
self.cc = CmdConfig()
@self.dashboard_be.get("/")
def index():
# 返回页面
return self.dashboard_be.send_static_file("index.html")
@self.dashboard_be.post("/api/authenticate")
def authenticate():
username = self.cc.get("dashboard_username", "")
password = self.cc.get("dashboard_password", "")
# 获得请求体
post_data = request.json
if post_data["username"] == username and post_data["password"] == password:
return Response(
status="success",
message="登录成功。",
data={
"token": "astrbot-test-token",
"username": username
}
).__dict__
else:
return Response(
status="error",
message="用户名或密码错误。",
data=None
).__dict__
@self.dashboard_be.post("/api/change_password")
def change_password():
password = self.cc.get("dashboard_password", "")
# 获得请求体
post_data = request.json
if post_data["password"] == password:
self.cc.put("dashboard_password", post_data["new_password"])
return Response(
status="success",
message="修改成功。",
data=None
).__dict__
else:
return Response(
status="error",
message="原密码错误。",
data=None
).__dict__
@self.dashboard_be.get("/api/stats")
def get_stats():
db_inst = dbConn()
all_session = db_inst.get_all_stat_session()
last_24_message = db_inst.get_last_24h_stat_message()
# last_24_platform = db_inst.get_last_24h_stat_platform()
platforms = db_inst.get_platform_cnt_total()
self.dashboard_data.stats["session"] = []
self.dashboard_data.stats["session_total"] = db_inst.get_session_cnt_total()
self.dashboard_data.stats["message"] = last_24_message
self.dashboard_data.stats["message_total"] = db_inst.get_message_cnt_total()
self.dashboard_data.stats["platform"] = platforms
return Response(
status="success",
message="",
data=self.dashboard_data.stats
).__dict__
@self.dashboard_be.get("/api/configs")
def get_configs():
return Response(
status="success",
message="",
data=self.dashboard_data.configs
).__dict__
@self.dashboard_be.post("/api/configs")
def post_configs():
post_configs = request.json
try:
self.funcs["post_configs"](post_configs)
return Response(
status="success",
message="保存成功~ 机器人将在 2 秒内重启以应用新的配置。",
data=None
).__dict__
except Exception as e:
return Response(
status="error",
message=e.__str__(),
data=self.dashboard_data.configs
).__dict__
@self.dashboard_be.get("/api/logs")
def get_logs():
return Response(
status="success",
message="",
data=self.dashboard_data.logs
).__dict__
@self.dashboard_be.get("/api/extensions")
def get_plugins():
"""
{
"name": "GoodPlugins",
"repo": "https://gitee.com/soulter/goodplugins",
"author": "soulter",
"desc": "一些好用的插件",
"version": "1.0"
}
"""
_plugin_resp = []
for plugin in self.dashboard_data.plugins:
_p = self.dashboard_data.plugins[plugin]
_t = {
"name": _p["info"]["name"],
"repo": '' if "repo" not in _p["info"] else _p["info"]["repo"],
"author": _p["info"]["author"],
"desc": _p["info"]["desc"],
"version": _p["info"]["version"]
}
_plugin_resp.append(_t)
return Response(
status="success",
message="",
data=_plugin_resp
).__dict__
@self.dashboard_be.post("/api/extensions/install")
def install_plugin():
post_data = request.json
repo_url = post_data["url"]
try:
gu.log(f"正在安装插件 {repo_url}", tag="可视化面板")
putil.install_plugin(repo_url, self.dashboard_data.plugins)
gu.log(f"安装插件 {repo_url} 成功", tag="可视化面板")
return Response(
status="success",
message="安装成功~",
data=None
).__dict__
except Exception as e:
return Response(
status="error",
message=e.__str__(),
data=None
).__dict__
@self.dashboard_be.post("/api/extensions/uninstall")
def uninstall_plugin():
post_data = request.json
plugin_name = post_data["name"]
try:
gu.log(f"正在卸载插件 {plugin_name}", tag="可视化面板")
putil.uninstall_plugin(plugin_name, self.dashboard_data.plugins)
gu.log(f"卸载插件 {plugin_name} 成功", tag="可视化面板")
return Response(
status="success",
message="卸载成功~",
data=None
).__dict__
except Exception as e:
return Response(
status="error",
message=e.__str__(),
data=None
).__dict__
@self.dashboard_be.post("/api/extensions/update")
def update_plugin():
post_data = request.json
plugin_name = post_data["name"]
try:
gu.log(f"正在更新插件 {plugin_name}", tag="可视化面板")
putil.update_plugin(plugin_name, self.dashboard_data.plugins)
gu.log(f"更新插件 {plugin_name} 成功", tag="可视化面板")
return Response(
status="success",
message="更新成功~",
data=None
).__dict__
except Exception as e:
return Response(
status="error",
message=e.__str__(),
data=None
).__dict__
def register(self, name: str):
def decorator(func):
self.funcs[name] = func
return func
return decorator
def run(self):
ip_address = gu.get_local_ip_addresses()
ip_str = f"http://{ip_address}:6185\n\thttp://localhost:6185"
gu.log(f"\n\n==================\n您可以访问:\n\n\t{ip_str}\n\n来登录可视化面板。\n注意: 所有配置项现已全量迁移至 cmd_config.json 文件下。您可以登录可视化面板在线修改配置。\n==================\n\n", tag="可视化面板")
# self.dashboard_be.run(host="0.0.0.0", port=6185)
http_server = make_server('0.0.0.0', 6185, self.dashboard_be)
http_server.serve_forever()
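As a usage sketch of this removed server (not part of the original file): the register() decorator is how the config-save callback was wired in, and run() blocks on the Werkzeug server. The handler body below is illustrative.

dashboard_data = DashBoardData(stats={}, configs={}, logs={}, plugins=[])
dashboard = AstrBotDashBoard(dashboard_data)

@dashboard.register("post_configs")
def on_post_configs(post_configs: dict):
    # Illustrative handler; in the project this is where the posted config gets persisted.
    print("received", len(post_configs.get("data", [])), "top-level config entries")

dashboard.run()  # serves the panel on 0.0.0.0:6185 until interrupted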

View File

@@ -1,5 +0,0 @@
# helloworld
QQChannelChatGPT项目的测试插件
A test plugin for QQChannelChatGPT plugin feature

View File

@@ -1,66 +0,0 @@
from nakuru.entities.components import *
from nakuru import (
GroupMessage,
FriendMessage
)
from botpy.message import Message, DirectMessage
from model.platform.qq import QQ
from cores.qqbot.global_object import (
AstrMessageEvent,
CommandResult
)
'''
注意改插件名噢!格式:XXXPlugin 或 Main
小提示:把此模板仓库 fork 之后 clone 到机器人文件夹下的 addons/plugins/ 目录下,然后用 Pycharm/VSC 等工具打开可获更棒的编程体验(自动补全等)
'''
class HelloWorldPlugin:
"""
初始化函数, 可以选择直接pass
"""
def __init__(self) -> None:
print("hello, world!")
"""
机器人程序会调用此函数。
返回规范: bool: 插件是否响应该消息 (所有的消息均会调用每一个载入的插件, 如果不响应, 则应返回 False)
Tuple: None 或者长度为 3 的元组。如果不响应, 返回 None;如果响应, 第 1 个参数为指令是否调用成功, 第 2 个参数为返回的消息链列表, 第 3 个参数为指令名称
例子:一个名为"yuanshen"的插件;当接收到消息为“原神 可莉”时, 如果不想要处理此消息, 则返回 False, None;如果想要处理但是执行失败了, 返回 True, tuple([False, "请求失败。", "yuanshen"]);执行成功了, 返回 True, tuple([True, "结果文本", "yuanshen"])
"""
def run(self, ame: AstrMessageEvent):
if ame.message_str == "helloworld":
# return True, tuple([True, "Hello World!!", "helloworld"])
return CommandResult(
hit=True,
success=True,
message_chain=[Plain("Hello World!!")],
command_name="helloworld"
)
else:
return CommandResult(
hit=False,
success=False,
message_chain=None,
command_name=None
)
"""
插件元信息。
当用户输入 plugin v 插件名称 时,会调用此函数,返回帮助信息。
返回参数要求(必填)dict{
"name": str, # 插件名称
"desc": str, # 插件简短描述
"help": str, # 插件帮助信息
"version": str, # 插件版本
"author": str, # 插件作者
"repo": str, # 插件仓库地址 [ 可选 ]
"homepage": str, # 插件主页 [ 可选 ]
}
"""
def info(self):
return {
"name": "helloworld",
"desc": "测试插件",
"help": "测试插件, 回复 helloworld 即可触发",
"version": "v1.2",
"author": "Soulter"
}
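A quick driver sketch for the old plugin contract described in the docstrings above. The SimpleNamespace event is a stand-in, since a real AstrMessageEvent carries more fields; the loader loop in the project is more involved.

from types import SimpleNamespace

plugin = HelloWorldPlugin()
print(plugin.info()["name"], plugin.info()["version"])

fake_event = SimpleNamespace(message_str="helloworld")  # stand-in for AstrMessageEvent
res = plugin.run(fake_event)
print(res.hit, res.success)  # -> True True when the trigger word matches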

132
astrbot/bootstrap.py Normal file
View File

@@ -0,0 +1,132 @@
import asyncio
import traceback
import os
from astrbot.message.handler import MessageHandler
from astrbot.persist.helper import dbConn
from dashboard.server import AstrBotDashBoard
from model.command.manager import CommandManager
from model.command.internal_handler import InternalCommandHandler
from model.plugin.manager import PluginManager
from model.platform.manager import PlatformManager
from typing import Union
from type.types import Context
from type.config import VERSION
from SparkleLogging.utils.core import LogManager
from logging import Logger
from util.cmd_config import AstrBotConfig, try_migrate
from util.metrics import MetricUploader
from util.updator.astrbot_updator import AstrBotUpdator
logger: Logger = LogManager.GetLogger(log_name='astrbot')
class AstrBotBootstrap():
def __init__(self) -> None:
self.context = Context()
# load configs and ensure the backward compatibility
try_migrate()
self.config_helper = AstrBotConfig()
self.context.config_helper = self.config_helper
logger.info("AstrBot v" + VERSION)
# apply proxy settings
http_proxy = self.context.config_helper.http_proxy
https_proxy = self.context.config_helper.https_proxy
if http_proxy:
os.environ['HTTP_PROXY'] = http_proxy
if https_proxy:
os.environ['HTTPS_PROXY'] = https_proxy
os.environ['NO_PROXY'] = 'https://api.sgroup.qq.com'
if http_proxy and https_proxy:
logger.info(f"使用代理: {http_proxy}, {https_proxy}")
else:
logger.info("未使用代理。")
self.test_mode = os.environ.get('TEST_MODE', 'off') == 'on'
async def run(self):
self.command_manager = CommandManager()
self.plugin_manager = PluginManager(self.context)
self.updator = AstrBotUpdator()
self.cmd_handler = InternalCommandHandler(self.command_manager, self.plugin_manager)
self.db_conn_helper = dbConn()
# load llm provider
self.load_llm()
self.message_handler = MessageHandler(self.context, self.command_manager, self.db_conn_helper)
self.platform_manager = PlatformManager(self.context, self.message_handler)
self.dashboard = AstrBotDashBoard(self.context, plugin_manager=self.plugin_manager, astrbot_updator=self.updator)
self.metrics_uploader = MetricUploader(self.context)
self.context.metrics_uploader = self.metrics_uploader
self.context.updator = self.updator
self.context.plugin_updator = self.plugin_manager.updator
self.context.message_handler = self.message_handler
self.context.command_manager = self.command_manager
# load dashboard
self.dashboard.run_http_server()
dashboard_task = asyncio.create_task(self.dashboard.ws_server(), name="dashboard")
if self.test_mode:
return
# load plugins, plugins' commands.
self.load_plugins()
self.command_manager.register_from_pcb(self.context.plugin_command_bridge)
# load platforms
platform_tasks = self.load_platform()
# load metrics uploader
metrics_upload_task = asyncio.create_task(self.metrics_uploader.upload_metrics(), name="metrics-uploader")
tasks = [metrics_upload_task, dashboard_task, *platform_tasks, *self.context.ext_tasks]
tasks = [self.handle_task(task) for task in tasks]
await asyncio.gather(*tasks)
async def handle_task(self, task: Union[asyncio.Task, asyncio.Future]):
while True:
try:
result = await task
return result
except asyncio.CancelledError:
logger.info(f"{task.get_name()} 任务已取消。")
return
except Exception as e:
logger.error(traceback.format_exc())
logger.error(f"{task.get_name()} 任务发生错误。")
return
def load_llm(self):
f = False
llms = self.context.config_helper.llm
logger.info(f"加载 {len(llms)} 个 LLM Provider...")
for llm in llms:
if llm.enable:
if llm.name == "openai" and llm.key and llm.enable:
self.load_openai(llm)
f = True
logger.info(f"已启用 OpenAI API 支持。")
else:
logger.warn(f"未知的 LLM Provider: {llm.name}")
if f:
from model.command.openai_official_handler import OpenAIOfficialCommandHandler
self.openai_command_handler = OpenAIOfficialCommandHandler(self.command_manager)
self.openai_command_handler.set_provider(self.context.llms[0].llm_instance)
def load_openai(self, llm_config):
from model.provider.openai_official import ProviderOpenAIOfficial
inst = ProviderOpenAIOfficial(llm_config)
self.context.register_provider("internal_openai", inst)
def load_plugins(self):
self.plugin_manager.plugin_reload()
def load_platform(self):
platforms = self.platform_manager.load_platforms()
if not platforms:
logger.warning("未启用任何消息平台。")
return platforms
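The handle_task() wrapper above is what keeps one crashed coroutine from tearing down the whole gather. The same idea in a standalone, self-contained form (all names here are illustrative):

import asyncio
import traceback

async def supervise(task: asyncio.Task):
    try:
        return await task
    except asyncio.CancelledError:
        print(f"{task.get_name()} cancelled")
    except Exception:
        print(f"{task.get_name()} crashed:\n{traceback.format_exc()}")

async def main():
    async def worker(n: int) -> int:
        await asyncio.sleep(0.1 * n)
        if n == 2:
            raise RuntimeError("boom")  # simulate one platform task failing
        return n

    tasks = [asyncio.create_task(worker(n), name=f"worker-{n}") for n in (1, 2, 3)]
    results = await asyncio.gather(*(supervise(t) for t in tasks))
    print(results)  # [1, None, 3] -- the failure is logged, the others still finish

asyncio.run(main())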

View File

@@ -1,14 +1,16 @@
from aip import AipContentCensor
from util.cmd_config import BaiduAIPConfig
class BaiduJudge:
def __init__(self, baidu_configs) -> None:
if 'app_id' in baidu_configs and 'api_key' in baidu_configs and 'secret_key' in baidu_configs:
self.app_id = str(baidu_configs['app_id'])
self.api_key = baidu_configs['api_key']
self.secret_key = baidu_configs['secret_key']
self.client = AipContentCensor(self.app_id, self.api_key, self.secret_key)
else:
raise ValueError("Baidu configs error! 请填写百度内容审核服务相关配置!")
def __init__(self, baidu_configs: BaiduAIPConfig) -> None:
self.app_id = baidu_configs.app_id
self.api_key = baidu_configs.api_key
self.secret_key = baidu_configs.secret_key
self.client = AipContentCensor(self.app_id,
self.api_key,
self.secret_key)
def judge(self, text):
res = self.client.textCensorUserDefined(text)
if 'conclusionType' not in res:
@@ -23,4 +25,4 @@ class BaiduJudge:
for i in res['data']:
info += f"{i['msg']}\n"
info += "\n判断结果:"+res['conclusion']
return False, info
return False, info
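Usage of the refactored class, assuming BaiduAIPConfig is a simple config object exposing enable/app_id/api_key/secret_key and accepting them as keyword arguments (the credentials below are placeholders). judge() returns a (passed, detail) pair, as the message handler's usage shows.

cfg = BaiduAIPConfig(enable=True, app_id="YOUR_APP_ID",
                     api_key="YOUR_API_KEY", secret_key="YOUR_SECRET_KEY")
judge = BaiduJudge(cfg)
ok, info = judge.judge("待审核的文本")
if not ok:
    print("blocked:", info)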

286
astrbot/message/handler.py Normal file
View File

@@ -0,0 +1,286 @@
import time, json
import re, os
import asyncio
import traceback
import astrbot.message.unfit_words as uw
from typing import Dict
from astrbot.persist.helper import dbConn
from model.provider.provider import Provider
from model.command.manager import CommandManager
from type.message_event import AstrMessageEvent, MessageResult
from type.types import Context
from type.command import CommandResult
from SparkleLogging.utils.core import LogManager
from logging import Logger
from nakuru.entities.components import Image
from util.agent.func_call import FuncCall
import util.agent.web_searcher as web_searcher
from openai._exceptions import *
from openai.types.chat.chat_completion_message_tool_call import Function
logger: Logger = LogManager.GetLogger(log_name='astrbot')
class RateLimitHelper():
def __init__(self, context: Context) -> None:
self.user_rate_limit: Dict[int, int] = {}
rl = context.config_helper.platform_settings.rate_limit
self.rate_limit_time: int = rl.time
self.rate_limit_count: int = rl.count
self.user_frequency = {}
def check_frequency(self, session_id: str) -> bool:
'''
检查发言频率
'''
ts = int(time.time())
if session_id in self.user_frequency:
if ts-self.user_frequency[session_id]['time'] > self.rate_limit_time:
self.user_frequency[session_id]['time'] = ts
self.user_frequency[session_id]['count'] = 1
return True
else:
if self.user_frequency[session_id]['count'] >= self.rate_limit_count:
return False
else:
self.user_frequency[session_id]['count'] += 1
return True
else:
t = {'time': ts, 'count': 1}
self.user_frequency[session_id] = t
return True
class ContentSafetyHelper():
def __init__(self, context: Context) -> None:
self.baidu_judge = None
aip = context.config_helper.content_safety.baidu_aip
if aip.enable:
try:
from astrbot.message.baidu_aip_judge import BaiduJudge
self.baidu_judge = BaiduJudge(aip)
logger.info("已启用百度 AI 内容审核。")
except ImportError as e:
logger.error("检测到库依赖不完整,将不会启用百度 AI 内容审核。请先使用 pip 安装 `baidu_aip` 包。")
logger.error(e)
except BaseException as e:
logger.error("百度 AI 内容审核初始化失败。")
logger.error(e)
async def check_content(self, content: str) -> bool:
'''
检查文本内容是否合法
'''
for i in uw.unfit_words_q:
matches = re.match(i, content.strip(), re.I | re.M)
if matches:
return False
if self.baidu_judge != None:
check, msg = await asyncio.to_thread(self.baidu_judge.judge, content)
if not check:
logger.info(f"百度 AI 内容审核发现以下违规:{msg}")
return False
return True
def filter_content(self, content: str) -> str:
'''
过滤文本内容
'''
for i in uw.unfit_words_q:
content = re.sub(i, "*", content, flags=re.I)
return content
def baidu_check(self, content: str) -> bool:
'''
使用百度 AI 内容审核检查文本内容是否合法
'''
if self.baidu_judge != None:
check, msg = self.baidu_judge.judge(content)
if not check:
logger.info(f"百度 AI 内容审核发现以下违规:{msg}")
return False
return True
class MessageHandler():
def __init__(self, context: Context,
command_manager: CommandManager,
persist_manager: dbConn) -> None:
self.context = context
self.command_manager = command_manager
self.persist_manager = persist_manager
self.rate_limit_helper = RateLimitHelper(context)
self.content_safety_helper = ContentSafetyHelper(context)
self.llm_wake_prefix = self.context.config_helper.llm_settings.wake_prefix
if self.llm_wake_prefix:
self.llm_wake_prefix = self.llm_wake_prefix.strip()
self.provider = self.context.llms[0].llm_instance if len(self.context.llms) > 0 else None
self.reply_prefix = str(self.context.config_helper.platform_settings.reply_prefix)
self.llm_tools = FuncCall(self.provider)
def set_provider(self, provider: Provider):
self.provider = provider
async def handle(self, message: AstrMessageEvent, llm_provider: Provider = None) -> MessageResult:
'''
Handle the message event, including commands, plugins, etc.
`llm_provider`: the provider to use for LLM. If None, use the default provider
'''
msg_plain = message.message_str.strip()
provider = llm_provider if llm_provider else self.provider
if os.environ.get('TEST_MODE', 'off') != 'on':
self.persist_manager.record_message(message.platform.platform_name, message.session_id)
# TODO: this should be configurable
# if not message.message_str:
# return MessageResult("Hi~")
# check the rate limit
if not self.rate_limit_helper.check_frequency(message.message_obj.sender.user_id):
logger.warning(f"用户 {message.message_obj.sender.user_id} 的发言频率超过限制,已忽略。")
return
# remove the nick prefix
for nick in self.context.config_helper.wake_prefix:
if msg_plain.startswith(nick):
msg_plain = msg_plain.removeprefix(nick)
break
message.message_str = msg_plain
# scan candidate commands
cmd_res = await self.command_manager.scan_command(message, self.context)
if cmd_res:
assert(isinstance(cmd_res, CommandResult))
return MessageResult(
cmd_res.message_chain,
is_command_call=True,
use_t2i=cmd_res.is_use_t2i
)
# middlewares
for middleware in self.context.middlewares:
try:
logger.info(f"执行中间件 {middleware.origin}/{middleware.name}...")
await middleware.func(message, self.context)
except BaseException as e:
logger.error(f"中间件 {middleware.origin}/{middleware.name} 处理消息时发生异常:{e},跳过。")
logger.error(traceback.format_exc())
if message.only_command:
return
# next is the LLM part
# check if the message is a llm-wake-up command
if self.llm_wake_prefix and not msg_plain.startswith(self.llm_wake_prefix):
logger.debug(f"消息 `{msg_plain}` 没有以 LLM 唤醒前缀 `{self.llm_wake_prefix}` 开头,忽略。")
return
if not provider:
logger.debug("没有任何 LLM 可用,忽略。")
return
# check the content safety
if not await self.content_safety_helper.check_content(msg_plain):
return MessageResult("信息包含违规内容,由于机器人管理者开启内容安全审核,你的此条消息已被停止继续处理。")
image_url = None
for comp in message.message_obj.message:
if isinstance(comp, Image):
image_url = comp.url if comp.url else comp.file
break
try:
if not self.llm_tools.empty():
# tools-use
tool_use_flag = True
llm_result = await provider.text_chat(
prompt=msg_plain,
session_id=message.session_id,
tools=self.llm_tools.get_func()
)
if isinstance(llm_result, Function):
logger.debug(f"function-calling: {llm_result}")
func_obj = None
for i in self.llm_tools.func_list:
if i["name"] == llm_result.name:
func_obj = i["func_obj"]
break
if not func_obj:
return MessageResult("AstrBot Function-calling 异常:未找到请求的函数调用。")
try:
args = json.loads(llm_result.arguments)
args['ame'] = message
args['context'] = self.context
try:
cmd_res = await func_obj(**args)
except TypeError as e:
args.pop('ame')
args.pop('context')
cmd_res = await func_obj(**args)
if isinstance(cmd_res, CommandResult):
return MessageResult(
cmd_res.message_chain,
is_command_call=True,
use_t2i=cmd_res.is_use_t2i
)
elif isinstance(cmd_res, str):
return MessageResult(cmd_res)
elif not cmd_res:
return
else:
return MessageResult(f"AstrBot Function-calling 异常:调用:{llm_result} 时,返回了未知的返回值类型。")
except BaseException as e:
traceback.print_exc()
return MessageResult("AstrBot Function-calling 异常:" + str(e))
else:
return MessageResult(llm_result)
else:
# normal chat
tool_use_flag = False
llm_result = await provider.text_chat(
prompt=msg_plain,
session_id=message.session_id,
image_url=image_url
)
except BadRequestError as e:
if tool_use_flag:
# seems like the model don't support function-calling
logger.error(f"error: {e}. Using local function-calling implementation")
try:
# use local function-calling implementation
args = {
'question': llm_result,
'func_definition': self.llm_tools.func_dump(),
}
_, has_func = await self.llm_tools.func_call(**args)
if not has_func:
# normal chat
llm_result = await provider.text_chat(
prompt=msg_plain,
session_id=message.session_id,
image_url=image_url
)
except BaseException as e:
logger.error(traceback.format_exc())
return CommandResult("AstrBot Function-calling 异常:" + str(e))
except BaseException as e:
logger.error(traceback.format_exc())
logger.error(f"LLM 调用失败。")
return MessageResult("AstrBot 请求 LLM 资源失败:" + str(e))
# concatenate reply prefix
if self.reply_prefix:
llm_result = self.reply_prefix + llm_result
# mask unsafe content
llm_result = self.content_safety_helper.filter_content(llm_result)
check = self.content_safety_helper.baidu_check(llm_result)
if not check:
return MessageResult("LLM 输出的信息包含违规内容,由于机器人管理者开启了内容安全审核,该条消息已拦截。")
return MessageResult(llm_result)
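The RateLimitHelper at the top of this file implements a fixed-window counter per session. Here is that window logic in isolation, with example thresholds standing in for rate_limit.time / rate_limit.count:

import time

WINDOW_SECONDS = 60   # example value for rate_limit.time
MAX_COUNT = 5         # example value for rate_limit.count
buckets: dict[str, dict] = {}

def allow(session_id: str) -> bool:
    now = int(time.time())
    bucket = buckets.get(session_id)
    if bucket is None or now - bucket["time"] > WINDOW_SECONDS:
        buckets[session_id] = {"time": now, "count": 1}  # open a fresh window
        return True
    if bucket["count"] >= MAX_COUNT:
        return False  # still inside the window and over the limit
    bucket["count"] += 1
    return True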

View File

@@ -0,0 +1,114 @@
# 网上搜集的。
# 来源词拦截
# 支持正则
unfit_words_q = ["习近平",
"胡锦涛",
"江泽民",
"温家宝",
"李克强",
"李长春",
"毛泽东",
"邓小平",
"周恩来",
"社会主义",
"共产党",
"共产主义",
"大陆官方",
"北京政权",
"中华帝国",
"中国政府",
"共狗",
"六四事件",
"天安门",
"六四",
"政治局常委",
"学潮",
"八九",
"二十大",
"民进党",
"台独",
"台湾独立",
"台湾国",
"国民党",
"台湾民国",
"中华民国",
"pornhub",
"Pornhub",
"作爱",
"做爱",
"性交",
"自慰",
"阴茎",
"淫妇",
"肛交",
"交配",
"性关系",
"性活动",
"色情",
"色图",
"裸体",
"小穴",
"淫荡",
"性爱",
"港独",
"法轮功",
"六四"]
# 回复词过滤
unfit_words = ["习近平",
"胡锦涛",
"江泽民",
"温家宝",
"李克强",
"李长春",
"毛泽东",
"邓小平",
"周恩来",
"社会主义",
"共产党",
"共产主义",
"大陆官方",
"北京政权",
"中华帝国",
"中国政府",
"共狗",
"六四事件",
"天安门",
"六四",
"政治局常委",
"学潮",
"八九",
"二十大",
"民进党",
"台独",
"台湾独立",
"台湾国",
"国民党",
"台湾民国",
"中华民国",
"pornhub",
"Pornhub",
"作爱",
"做爱",
"性交",
"自慰",
"阴茎",
"淫妇",
"肛交",
"交配",
"性关系",
"性活动",
"色情",
"色图",
"涩图",
"裸体",
"小穴",
"淫荡",
"性爱",
"中华人民共和国",
"党中央",
"中央军委主席",
"台湾",
"港独",
"法轮功",
"PRC"]

View File

@@ -1,51 +1,28 @@
import sqlite3
import yaml
import os
import shutil
import time
from typing import Tuple
class dbConn():
def __init__(self):
# 读取参数,并支持中文
conn = sqlite3.connect("data.db")
conn.text_factory=str
self.conn = conn
c = conn.cursor()
c.execute(
'''
CREATE TABLE IF NOT EXISTS tb_session(
qq_id VARCHAR(32) PRIMARY KEY,
history TEXT
);
'''
)
c.execute(
'''
CREATE TABLE IF NOT EXISTS tb_stat_session(
platform VARCHAR(32),
session_id VARCHAR(32),
cnt INTEGER
);
'''
)
c.execute(
'''
CREATE TABLE IF NOT EXISTS tb_stat_message(
ts INTEGER,
cnt INTEGER
);
'''
)
c.execute(
'''
CREATE TABLE IF NOT EXISTS tb_stat_platform(
ts INTEGER,
platform VARCHAR(32),
cnt INTEGER
);
'''
)
db_path = "data/data.db"
if os.path.exists("data.db"):
shutil.copy("data.db", db_path)
with open(os.path.dirname(__file__) + "/initialization.sql", "r") as f:
sql = f.read()
self.conn = sqlite3.connect(db_path)
self.conn.text_factory = str
c = self.conn.cursor()
c.executescript(sql)
self.conn.commit()
conn.commit()
def record_message(self, platform, session_id):
curr_ts = int(time.time())
self.increment_stat_session(platform, session_id, 1)
self.increment_stat_message(curr_ts, 1)
self.increment_stat_platform(curr_ts, platform, 1)
def insert_session(self, qq_id, history):
conn = self.conn
@@ -76,7 +53,7 @@ class dbConn():
''', (qq_id, )
)
return c.fetchone()
def get_all_session(self):
conn = self.conn
c = conn.cursor()
@@ -86,7 +63,7 @@ class dbConn():
'''
)
return c.fetchall()
def check_session(self, qq_id):
conn = self.conn
c = conn.cursor()
@@ -107,7 +84,6 @@ class dbConn():
)
conn.commit()
def increment_stat_session(self, platform, session_id, cnt):
# if not exist, insert
conn = self.conn
@@ -137,7 +113,7 @@ class dbConn():
''', (platform, session_id)
)
return c.fetchone() is not None
def get_all_stat_session(self):
conn = self.conn
c = conn.cursor()
@@ -147,7 +123,7 @@ class dbConn():
'''
)
return c.fetchall()
def get_session_cnt_total(self):
conn = self.conn
c = conn.cursor()
@@ -157,7 +133,7 @@ class dbConn():
'''
)
return c.fetchone()[0]
def increment_stat_message(self, ts, cnt):
# 以一个小时为单位。ts的单位是秒。
# 找到最近的一个小时,如果没有,就插入
@@ -197,7 +173,7 @@ class dbConn():
return True, ts
else:
return False, ts
def get_last_24h_stat_message(self):
# 获取最近24小时的消息统计
conn = self.conn
@@ -208,7 +184,7 @@ class dbConn():
''', (time.time() - 86400, )
)
return c.fetchall()
def get_message_cnt_total(self) -> int:
conn = self.conn
c = conn.cursor()
@@ -258,7 +234,7 @@ class dbConn():
return True, ts
else:
return False, ts
def get_last_24h_stat_platform(self):
# 获取最近24小时的消息统计
conn = self.conn
@@ -269,7 +245,7 @@ class dbConn():
''', (time.time() - 86400, )
)
return c.fetchall()
def get_platform_cnt_total(self) -> int:
conn = self.conn
c = conn.cursor()
@@ -291,4 +267,3 @@ class dbConn():
def close(self):
self.conn.close()
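With the schema moved into initialization.sql, day-to-day use of dbConn stays the same. A minimal sketch, assuming the data/ directory already exists:

db = dbConn()
db.record_message("gocq", "group-123456")   # bumps the session/message/platform counters
print(db.get_message_cnt_total())           # total messages recorded so far
print(db.get_last_24h_stat_message())       # (ts, cnt) rows from the last 24 hours
db.close()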

View File

@@ -0,0 +1,18 @@
CREATE TABLE IF NOT EXISTS tb_session(
qq_id VARCHAR(32) PRIMARY KEY,
history TEXT
);
CREATE TABLE IF NOT EXISTS tb_stat_session(
platform VARCHAR(32),
session_id VARCHAR(32),
cnt INTEGER
);
CREATE TABLE IF NOT EXISTS tb_stat_message(
ts INTEGER,
cnt INTEGER
);
CREATE TABLE IF NOT EXISTS tb_stat_platform(
ts INTEGER,
platform VARCHAR(32),
cnt INTEGER
);

View File

@@ -1,137 +0,0 @@
# 如果你不知道怎么部署,请查看:https://soulter.top/posts/qpdg.html
# 不一定需要 key 了。如果你没有 key,但有 openAI 账号或者必应账号,可以考虑使用下面的逆向库
###############平台设置#################
# QQ频道机器人
# QQ开放平台的appid和令牌
# q.qq.com
# enable 为 true 则启用,false 则不启用
qqbot:
enable: true
appid:
token:
# QQ机器人
# enable 为 true 则启用,false 则不启用
# 需要安装GO-CQHTTP配合使用。
# 文档:https://docs.go-cqhttp.org/
# 请将 go-cqhttp 配置文件的 servers 部分粘贴为以下内容,否则无法使用
# 请先启动 go-cqhttp,再启动本程序
#
# servers:
# - http:
# host: 127.0.0.1
# version: 0
# port: 5700
# timeout: 5
# - ws:
# address: 127.0.0.1:6700
# middlewares:
# <<: *default
gocqbot:
enable: false
# 设置是否一个人一个会话
uniqueSessionMode: false
# QChannelBot 的版本。请勿修改此字段,否则可能产生一些 bug
version: 3.0
# [Beta] 转储历史记录时间间隔(分钟)
dump_history_interval: 10
# 一个用户只能在time秒内发送count条消息
limit:
time: 60
count: 5
# 公告
notice: "此机器人由Github项目QQChannelChatGPT驱动。"
# 是否打开私信功能
# 设置为true则频道成员可以私聊机器人。
# 设置为false则频道成员不能私聊机器人。
direct_message_mode: true
# 系统代理
# http_proxy: http://localhost:7890
# https_proxy: http://localhost:7890
# 自定义回复前缀,如 [Rev] 或其他。务必加引号,以防止不必要的 bug。
reply_prefix:
openai_official: "[GPT]"
rev_chatgpt: "[Rev]"
rev_edgegpt: "[RevBing]"
# 百度内容审核服务
# 新用户免费5万次调用。https://cloud.baidu.com/doc/ANTIPORN/index.html
baidu_aip:
enable: false
app_id:
api_key:
secret_key:
###############语言模型设置#################
# OpenAI官方API
# 注意:已支持多 key 自动切换,方法:
# key:
# - sk-xxxxxx
# - sk-xxxxxx
# 在下方非注释的地方使用以上格式
# 关于api_base可以使用一些云函数如腾讯、阿里来避免国内被墙的问题。
# 详见:
# https://github.com/Ice-Hazymoon/openai-scf-proxy
# https://github.com/Soulter/QQChannelChatGPT/issues/42
# 设置为none则表示使用官方默认api地址
openai:
key:
-
api_base: none
# 这里是 GPT 配置,语言模型默认使用 gpt-3.5-turbo
chatGPTConfigs:
model: gpt-3.5-turbo
max_tokens: 3000
temperature: 0.9
top_p: 1
frequency_penalty: 0
presence_penalty: 0
total_tokens_limit: 5000
# 逆向文心一言【暂时不可用,请勿使用】
rev_ernie:
enable: false
# 逆向New Bing
# 需要在项目根目录下创建cookies.json并粘贴cookies进去。
# 详见https://soulter.top/posts/qpdg.html
rev_edgegpt:
enable: false
# 逆向ChatGPT库
# https://github.com/acheong08/ChatGPT
# 优点:免费(无免费额度限制);
# 缺点:速度相对慢。OpenAI 速率限制:免费帐户每小时 50 个请求。您可以通过多帐户循环来绕过它
# enable 设置为 true 后,将会停止使用上面正常的官方 API 调用,而使用本逆向项目
#
# 多账户可以保证每个请求都能得到及时的回复。
# 关于account的格式
# account:
# - email: 第1个账户
# password: 第1个账户密码
# - email: 第2个账户
# password: 第2个账户密码
# - ....
# 支持使用access_token登录
# 例:
# - session_token: xxxxx
# - access_token: xxxx
# 请严格按照上面这个格式填写。
# 逆向 ChatGPT 库的 email-password 登录方式不工作,建议使用 access_token 登录
# 获取access_token的方法详见https://soulter.top/posts/qpdg.html
rev_ChatGPT:
enable: false
account:
- access_token:
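For context on the deleted config.yaml above: it was consumed as one nested dict (initBot() further down indexes it with the same keys). A sketch of loading it, assuming PyYAML:

import yaml

with open("config.yaml", "r", encoding="utf-8") as f:
    cfg = yaml.safe_load(f)

if cfg["qqbot"]["enable"]:
    print("QQ channel bot appid:", cfg["qqbot"]["appid"])
print("rate limit:", cfg["limit"]["count"], "msgs per", cfg["limit"]["time"], "s")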

View File

@@ -1,23 +0,0 @@
'''
监测机器性能
- Bot 内存使用量
- CPU 占用率
'''
import psutil
from cores.qqbot.global_object import GlobalObject
import time
def run_monitor(global_object: GlobalObject):
'''运行监测'''
start_time = time.time()
while True:
stat = global_object.dashboard_data.stats
# 程序占用的内存大小
mem = psutil.Process().memory_info().rss / 1024 / 1024 # MB
stat['sys_perf'] = {
'memory': mem,
'cpu': psutil.cpu_percent()
}
stat['sys_start_time'] = start_time
time.sleep(30)

View File

@@ -1,937 +0,0 @@
import botpy
from botpy.message import Message, DirectMessage
import re
import json
import threading
import asyncio
import time
import requests
import util.unfit_words as uw
import os
import sys
from cores.qqbot.personality import personalities
from addons.baidu_aip_judge import BaiduJudge
from model.platform.qqchan import QQChan, NakuruGuildMember, NakuruGuildMessage
from model.platform.qq import QQ
from model.platform.qqgroup import (
UnofficialQQBotSDK,
Event as QQEvent,
Message as QQMessage,
MessageChain,
PlainText
)
from nakuru import (
CQHTTP,
GroupMessage,
GroupMemberIncrease,
FriendMessage,
GuildMessage,
Notify
)
from nakuru.entities.components import Plain,At,Image
from model.provider.provider import Provider
from model.command.command import Command
from util import general_utils as gu
from util.cmd_config import CmdConfig as cc
import util.function_calling.gplugin as gplugin
import util.plugin_util as putil
from PIL import Image as PILImage
import io
import traceback
from . global_object import GlobalObject
from typing import Union, Callable
from addons.dashboard.helper import DashBoardHelper
from addons.dashboard.server import DashBoardData
from cores.monitor.perf import run_monitor
from cores.database.conn import dbConn
# 缓存的会话
session_dict = {}
# 统计信息
count = {}
# 统计信息
stat_file = ''
# 用户发言频率
user_frequency = {}
# 时间默认值
frequency_time = 60
# 计数默认值
frequency_count = 2
# 公告(可自定义):
announcement = ""
# 机器人私聊模式
direct_message_mode = True
# 版本
version = '3.1.0'
# 语言模型
REV_CHATGPT = 'rev_chatgpt'
OPENAI_OFFICIAL = 'openai_official'
REV_ERNIE = 'rev_ernie'
REV_EDGEGPT = 'rev_edgegpt'
NONE_LLM = 'none_llm'
chosen_provider = None
# 语言模型对象
llm_instance: dict[str, Provider] = {}
llm_command_instance: dict[str, Command] = {}
# 百度内容审核实例
baidu_judge = None
# 关键词回复
keywords = {}
# QQ频道机器人
qqchannel_bot: QQChan = None
PLATFORM_QQCHAN = 'qqchan'
qqchan_loop = None
client = None
# QQ群机器人
PLATFROM_QQBOT = 'qqbot'
# CLI
PLATFORM_CLI = 'cli'
# 加载默认配置
cc.init_attributes("qq_forward_threshold", 200)
cc.init_attributes("qq_welcome", "欢迎加入本群!\n欢迎给https://github.com/Soulter/QQChannelChatGPT项目一个Star😊~\n输入help查看帮助~\n")
cc.init_attributes("bing_proxy", "")
cc.init_attributes("qq_pic_mode", False)
cc.init_attributes("rev_chatgpt_model", "")
cc.init_attributes("rev_chatgpt_plugin_ids", [])
cc.init_attributes("rev_chatgpt_PUID", "")
cc.init_attributes("rev_chatgpt_unverified_plugin_domains", [])
cc.init_attributes("gocq_host", "127.0.0.1")
cc.init_attributes("gocq_http_port", 5700)
cc.init_attributes("gocq_websocket_port", 6700)
cc.init_attributes("gocq_react_group", True)
cc.init_attributes("gocq_react_guild", True)
cc.init_attributes("gocq_react_friend", True)
cc.init_attributes("gocq_react_group_increase", True)
cc.init_attributes("gocq_qqchan_admin", "")
cc.init_attributes("other_admins", [])
cc.init_attributes("CHATGPT_BASE_URL", "")
cc.init_attributes("qqbot_appid", "")
cc.init_attributes("qqbot_secret", "")
cc.init_attributes("llm_env_prompt", "> hint: 末尾根据内容和心情添加 1-2 个emoji")
cc.init_attributes("default_personality_str", "")
cc.init_attributes("openai_image_generate", {
"model": "dall-e-3",
"size": "1024x1024",
"style": "vivid",
"quality": "standard",
})
cc.init_attributes("http_proxy", "")
cc.init_attributes("https_proxy", "")
cc.init_attributes("dashboard_username", "")
cc.init_attributes("dashboard_password", "")
# cc.init_attributes(["qq_forward_mode"], False)
# QQ机器人
gocq_bot = None
PLATFORM_GOCQ = 'gocq'
gocq_app = CQHTTP(
host=cc.get("gocq_host", "127.0.0.1"),
port=cc.get("gocq_websocket_port", 6700),
http_port=cc.get("gocq_http_port", 5700),
)
qq_bot: UnofficialQQBotSDK = UnofficialQQBotSDK(
cc.get("qqbot_appid", None),
cc.get("qqbot_secret", None)
)
gocq_loop: asyncio.AbstractEventLoop = None
qqbot_loop: asyncio.AbstractEventLoop = None
# 全局对象
_global_object: GlobalObject = None
def new_sub_thread(func, args=()):
thread = threading.Thread(target=_runner, args=(func, args), daemon=True)
thread.start()
def _runner(func: Callable, args: tuple):
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
loop.run_until_complete(func(*args))
loop.close()
# 统计消息数据
def upload():
global version, gocq_bot, qqchannel_bot
while True:
addr = ''
addr_ip = ''
session_dict_dump = '{}'
try:
addr = requests.get('http://myip.ipip.net', timeout=5).text
addr_ip = re.findall(r'\d+\.\d+\.\d+\.\d+', addr)[0]
except BaseException as e:
pass
try:
gocq_cnt = 0
qqchan_cnt = 0
if gocq_bot is not None:
gocq_cnt = gocq_bot.get_cnt()
if qqchannel_bot is not None:
qqchan_cnt = qqchannel_bot.get_cnt()
o = {"cnt_total": _global_object.cnt_total,"admin": _global_object.admin_qq,"addr": addr, 's': session_dict_dump}
o_j = json.dumps(o)
res = {"version": version, "count": gocq_cnt+qqchan_cnt, "ip": addr_ip, "others": o_j, "cntqc": qqchan_cnt, "cntgc": gocq_cnt}
gu.log(res, gu.LEVEL_DEBUG, tag="Upload", fg = gu.FG_COLORS['yellow'], bg=gu.BG_COLORS['black'])
resp = requests.post('https://api.soulter.top/upload', data=json.dumps(res), timeout=5)
# print(resp.text)
if resp.status_code == 200:
ok = resp.json()
if ok['status'] == 'ok':
_global_object.cnt_total = 0
if gocq_bot is not None:
gocq_cnt = gocq_bot.set_cnt(0)
if qqchannel_bot is not None:
qqchan_cnt = qqchannel_bot.set_cnt(0)
except BaseException as e:
gu.log("上传统计信息时出现错误: " + str(e), gu.LEVEL_ERROR, tag="Upload")
pass
time.sleep(10*60)
# 语言模型选择
def provider_chooser(cfg):
l = []
if 'rev_ChatGPT' in cfg and cfg['rev_ChatGPT']['enable']:
l.append('rev_chatgpt')
if 'rev_ernie' in cfg and cfg['rev_ernie']['enable']:
l.append('rev_ernie')
if 'rev_edgegpt' in cfg and cfg['rev_edgegpt']['enable']:
l.append('rev_edgegpt')
if 'openai' in cfg and len(cfg['openai']['key']) > 0 and cfg['openai']['key'][0] is not None:
l.append('openai_official')
return l
'''
初始化机器人
'''
def initBot(cfg):
global llm_instance, llm_command_instance
global baidu_judge, chosen_provider
global frequency_count, frequency_time, announcement, direct_message_mode
global keywords, _global_object
# 迁移旧配置
gu.try_migrate_config(cfg)
# 使用新配置
cfg = cc.get_all()
_event_loop = asyncio.new_event_loop()
asyncio.set_event_loop(_event_loop)
# 初始化 global_object
_global_object = GlobalObject()
_global_object.base_config = cfg
_global_object.stat['session'] = {}
_global_object.stat['message'] = {}
_global_object.stat['platform'] = {}
if 'reply_prefix' in cfg:
# 适配旧版配置
if isinstance(cfg['reply_prefix'], dict):
for k in cfg['reply_prefix']:
_global_object.reply_prefix = cfg['reply_prefix'][k]
break
else:
_global_object.reply_prefix = cfg['reply_prefix']
# 语言模型提供商
gu.log("--------加载语言模型--------", gu.LEVEL_INFO, fg=gu.FG_COLORS['yellow'])
prov = provider_chooser(cfg)
if REV_CHATGPT in prov:
gu.log("- 逆向ChatGPT库 -", gu.LEVEL_INFO)
if cfg['rev_ChatGPT']['enable']:
if 'account' in cfg['rev_ChatGPT']:
from model.provider.rev_chatgpt import ProviderRevChatGPT
from model.command.rev_chatgpt import CommandRevChatGPT
llm_instance[REV_CHATGPT] = ProviderRevChatGPT(cfg['rev_ChatGPT'], base_url=cc.get("CHATGPT_BASE_URL", None))
llm_command_instance[REV_CHATGPT] = CommandRevChatGPT(llm_instance[REV_CHATGPT], _global_object)
chosen_provider = REV_CHATGPT
else:
input("[System-err] 请退出本程序, 然后在配置文件中填写rev_ChatGPT相关配置")
if REV_EDGEGPT in prov:
gu.log("- New Bing -", gu.LEVEL_INFO)
if not os.path.exists('./cookies.json'):
input("[System-err] 导入Bing模型时发生错误, 没有找到cookies文件或者cookies文件放置位置错误。windows启动器启动的用户请把cookies.json文件放到和启动器相同的目录下。\n如何获取请看https://github.com/Soulter/QQChannelChatGPT仓库介绍。")
else:
if cfg['rev_edgegpt']['enable']:
try:
from model.provider.rev_edgegpt import ProviderRevEdgeGPT
from model.command.rev_edgegpt import CommandRevEdgeGPT
llm_instance[REV_EDGEGPT] = ProviderRevEdgeGPT()
llm_command_instance[REV_EDGEGPT] = CommandRevEdgeGPT(llm_instance[REV_EDGEGPT], _global_object)
chosen_provider = REV_EDGEGPT
except BaseException as e:
print(traceback.format_exc())
gu.log("加载Bing模型时发生错误, 请检查1. cookies文件是否正确放置 2. 是否设置了代理(梯子)。", gu.LEVEL_ERROR, max_len=60)
if OPENAI_OFFICIAL in prov:
gu.log("- OpenAI官方 -", gu.LEVEL_INFO)
if cfg['openai']['key'] is not None and cfg['openai']['key'] != [None]:
from model.provider.openai_official import ProviderOpenAIOfficial
from model.command.openai_official import CommandOpenAIOfficial
llm_instance[OPENAI_OFFICIAL] = ProviderOpenAIOfficial(cfg['openai'])
llm_command_instance[OPENAI_OFFICIAL] = CommandOpenAIOfficial(llm_instance[OPENAI_OFFICIAL], _global_object)
chosen_provider = OPENAI_OFFICIAL
gu.log("--------加载配置--------", gu.LEVEL_INFO, fg=gu.FG_COLORS['yellow'])
# 得到关键词
if os.path.exists("keyword.json"):
with open("keyword.json", 'r', encoding='utf-8') as f:
keywords = json.load(f)
# 检查provider设置偏好
p = cc.get("chosen_provider", None)
if p is not None and p in llm_instance:
chosen_provider = p
gu.log(f"将使用 {chosen_provider} 语言模型。", gu.LEVEL_INFO)
# 百度内容审核
if 'baidu_aip' in cfg and 'enable' in cfg['baidu_aip'] and cfg['baidu_aip']['enable']:
try:
baidu_judge = BaiduJudge(cfg['baidu_aip'])
gu.log("百度内容审核初始化成功", gu.LEVEL_INFO)
except BaseException as e:
gu.log("百度内容审核初始化失败", gu.LEVEL_ERROR)
threading.Thread(target=upload, daemon=True).start()
# 得到私聊模式配置
if 'direct_message_mode' in cfg:
direct_message_mode = cfg['direct_message_mode']
gu.log("私聊功能: "+str(direct_message_mode), gu.LEVEL_INFO)
# 得到发言频率配置
if 'limit' in cfg:
gu.log("发言频率配置: "+str(cfg['limit']), gu.LEVEL_INFO)
if 'count' in cfg['limit']:
frequency_count = cfg['limit']['count']
if 'time' in cfg['limit']:
frequency_time = cfg['limit']['time']
# 得到公告配置
if 'notice' in cfg:
if cc.get("qq_welcome", None) != None and cfg['notice'] == '此机器人由Github项目QQChannelChatGPT驱动。':
announcement = cc.get("qq_welcome", None)
else:
announcement = cfg['notice']
gu.log("公告配置: " + announcement, gu.LEVEL_INFO)
try:
if 'uniqueSessionMode' in cfg and cfg['uniqueSessionMode']:
_global_object.uniqueSession = True
else:
_global_object.uniqueSession = False
gu.log("独立会话: "+str(_global_object.uniqueSession), gu.LEVEL_INFO)
except BaseException as e:
gu.log("独立会话配置错误: "+str(e), gu.LEVEL_ERROR)
gu.log(f"QQ开放平台AppID: {cfg['qqbot']['appid']} 令牌: {cfg['qqbot']['token']}")
if chosen_provider is None:
gu.log("检测到没有启动任何语言模型。", gu.LEVEL_CRITICAL)
nick_qq = cc.get("nick_qq", None)
if nick_qq == None:
nick_qq = ("ai","!","")
if isinstance(nick_qq, str):
nick_qq = (nick_qq,)
if isinstance(nick_qq, list):
nick_qq = tuple(nick_qq)
_global_object.nick = nick_qq
thread_inst = None
gu.log("--------加载插件--------", gu.LEVEL_INFO, fg=gu.FG_COLORS['yellow'])
# 加载插件
_command = Command(None, _global_object)
ok, err = putil.plugin_reload(_global_object.cached_plugins)
if ok:
gu.log("加载插件完成", gu.LEVEL_INFO)
else:
gu.log(err, gu.LEVEL_ERROR)
if chosen_provider is None:
llm_command_instance[NONE_LLM] = _command
chosen_provider = NONE_LLM
gu.log("--------加载机器人平台--------", gu.LEVEL_INFO, fg=gu.FG_COLORS['yellow'])
admin_qq = cc.get('admin_qq', None)
admin_qqchan = cc.get('admin_qqchan', None)
if admin_qq == None:
gu.log("未设置管理者QQ号(管理者才能使用update/plugin等指令),如需设置,请编辑 cmd_config.json 文件", gu.LEVEL_WARNING)
if admin_qqchan == None:
gu.log("未设置管理者QQ频道用户号(管理者才能使用update/plugin等指令),如需设置,请编辑 cmd_config.json 文件。可在频道发送指令 !myid 获取", gu.LEVEL_WARNING)
_global_object.admin_qq = admin_qq
_global_object.admin_qqchan = admin_qqchan
global qq_bot, qqbot_loop
qqbot_loop = asyncio.new_event_loop()
if cc.get("qqbot_appid", '') != '' and cc.get("qqbot_secret", '') != '':
gu.log("- 启用QQ群机器人 -", gu.LEVEL_INFO)
thread_inst = threading.Thread(target=run_qqbot, args=(qqbot_loop, qq_bot,), daemon=True)
thread_inst.start()
# GOCQ
global gocq_bot
if 'gocqbot' in cfg and cfg['gocqbot']['enable']:
gu.log("- 启用QQ机器人 -", gu.LEVEL_INFO)
global gocq_app, gocq_loop
gocq_loop = asyncio.new_event_loop()
gocq_bot = QQ(True, cc, gocq_loop)
thread_inst = threading.Thread(target=run_gocq_bot, args=(gocq_loop, gocq_bot, gocq_app), daemon=True)
thread_inst.start()
else:
gocq_bot = QQ(False)
_global_object.platform_qq = gocq_bot
gu.log("机器人部署教程: https://github.com/Soulter/QQChannelChatGPT/wiki/", gu.LEVEL_INFO, fg=gu.FG_COLORS['yellow'])
gu.log("如果有任何问题, 请在 https://github.com/Soulter/QQChannelChatGPT 上提交 issue 或加群 322154837", gu.LEVEL_INFO, fg=gu.FG_COLORS['yellow'])
gu.log("请给 https://github.com/Soulter/QQChannelChatGPT 点个 star!", gu.LEVEL_INFO, fg=gu.FG_COLORS['yellow'])
# QQ频道
if 'qqbot' in cfg and cfg['qqbot']['enable']:
gu.log("- 启用QQ频道机器人 -", gu.LEVEL_INFO)
global qqchannel_bot, qqchan_loop
qqchannel_bot = QQChan()
qqchan_loop = asyncio.new_event_loop()
_global_object.platform_qqchan = qqchannel_bot
thread_inst = threading.Thread(target=run_qqchan_bot, args=(cfg, qqchan_loop, qqchannel_bot), daemon=True)
thread_inst.start()
# thread.join()
if thread_inst == None:
gu.log("没有启用/成功启用任何机器人平台", gu.LEVEL_CRITICAL)
default_personality_str = cc.get("default_personality_str", "")
if default_personality_str == "":
_global_object.default_personality = None
else:
_global_object.default_personality = {
"name": "default",
"prompt": default_personality_str,
}
# 初始化dashboard
_global_object.dashboard_data = DashBoardData(
stats={},
configs={},
logs={},
plugins=_global_object.cached_plugins,
)
dashboard_helper = DashBoardHelper(_global_object.dashboard_data, config=cc.get_all())
dashboard_thread = threading.Thread(target=dashboard_helper.run, daemon=True)
dashboard_thread.start()
# 运行 monitor
threading.Thread(target=run_monitor, args=(_global_object,), daemon=False).start()
gu.log("🎉 项目启动完成。")
# asyncio.get_event_loop().run_until_complete(cli())
dashboard_thread.join()
async def cli():
time.sleep(1)
while True:
try:
prompt = input(">>> ")
if prompt == "":
continue
ngm = await cli_pack_message(prompt)
await oper_msg(ngm, True, PLATFORM_CLI)
except EOFError:
return
async def cli_pack_message(prompt: str) -> NakuruGuildMessage:
ngm = NakuruGuildMessage()
ngm.channel_id = 6180
ngm.user_id = 6180
ngm.message = [Plain(prompt)]
ngm.type = "GuildMessage"
ngm.self_id = 6180
ngm.self_tiny_id = 6180
ngm.guild_id = 6180
ngm.sender = NakuruGuildMember()
ngm.sender.tiny_id = 6180
ngm.sender.user_id = 6180
ngm.sender.nickname = "CLI"
ngm.sender.role = 0
return ngm
'''
运行QQ频道机器人
'''
def run_qqchan_bot(cfg, loop, qqchannel_bot: QQChan):
asyncio.set_event_loop(loop)
intents = botpy.Intents(public_guild_messages=True, direct_message=True)
global client
client = botClient(
intents=intents,
bot_log=False
)
try:
qqchannel_bot.run_bot(client, cfg['qqbot']['appid'], cfg['qqbot']['token'])
except BaseException as e:
gu.log("启动QQ频道机器人时出现错误, 原因如下: " + str(e), gu.LEVEL_CRITICAL, tag="QQ频道")
gu.log(r"如果您是初次启动请修改配置文件QQChannelChatGPT/config.yaml详情请看https://github.com/Soulter/QQChannelChatGPT/wiki。" + str(e), gu.LEVEL_CRITICAL, tag="System")
i = input("按回车退出程序。\n")
'''
运行GOCQ机器人
'''
def run_gocq_bot(loop, gocq_bot, gocq_app):
asyncio.set_event_loop(loop)
gu.log("正在检查本地GO-CQHTTP连接...端口5700, 6700", tag="QQ")
noticed = False
while True:
if not gu.port_checker(5700, cc.get("gocq_host", "127.0.0.1")) or not gu.port_checker(6700, cc.get("gocq_host", "127.0.0.1")):
if not noticed:
noticed = True
gu.log("与GO-CQHTTP通信失败, 请检查GO-CQHTTP是否启动并正确配置。程序会每隔 5s 自动重试。", gu.LEVEL_CRITICAL, tag="QQ")
time.sleep(5)
else:
gu.log("检查完毕,未发现问题。", tag="QQ")
break
global gocq_client
gocq_client = gocqClient()
try:
gocq_bot.run_bot(gocq_app)
except BaseException as e:
input("启动QQ机器人出现错误"+str(e))
'''
启动QQ群机器人(官方接口)
'''
def run_qqbot(loop: asyncio.AbstractEventLoop, qq_bot: UnofficialQQBotSDK):
asyncio.set_event_loop(loop)
QQBotClient()
qq_bot.run_bot()
'''
检查发言频率
'''
def check_frequency(id) -> bool:
ts = int(time.time())
if id in user_frequency:
if ts-user_frequency[id]['time'] > frequency_time:
user_frequency[id]['time'] = ts
user_frequency[id]['count'] = 1
return True
else:
if user_frequency[id]['count'] >= frequency_count:
return False
else:
user_frequency[id]['count']+=1
return True
else:
t = {'time':ts,'count':1}
user_frequency[id] = t
return True
'''
通用消息回复
'''
async def send_message(platform, message, res, session_id = None):
global qqchannel_bot, gocq_bot, gocq_loop, session_dict
# 统计会话信息(session_id 为 None 时不统计)
if session_id is not None:
if session_id not in session_dict:
session_dict[session_id] = {'cnt': 1}
else:
session_dict[session_id]['cnt'] += 1
# TODO: 这里会非常吃资源。然而 sqlite3 不支持多线程,所以暂时这样写。
curr_ts = int(time.time())
db_inst = dbConn()
db_inst.increment_stat_session(platform, session_id, 1)
db_inst.increment_stat_message(curr_ts, 1)
db_inst.increment_stat_platform(curr_ts, platform, 1)
if platform == PLATFORM_QQCHAN:
qqchannel_bot.send_qq_msg(message, res)
elif platform == PLATFORM_GOCQ:
await gocq_bot.send_qq_msg(message, res)
elif platform == PLATFROM_QQBOT:
message_chain = MessageChain()
message_chain.parse_from_nakuru(res)
await qq_bot.send(message, message_chain)
elif platform == PLATFORM_CLI:
print(res)
async def oper_msg(message: Union[GroupMessage, FriendMessage, GuildMessage, NakuruGuildMessage],
group: bool=False,
platform: str = None):
"""
处理消息。
group: 群聊模式,
message: 频道是频道的消息对象, QQ是nakuru-gocq的消息对象
msg_ref: 引用消息(频道)
platform: 平台(gocq, qqchan)
"""
global chosen_provider, keywords, qqchannel_bot, gocq_bot
global _global_object
qq_msg = ''
session_id = ''
user_id = ''
role = "member" # 角色, member或admin
hit = False # 是否命中指令
command_result = () # 调用指令返回的结果
_global_object.cnt_total += 1
with_tag = False # 是否带有昵称
if platform == PLATFORM_QQCHAN or platform == PLATFROM_QQBOT or platform == PLATFORM_CLI:
with_tag = True
_len = 0
for i in message.message:
if isinstance(i, Plain) or isinstance(i, PlainText):
qq_msg += str(i.text).strip()
if isinstance(i, At):
if message.type == "GuildMessage":
if i.qq == message.user_id or i.qq == message.self_tiny_id:
with_tag = True
if message.type == "FriendMessage":
if i.qq == message.self_id:
with_tag = True
if message.type == "GroupMessage":
if i.qq == message.self_id:
with_tag = True
for i in _global_object.nick:
if i != '' and qq_msg.startswith(i):
_len = len(i)
with_tag = True
break
qq_msg = qq_msg[_len:].strip()
gu.log(f"收到消息:{qq_msg}", gu.LEVEL_INFO, tag="QQ")
user_id = message.user_id
if group:
# 适配GO-CQHTTP的频道功能
if message.type == "GuildMessage":
session_id = message.channel_id
else:
session_id = message.group_id
else:
with_tag = True
session_id = message.user_id
if message.type == "GuildMessage":
sender_id = str(message.sender.tiny_id)
else:
sender_id = str(message.sender.user_id)
if sender_id == _global_object.admin_qq or \
sender_id == _global_object.admin_qqchan or \
sender_id in cc.get("other_admins", []) or \
sender_id == cc.get("gocq_qqchan_admin", "") or \
platform == PLATFORM_CLI:
role = "admin"
if _global_object.uniqueSession:
# 独立会话时,一个用户一个 session
session_id = sender_id
if qq_msg == "":
await send_message(platform, message, f"Hi~", session_id=session_id)
return
if with_tag:
# 检查发言频率
if not check_frequency(user_id):
await send_message(platform, message, f'你的发言超过频率限制(╯▔皿▔)╯。\n管理员设置{frequency_time}秒内只能提问{frequency_count}次。', session_id=session_id)
return
# logf.write("[GOCQBOT] "+ qq_msg+'\n')
# logf.flush()
# 关键词回复
for k in keywords:
if qq_msg == k:
plain_text = ""
if 'plain_text' in keywords[k]:
plain_text = keywords[k]['plain_text']
else:
plain_text = keywords[k]
image_url = ""
if 'image_url' in keywords[k]:
image_url = keywords[k]['image_url']
if image_url != "":
res = [Plain(plain_text), Image.fromURL(image_url)]
await send_message(platform, message, res, session_id=session_id)
else:
await send_message(platform, message, plain_text, session_id=session_id)
return
# 检查是否是更换语言模型的请求
temp_switch = ""
if qq_msg.startswith('/bing') or qq_msg.startswith('/gpt') or qq_msg.startswith('/revgpt'):
target = chosen_provider
if qq_msg.startswith('/bing'):
target = REV_EDGEGPT
elif qq_msg.startswith('/gpt'):
target = OPENAI_OFFICIAL
elif qq_msg.startswith('/revgpt'):
target = REV_CHATGPT
l = qq_msg.split(' ')
if len(l) > 1 and l[1] != "":
# 临时对话模式,先记录下之前的语言模型,回答完毕后再切回
temp_switch = chosen_provider
chosen_provider = target
qq_msg = l[1]
else:
chosen_provider = target
cc.put("chosen_provider", chosen_provider)
await send_message(platform, message, f"已切换至【{chosen_provider}", session_id=session_id)
return
chatgpt_res = ""
# 如果是等待回复的消息
if platform == PLATFORM_GOCQ and session_id in gocq_bot.waiting and gocq_bot.waiting[session_id] == '':
gocq_bot.waiting[session_id] = message
return
if platform == PLATFORM_QQCHAN and session_id in qqchannel_bot.waiting and qqchannel_bot.waiting[session_id] == '':
qqchannel_bot.waiting[session_id] = message
return
hit, command_result = llm_command_instance[chosen_provider].check_command(
qq_msg,
session_id,
role,
platform,
message,
)
# 没触发指令
if not hit:
if not with_tag:
return
# 关键词拦截
for i in uw.unfit_words_q:
matches = re.match(i, qq_msg.strip(), re.I | re.M)
if matches:
await send_message(platform, message, f"你的提问得到的回复未通过【自有关键词拦截】服务, 不予回复。", session_id=session_id)
return
if baidu_judge != None:
check, msg = baidu_judge.judge(qq_msg)
if not check:
await send_message(platform, message, f"你的提问得到的回复未通过【百度AI内容审核】服务, 不予回复。\n\n{msg}", session_id=session_id)
return
if chosen_provider == None:
await send_message(platform, message, f"管理员未启动任何语言模型或者语言模型初始化时失败。", session_id=session_id)
return
try:
# check image url
image_url = None
for comp in message.message:
if isinstance(comp, Image):
if comp.url is None:
image_url = comp.file
break
else:
image_url = comp.url
break
# web search keyword
web_sch_flag = False
if qq_msg.startswith("ws ") and qq_msg != "ws ":
qq_msg = qq_msg[3:]
web_sch_flag = True
else:
qq_msg += " " + cc.get("llm_env_prompt", "")
if chosen_provider == REV_CHATGPT or chosen_provider == OPENAI_OFFICIAL:
if _global_object.web_search or web_sch_flag:
official_fc = chosen_provider == OPENAI_OFFICIAL
chatgpt_res = gplugin.web_search(qq_msg, llm_instance[chosen_provider], session_id, official_fc)
else:
chatgpt_res = str(llm_instance[chosen_provider].text_chat(qq_msg, session_id, image_url, default_personality = _global_object.default_personality))
elif chosen_provider == REV_EDGEGPT:
res, res_code = await llm_instance[chosen_provider].text_chat(qq_msg, platform)
if res_code == 0: # bing不想继续话题重置会话后重试。
await send_message(platform, message, "Bing不想继续话题了, 正在自动重置会话并重试。", session_id=session_id)
await llm_instance[chosen_provider].forget()
res, res_code = await llm_instance[chosen_provider].text_chat(qq_msg, platform)
if res_code == 0: # bing还是不想继续话题大概率说明提问有问题。
await llm_instance[chosen_provider].forget()
await send_message(platform, message, "Bing仍然不想继续话题, 会话已重置, 请检查您的提问后重试。", session_id=session_id)
res = ""
chatgpt_res = str(res)
chatgpt_res = _global_object.reply_prefix + chatgpt_res
except BaseException as e:
gu.log(f"调用异常:{traceback.format_exc()}", gu.LEVEL_ERROR, max_len=100000)
gu.log("调用语言模型例程时出现异常。原因: "+str(e), gu.LEVEL_ERROR)
await send_message(platform, message, "调用语言模型例程时出现异常。原因: "+str(e), session_id=session_id)
return
# 切换回原来的语言模型
if temp_switch != "":
chosen_provider = temp_switch
# 指令回复
if hit:
# 检查指令. command_result是一个元组(指令调用是否成功, 指令返回的文本结果, 指令类型)
if command_result == None:
return
command = command_result[2]
if command == "keyword":
if os.path.exists("keyword.json"):
with open("keyword.json", "r", encoding="utf-8") as f:
keywords = json.load(f)
else:
try:
await send_message(platform, message, command_result[1], session_id=session_id)
except BaseException as e:
await send_message(platform, message, f"回复消息出错: {str(e)}", session_id=session_id)
if command == "update latest r":
await send_message(platform, message, command_result[1] + "\n\n即将自动重启。", session_id=session_id)
py = sys.executable
os.execl(py, py, *sys.argv)
if not command_result[0]:
await send_message(platform, message, f"指令调用错误: \n{str(command_result[1])}", session_id=session_id)
return
# 画图指令
if isinstance(command_result[1], list) and len(command_result) == 3 and command == 'draw':
for i in command_result[1]:
# i is a link
# 保存到本地
pic_res = requests.get(i, stream = True)
if pic_res.status_code == 200:
image = PILImage.open(io.BytesIO(pic_res.content))
await send_message(platform, message, [Image.fromFileSystem(gu.save_temp_img(image))], session_id=session_id)
# 其他指令
else:
try:
await send_message(platform, message, command_result[1], session_id=session_id)
except BaseException as e:
await send_message(platform, message, f"回复消息出错: {str(e)}", session_id=session_id)
return
# 记录日志
# logf.write(f"{reply_prefix} {str(chatgpt_res)}\n")
# logf.flush()
# 敏感过滤
# 过滤不合适的词
for i in uw.unfit_words:
chatgpt_res = re.sub(i, "***", chatgpt_res)
# 百度内容审核服务二次审核
if baidu_judge != None:
check, msg = baidu_judge.judge(chatgpt_res)
if not check:
await send_message(platform, message, f"你的提问得到的回复【百度内容审核】未通过,不予回复。\n\n{msg}", session_id=session_id)
return
# 发送信息
try:
await send_message(platform, message, chatgpt_res, session_id=session_id)
except BaseException as e:
gu.log("回复消息错误: \n"+str(e), gu.LEVEL_ERROR)
# QQ频道机器人
class botClient(botpy.Client):
# 收到频道消息
async def on_at_message_create(self, message: Message):
gu.log(str(message), gu.LEVEL_DEBUG, max_len=9999)
# 转换层
nakuru_guild_message = qqchannel_bot.gocq_compatible_receive(message)
gu.log(f"转换后: {str(nakuru_guild_message)}", gu.LEVEL_DEBUG, max_len=9999)
new_sub_thread(oper_msg, (nakuru_guild_message, True, PLATFORM_QQCHAN))
# 收到私聊消息
async def on_direct_message_create(self, message: DirectMessage):
if direct_message_mode:
# 转换层
nakuru_guild_message = qqchannel_bot.gocq_compatible_receive(message)
gu.log(f"转换后: {str(nakuru_guild_message)}", gu.LEVEL_DEBUG, max_len=9999)
new_sub_thread(oper_msg, (nakuru_guild_message, False, PLATFORM_QQCHAN))
# QQ机器人
class gocqClient():
# 收到群聊消息
@gocq_app.receiver("GroupMessage")
async def _(app: CQHTTP, source: GroupMessage):
if cc.get("gocq_react_group", True):
if isinstance(source.message[0], Plain):
new_sub_thread(oper_msg, (source, True, PLATFORM_GOCQ))
if isinstance(source.message[0], At):
if source.message[0].qq == source.self_id:
new_sub_thread(oper_msg, (source, True, PLATFORM_GOCQ))
else:
return
@gocq_app.receiver("FriendMessage")
async def _(app: CQHTTP, source: FriendMessage):
if cc.get("gocq_react_friend", True):
if isinstance(source.message[0], Plain):
new_sub_thread(oper_msg, (source, False, PLATFORM_GOCQ))
else:
return
@gocq_app.receiver("GroupMemberIncrease")
async def _(app: CQHTTP, source: GroupMemberIncrease):
if cc.get("gocq_react_group_increase", True):
global announcement
await app.sendGroupMessage(source.group_id, [
Plain(text = announcement),
])
@gocq_app.receiver("Notify")
async def _(app: CQHTTP, source: Notify):
print(source)
if source.sub_type == "poke" and source.target_id == source.self_id:
new_sub_thread(oper_msg, (source, False, PLATFORM_GOCQ))
@gocq_app.receiver("GuildMessage")
async def _(app: CQHTTP, source: GuildMessage):
if cc.get("gocq_react_guild", True):
if isinstance(source.message[0], Plain):
new_sub_thread(oper_msg, (source, True, PLATFORM_GOCQ))
if isinstance(source.message[0], At):
if source.message[0].qq == source.self_tiny_id:
new_sub_thread(oper_msg, (source, True, PLATFORM_GOCQ))
else:
return
class QQBotClient():
@qq_bot.on('GroupMessage')
async def _(bot: UnofficialQQBotSDK, message: QQMessage):
print(message)
new_sub_thread(oper_msg, (message, True, PLATFROM_QQBOT))

@@ -1,113 +0,0 @@
from model.platform.qqchan import QQChan, NakuruGuildMember, NakuruGuildMessage
from model.platform.qq import QQ
from model.provider.provider import Provider
from addons.dashboard.server import DashBoardData
from nakuru import (
CQHTTP,
GroupMessage,
GroupMemberIncrease,
FriendMessage,
GuildMessage,
Notify
)
from typing import Union
class GlobalObject:
'''
存放一些公用的数据,用于在不同模块(如core与command)之间传递
'''
nick: str # gocq 的昵称
base_config: dict # config.yaml
cached_plugins: dict # 缓存的插件
web_search: bool # 是否开启了网页搜索
reply_prefix: str
admin_qq: str
admin_qqchan: str
uniqueSession: bool
cnt_total: int
platform_qq: QQ
platform_qqchan: QQChan
default_personality: dict
dashboard_data: DashBoardData
stat: dict
def __init__(self):
self.nick = None # gocq 的昵称
self.base_config = None # config.yaml
self.cached_plugins = {} # 缓存的插件
self.web_search = False # 是否开启了网页搜索
self.reply_prefix = None
self.admin_qq = "123456"
self.admin_qqchan = "123456"
self.uniqueSession = False
self.cnt_total = 0
self.platform_qq = None
self.platform_qqchan = None
self.default_personality = None
self.dashboard_data = None
self.stat = {}
'''
{
"config": {},
"session": [
{
"platform": "qq",
"session_id": 123456,
"cnt": 0
},
{...}
],
"message": [
// 以一小时为单位
{
"ts": 1234567,
"cnt": 0
}
]
}
'''
class AstrMessageEvent():
message_str: str # 纯消息字符串
message_obj: Union[GroupMessage, FriendMessage, GuildMessage, NakuruGuildMessage] # 消息对象
gocq_platform: QQ
qq_sdk_platform: QQChan
platform: str # `gocq` 或 `qqchan`
role: str # `admin` 或 `member`
global_object: GlobalObject # 一些公用数据
session_id: int # 会话id (可能是群id也可能是某个user的id。取决于是否开启了 uniqueSession)
def __init__(self, message_str: str,
message_obj: Union[GroupMessage, FriendMessage, GuildMessage, NakuruGuildMessage],
gocq_platform: QQ,
qq_sdk_platform: QQChan,
platform: str,
role: str,
global_object: GlobalObject,
llm_provider: Provider = None,
session_id: int = None):
self.message_str = message_str
self.message_obj = message_obj
self.gocq_platform = gocq_platform
self.qq_sdk_platform = qq_sdk_platform
self.platform = platform
self.role = role
self.global_object = global_object
self.llm_provider = llm_provider
self.session_id = session_id
class CommandResult():
'''
用于在Command中返回多个值
'''
def __init__(self, hit: bool, success: bool, message_chain: list, command_name: str = "unknown_command") -> None:
self.hit = hit
self.success = success
self.message_chain = message_chain
self.command_name = command_name
def _result_tuple(self):
return (self.success, self.message_chain, self.command_name)
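For context on how these removed classes were consumed, a minimal sketch of a plugin's run() hook returning a CommandResult. The plugin class name and reply text are hypothetical, and the sketch assumes AstrMessageEvent and CommandResult above are importable; the dispatch side appears in Command.check_command later in this diff.

class HelloWorldPlugin:
    def run(self, ame: AstrMessageEvent):
        # ame.message_str is the plain text of the incoming message.
        if ame.message_str.strip() == "hello":
            return CommandResult(
                hit=True,                 # this plugin handled the message
                success=True,             # handling succeeded
                message_chain=["world"],  # components to send back
                command_name="helloworld",
            )
        return CommandResult(hit=False, success=False, message_chain=[])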

dashboard/__init__.py Normal file (10 lines)

@@ -0,0 +1,10 @@
from dataclasses import dataclass
class DashBoardData():
stats: dict = {}
@dataclass
class Response():
status: str
message: str
data: dict
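A quick sketch of how these two classes are consumed by the new dashboard server later in this diff: handlers build a Response and return its __dict__, which Flask serializes to JSON. Note that stats on DashBoardData is a class-level mutable default, so it is shared across instances unless reassigned. The token value below is the one hard-coded in /api/authenticate.

from dataclasses import dataclass

@dataclass
class Response:
    status: str
    message: str
    data: dict

# The pattern used by the endpoints in dashboard/server.py:
payload = Response(status="success", message="", data={"token": "astrbot-test-token"}).__dict__
# -> {'status': 'success', 'message': '', 'data': {'token': 'astrbot-test-token'}}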

@@ -1 +1 @@
import{x as i,o as l,c as _,w as s,a as e,f as a,J as m,V as r,b as t,t as u,ab as p,B as n,ac as o,j as f}from"./index-7c8bc001.js";const b={class:"text-h3"},h={class:"d-flex align-center"},g={class:"d-flex align-center"},V=i({__name:"BaseBreadcrumb",props:{title:String,breadcrumbs:Array,icon:String},setup(d){const c=d;return(x,B)=>(l(),_(r,{class:"page-breadcrumb mb-1 mt-1"},{default:s(()=>[e(a,{cols:"12",md:"12"},{default:s(()=>[e(m,{variant:"outlined",elevation:"0",class:"px-4 py-3 withbg"},{default:s(()=>[e(r,{"no-gutters":"",class:"align-center"},{default:s(()=>[e(a,{md:"5"},{default:s(()=>[t("h3",b,u(c.title),1)]),_:1}),e(a,{md:"7",sm:"12",cols:"12"},{default:s(()=>[e(p,{items:c.breadcrumbs,class:"text-h5 justify-md-end pa-1"},{divider:s(()=>[t("div",h,[e(n(o),{size:"17"})])]),prepend:s(()=>[e(f,{size:"small",icon:"mdi-home",class:"text-secondary mr-2"}),t("div",g,[e(n(o),{size:"17"})])]),_:1},8,["items"])]),_:1})]),_:1})]),_:1})]),_:1})]),_:1}))}});export{V as _};
import{x as i,o as l,c as _,w as s,a as e,f as a,J as m,V as c,b as t,t as u,ae as p,B as n,af as o,j as f}from"./index-25639696.js";const b={class:"text-h3"},h={class:"d-flex align-center"},g={class:"d-flex align-center"},V=i({__name:"BaseBreadcrumb",props:{title:String,breadcrumbs:Array,icon:String},setup(d){const r=d;return(x,B)=>(l(),_(c,{class:"page-breadcrumb mb-1 mt-1"},{default:s(()=>[e(a,{cols:"12",md:"12"},{default:s(()=>[e(m,{variant:"outlined",elevation:"0",class:"px-4 py-3 withbg"},{default:s(()=>[e(c,{"no-gutters":"",class:"align-center"},{default:s(()=>[e(a,{md:"5"},{default:s(()=>[t("h3",b,u(r.title),1)]),_:1}),e(a,{md:"7",sm:"12",cols:"12"},{default:s(()=>[e(p,{items:r.breadcrumbs,class:"text-h5 justify-md-end pa-1"},{divider:s(()=>[t("div",h,[e(n(o),{size:"17"})])]),prepend:s(()=>[e(f,{size:"small",icon:"mdi-home",class:"text-secondary mr-2"}),t("div",g,[e(n(o),{size:"17"})])]),_:1},8,["items"])]),_:1})]),_:1})]),_:1})]),_:1})]),_:1}))}});export{V as _};

@@ -0,0 +1 @@
import{x as e,o as a,c as t,w as o,a as s,B as n,Z as r,W as c}from"./index-25639696.js";const f=e({__name:"BlankLayout",setup(p){return(u,_)=>(a(),t(c,null,{default:o(()=>[s(n(r))]),_:1}))}});export{f as default};

@@ -0,0 +1 @@
import{_ as m}from"./BaseBreadcrumb.vue_vue_type_style_index_0_lang-cae6d9fb.js";import{_}from"./UiParentCard.vue_vue_type_script_setup_true_lang-b010c672.js";import{x as p,D as a,o as r,s,a as e,w as t,f as o,V as i,F as n,u as g,c as h,a0 as b,e as x,t as y}from"./index-25639696.js";const P=p({__name:"ColorPage",setup(C){const c=a({title:"Colors Page"}),d=a([{title:"Utilities",disabled:!1,href:"#"},{title:"Colors",disabled:!0,href:"#"}]),u=a(["primary","lightprimary","secondary","lightsecondary","info","success","accent","warning","error","darkText","lightText","borderLight","inputBorder","containerBg"]);return(V,k)=>(r(),s(n,null,[e(m,{title:c.value.title,breadcrumbs:d.value},null,8,["title","breadcrumbs"]),e(i,null,{default:t(()=>[e(o,{cols:"12",md:"12"},{default:t(()=>[e(_,{title:"Color Palette"},{default:t(()=>[e(i,null,{default:t(()=>[(r(!0),s(n,null,g(u.value,(l,f)=>(r(),h(o,{md:"3",cols:"12",key:f},{default:t(()=>[e(b,{rounded:"md",class:"align-center justify-center d-flex",height:"100",width:"100%",color:l},{default:t(()=>[x("class: "+y(l),1)]),_:2},1032,["color"])]),_:2},1024))),128))]),_:1})]),_:1})]),_:1})]),_:1})],64))}});export{P as default};

@@ -0,0 +1 @@
import{o as l,s as o,u as c,c as n,w as u,Q as g,b as d,R as k,F as t,ac as h,O as p,t as m,a as V,ad as f,i as C,q as x,k as v,A as U}from"./index-25639696.js";import{_ as w}from"./UiParentCard.vue_vue_type_script_setup_true_lang-b010c672.js";const S={__name:"ConfigDetailCard",props:{config:Array},setup(s){return(y,B)=>(l(!0),o(t,null,c(s.config,r=>(l(),n(w,{key:r.name,title:r.name,style:{"margin-bottom":"16px"}},{default:u(()=>[g(d("a",null,"No data",512),[[k,s.config.length===0]]),(l(!0),o(t,null,c(r.body,e=>(l(),o(t,null,[e.config_type==="item"?(l(),o(t,{key:0},[e.val_type==="bool"?(l(),n(h,{key:0,modelValue:e.value,"onUpdate:modelValue":a=>e.value=a,label:e.name,hint:e.description,color:"primary",inset:""},null,8,["modelValue","onUpdate:modelValue","label","hint"])):e.val_type==="str"?(l(),n(p,{key:1,modelValue:e.value,"onUpdate:modelValue":a=>e.value=a,label:e.name,hint:e.description,style:{"margin-bottom":"8px"},variant:"outlined"},null,8,["modelValue","onUpdate:modelValue","label","hint"])):e.val_type==="int"?(l(),n(p,{key:2,modelValue:e.value,"onUpdate:modelValue":a=>e.value=a,label:e.name,hint:e.description,style:{"margin-bottom":"8px"},variant:"outlined"},null,8,["modelValue","onUpdate:modelValue","label","hint"])):e.val_type==="list"?(l(),o(t,{key:3},[d("span",null,m(e.name),1),V(f,{modelValue:e.value,"onUpdate:modelValue":a=>e.value=a,chips:"",clearable:"",label:"请添加",multiple:"","prepend-icon":"mdi-tag-multiple-outline"},{selection:u(({attrs:a,item:i,select:b,selected:_})=>[V(C,x(a,{"model-value":_,closable:"",onClick:b,"onClick:close":D=>y.remove(i)}),{default:u(()=>[d("strong",null,m(i),1)]),_:2},1040,["model-value","onClick","onClick:close"])]),_:2},1032,["modelValue","onUpdate:modelValue"])],64)):v("",!0)],64)):e.config_type==="divider"?(l(),n(U,{key:1,style:{"margin-top":"8px","margin-bottom":"8px"}})):v("",!0)],64))),256))]),_:2},1032,["title"]))),128))}};export{S as _};

File diff suppressed because one or more lines are too long

@@ -0,0 +1 @@
.v-tab{text-transform:none!important}

File diff suppressed because one or more lines are too long

@@ -0,0 +1,32 @@
/**
* Copyright (c) 2014 The xterm.js authors. All rights reserved.
* Copyright (c) 2012-2013, Christopher Jeffrey (MIT License)
* https://github.com/chjj/term.js
* @license MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*
* Originally forked from (with the author's permission):
* Fabrice Bellard's javascript vt100 for jslinux:
* http://bellard.org/jslinux/
* Copyright (c) 2011 Fabrice Bellard
* The original design remains. The terminal itself
* has been extended to include xterm CSI codes, among
* other features.
*/.xterm{cursor:text;position:relative;user-select:none;-ms-user-select:none;-webkit-user-select:none}.xterm.focus,.xterm:focus{outline:none}.xterm .xterm-helpers{position:absolute;top:0;z-index:5}.xterm .xterm-helper-textarea{padding:0;border:0;margin:0;position:absolute;opacity:0;left:-9999em;top:0;width:0;height:0;z-index:-5;white-space:nowrap;overflow:hidden;resize:none}.xterm .composition-view{background:#000;color:#fff;display:none;position:absolute;white-space:nowrap;z-index:1}.xterm .composition-view.active{display:block}.xterm .xterm-viewport{background-color:#000;overflow-y:scroll;cursor:default;position:absolute;right:0;left:0;top:0;bottom:0}.xterm .xterm-screen{position:relative}.xterm .xterm-screen canvas{position:absolute;left:0;top:0}.xterm .xterm-scroll-area{visibility:hidden}.xterm-char-measure-element{display:inline-block;visibility:hidden;position:absolute;top:0;left:-9999em;line-height:normal}.xterm.enable-mouse-events{cursor:default}.xterm.xterm-cursor-pointer,.xterm .xterm-cursor-pointer{cursor:pointer}.xterm.column-select.focus{cursor:crosshair}.xterm .xterm-accessibility,.xterm .xterm-message{position:absolute;left:0;top:0;bottom:0;right:0;z-index:10;color:transparent;pointer-events:none}.xterm .live-region{position:absolute;left:-9999px;width:1px;height:1px;overflow:hidden}.xterm-dim{opacity:1!important}.xterm-underline-1{text-decoration:underline}.xterm-underline-2{text-decoration:double underline}.xterm-underline-3{text-decoration:wavy underline}.xterm-underline-4{text-decoration:dotted underline}.xterm-underline-5{text-decoration:dashed underline}.xterm-overline{text-decoration:overline}.xterm-overline.xterm-underline-1{text-decoration:overline underline}.xterm-overline.xterm-underline-2{text-decoration:overline double underline}.xterm-overline.xterm-underline-3{text-decoration:overline wavy underline}.xterm-overline.xterm-underline-4{text-decoration:overline dotted underline}.xterm-overline.xterm-underline-5{text-decoration:overline dashed underline}.xterm-strikethrough{text-decoration:line-through}.xterm-screen .xterm-decoration-container .xterm-decoration{z-index:6;position:absolute}.xterm-screen .xterm-decoration-container .xterm-decoration.xterm-decoration-top-layer{z-index:7}.xterm-decoration-overview-ruler{z-index:8;position:absolute;top:0;right:0;pointer-events:none}.xterm-decoration-top{z-index:2;position:relative}

File diff suppressed because one or more lines are too long

@@ -1 +1 @@
import{_ as t}from"./_plugin-vue_export-helper-c27b6911.js";import{o,c,w as s,V as i,a as r,b as e,d as l,e as a,f as d}from"./index-7c8bc001.js";const n="/assets/img-error-bg-ab6474a0.svg",_="/assets/img-error-blue-2675a7a9.svg",m="/assets/img-error-text-a6aebfa0.svg",g="/assets/img-error-purple-edee3fbc.svg";const p={},u={class:"text-center"},f=e("div",{class:"CardMediaWrapper"},[e("img",{src:n,alt:"grid",class:"w-100"}),e("img",{src:_,alt:"grid",class:"CardMediaParts"}),e("img",{src:m,alt:"build",class:"CardMediaBuild"}),e("img",{src:g,alt:"build",class:"CardMediaBuild"})],-1),h=e("h1",{class:"text-h1"},"Something is wrong",-1),v=e("p",null,[e("small",null,[a("The page you are looking was moved, removed, "),e("br"),a("renamed, or might never exist! ")])],-1);function x(b,V){return o(),c(i,{"no-gutters":"",class:"h-100vh"},{default:s(()=>[r(d,{class:"d-flex align-center justify-center"},{default:s(()=>[e("div",u,[f,h,v,r(l,{variant:"flat",color:"primary",class:"mt-4",to:"/","prepend-icon":"mdi-home"},{default:s(()=>[a(" Home")]),_:1})])]),_:1})]),_:1})}const C=t(p,[["render",x]]);export{C as default};
import{_ as t}from"./_plugin-vue_export-helper-c27b6911.js";import{o,c,w as s,V as i,a as r,b as e,d as l,e as a,f as d}from"./index-25639696.js";const n="/assets/img-error-bg-ab6474a0.svg",_="/assets/img-error-blue-2675a7a9.svg",m="/assets/img-error-text-a6aebfa0.svg",g="/assets/img-error-purple-edee3fbc.svg";const p={},u={class:"text-center"},f=e("div",{class:"CardMediaWrapper"},[e("img",{src:n,alt:"grid",class:"w-100"}),e("img",{src:_,alt:"grid",class:"CardMediaParts"}),e("img",{src:m,alt:"build",class:"CardMediaBuild"}),e("img",{src:g,alt:"build",class:"CardMediaBuild"})],-1),h=e("h1",{class:"text-h1"},"Something is wrong",-1),v=e("p",null,[e("small",null,[a("The page you are looking was moved, removed, "),e("br"),a("renamed, or might never exist! ")])],-1);function x(b,V){return o(),c(i,{"no-gutters":"",class:"h-100vh"},{default:s(()=>[r(d,{class:"d-flex align-center justify-center"},{default:s(()=>[e("div",u,[f,h,v,r(l,{variant:"flat",color:"primary",class:"mt-4",to:"/","prepend-icon":"mdi-home"},{default:s(()=>[a(" Home")]),_:1})])]),_:1})]),_:1})}const C=t(p,[["render",x]]);export{C as default};

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

@@ -0,0 +1 @@
import{aw as _,x as d,D as n,o as c,s as m,a as f,w as p,Q as r,b as a,R as o,B as t,ax as h}from"./index-25639696.js";const s={Sidebar_drawer:!0,Customizer_drawer:!1,mini_sidebar:!1,fontTheme:"Roboto",inputBg:!1},l=_({id:"customizer",state:()=>({Sidebar_drawer:s.Sidebar_drawer,Customizer_drawer:s.Customizer_drawer,mini_sidebar:s.mini_sidebar,fontTheme:"Poppins",inputBg:s.inputBg}),getters:{},actions:{SET_SIDEBAR_DRAWER(){this.Sidebar_drawer=!this.Sidebar_drawer},SET_MINI_SIDEBAR(e){this.mini_sidebar=e},SET_FONT(e){this.fontTheme=e}}}),u={class:"logo",style:{display:"flex","align-items":"center"}},b={style:{"font-size":"24px","font-weight":"1000"}},w={style:{"font-size":"20px","font-weight":"1000"}},S={style:{"font-size":"20px"}},z=d({__name:"LogoDark",setup(e){n("rgb(var(--v-theme-primary))"),n("rgb(var(--v-theme-secondary))");const i=l();return(g,B)=>(c(),m("div",u,[f(t(h),{to:"/",style:{"text-decoration":"none",color:"black"}},{default:p(()=>[r(a("span",b,"AstrBot 仪表盘",512),[[o,!t(i).mini_sidebar]]),r(a("span",w,"Astr",512),[[o,t(i).mini_sidebar]]),r(a("span",S,"Bot",512),[[o,t(i).mini_sidebar]])]),_:1})]))}});export{z as _,l as u};

@@ -1 +1 @@
import{_ as o}from"./BaseBreadcrumb.vue_vue_type_style_index_0_lang-89ca5198.js";import{_ as i}from"./UiParentCard.vue_vue_type_script_setup_true_lang-03a5c441.js";import{x as n,D as a,o as c,s as m,a as e,w as t,f as d,b as f,V as _,F as u}from"./index-7c8bc001.js";const p=["innerHTML"],v=n({__name:"MaterialIcons",setup(b){const s=a({title:"Material Icons"}),r=a('<iframe src="https://materialdesignicons.com/" frameborder="0" width="100%" height="1000"></iframe>'),l=a([{title:"Icons",disabled:!1,href:"#"},{title:"Material Icons",disabled:!0,href:"#"}]);return(h,M)=>(c(),m(u,null,[e(o,{title:s.value.title,breadcrumbs:l.value},null,8,["title","breadcrumbs"]),e(_,null,{default:t(()=>[e(d,{cols:"12",md:"12"},{default:t(()=>[e(i,{title:"Material Icons"},{default:t(()=>[f("div",{innerHTML:r.value},null,8,p)]),_:1})]),_:1})]),_:1})],64))}});export{v as default};
import{_ as o}from"./BaseBreadcrumb.vue_vue_type_style_index_0_lang-cae6d9fb.js";import{_ as i}from"./UiParentCard.vue_vue_type_script_setup_true_lang-b010c672.js";import{x as n,D as a,o as c,s as m,a as e,w as t,f as d,b as f,V as _,F as u}from"./index-25639696.js";const p=["innerHTML"],v=n({__name:"MaterialIcons",setup(b){const s=a({title:"Material Icons"}),r=a('<iframe src="https://materialdesignicons.com/" frameborder="0" width="100%" height="1000"></iframe>'),l=a([{title:"Icons",disabled:!1,href:"#"},{title:"Material Icons",disabled:!0,href:"#"}]);return(h,M)=>(c(),m(u,null,[e(o,{title:s.value.title,breadcrumbs:l.value},null,8,["title","breadcrumbs"]),e(_,null,{default:t(()=>[e(d,{cols:"12",md:"12"},{default:t(()=>[e(i,{title:"Material Icons"},{default:t(()=>[f("div",{innerHTML:r.value},null,8,p)]),_:1})]),_:1})]),_:1})],64))}});export{v as default};

@@ -0,0 +1 @@
import{_ as B}from"./LogoDark.vue_vue_type_script_setup_true_lang-b1d2f1af.js";import{x as y,D as o,o as b,s as U,a as e,w as a,b as n,B as $,d as u,f as d,A as _,e as f,V as r,O as m,aq as q,av as A,F as E,c as F,N as T,J as V,L as P}from"./index-25639696.js";const z="/assets/social-google-a359a253.svg",N=["src"],S=n("span",{class:"ml-2"},"Sign up with Google",-1),D=n("h5",{class:"text-h5 text-center my-4 mb-8"},"Sign up with Email address",-1),G={class:"d-sm-inline-flex align-center mt-2 mb-7 mb-sm-0 font-weight-bold"},L=n("a",{href:"#",class:"ml-1 text-lightText"},"Terms and Condition",-1),O={class:"mt-5 text-right"},j=y({__name:"AuthRegister",setup(w){const c=o(!1),i=o(!1),p=o(""),v=o(""),g=o(),h=o(""),x=o(""),k=o([s=>!!s||"Password is required",s=>s&&s.length<=10||"Password must be less than 10 characters"]),C=o([s=>!!s||"E-mail is required",s=>/.+@.+\..+/.test(s)||"E-mail must be valid"]);function R(){g.value.validate()}return(s,l)=>(b(),U(E,null,[e(u,{block:"",color:"primary",variant:"outlined",class:"text-lightText googleBtn"},{default:a(()=>[n("img",{src:$(z),alt:"google"},null,8,N),S]),_:1}),e(r,null,{default:a(()=>[e(d,{class:"d-flex align-center"},{default:a(()=>[e(_,{class:"custom-devider"}),e(u,{variant:"outlined",class:"orbtn",rounded:"md",size:"small"},{default:a(()=>[f("OR")]),_:1}),e(_,{class:"custom-devider"})]),_:1})]),_:1}),D,e(A,{ref_key:"Regform",ref:g,"lazy-validation":"",action:"/dashboards/analytical",class:"mt-7 loginForm"},{default:a(()=>[e(r,null,{default:a(()=>[e(d,{cols:"12",sm:"6"},{default:a(()=>[e(m,{modelValue:h.value,"onUpdate:modelValue":l[0]||(l[0]=t=>h.value=t),density:"comfortable","hide-details":"auto",variant:"outlined",color:"primary",label:"Firstname"},null,8,["modelValue"])]),_:1}),e(d,{cols:"12",sm:"6"},{default:a(()=>[e(m,{modelValue:x.value,"onUpdate:modelValue":l[1]||(l[1]=t=>x.value=t),density:"comfortable","hide-details":"auto",variant:"outlined",color:"primary",label:"Lastname"},null,8,["modelValue"])]),_:1})]),_:1}),e(m,{modelValue:v.value,"onUpdate:modelValue":l[2]||(l[2]=t=>v.value=t),rules:C.value,label:"Email Address / Username",class:"mt-4 mb-4",required:"",density:"comfortable","hide-details":"auto",variant:"outlined",color:"primary"},null,8,["modelValue","rules"]),e(m,{modelValue:p.value,"onUpdate:modelValue":l[3]||(l[3]=t=>p.value=t),rules:k.value,label:"Password",required:"",density:"comfortable",variant:"outlined",color:"primary","hide-details":"auto","append-icon":i.value?"mdi-eye":"mdi-eye-off",type:i.value?"text":"password","onClick:append":l[4]||(l[4]=t=>i.value=!i.value),class:"pwdInput"},null,8,["modelValue","rules","append-icon","type"]),n("div",G,[e(q,{modelValue:c.value,"onUpdate:modelValue":l[5]||(l[5]=t=>c.value=t),rules:[t=>!!t||"You must agree to continue!"],label:"Agree with?",required:"",color:"primary",class:"ms-n2","hide-details":""},null,8,["modelValue","rules"]),L]),e(u,{color:"secondary",block:"",class:"mt-2",variant:"flat",size:"large",onClick:l[6]||(l[6]=t=>R())},{default:a(()=>[f("Sign Up")]),_:1})]),_:1},512),n("div",O,[e(_),e(u,{variant:"plain",to:"/auth/login",class:"mt-2 text-capitalize mr-n2"},{default:a(()=>[f("Already have an account?")]),_:1})])],64))}});const I={class:"pa-7 pa-sm-12"},J=n("h2",{class:"text-secondary text-h2 mt-8"},"Sign up",-1),Y=n("h4",{class:"text-disabled text-h4 mt-3"},"Enter credentials to continue",-1),M=y({__name:"RegisterPage",setup(w){return(c,i)=>(b(),F(r,{class:"h-100vh","no-gutters":""},{default:a(()=>[e(d,{cols:"12",class:"d-flex align-center 
bg-lightprimary"},{default:a(()=>[e(T,null,{default:a(()=>[n("div",I,[e(r,{justify:"center"},{default:a(()=>[e(d,{cols:"12",lg:"10",xl:"6",md:"7"},{default:a(()=>[e(V,{elevation:"0",class:"loginBox"},{default:a(()=>[e(V,{variant:"outlined"},{default:a(()=>[e(P,{class:"pa-9"},{default:a(()=>[e(r,null,{default:a(()=>[e(d,{cols:"12",class:"text-center"},{default:a(()=>[e(B),J,Y]),_:1})]),_:1}),e(j)]),_:1})]),_:1})]),_:1})]),_:1})]),_:1})])]),_:1})]),_:1})]),_:1}))}});export{M as default};

@@ -1 +1 @@
import{_ as c}from"./BaseBreadcrumb.vue_vue_type_style_index_0_lang-89ca5198.js";import{_ as f}from"./UiParentCard.vue_vue_type_script_setup_true_lang-03a5c441.js";import{x as m,D as s,o as l,s as r,a as e,w as a,f as i,V as o,F as d,u as _,J as p,U as b,b as h,t as g}from"./index-7c8bc001.js";const v=m({__name:"ShadowPage",setup(w){const n=s({title:"Shadow Page"}),u=s([{title:"Utilities",disabled:!1,href:"#"},{title:"Shadow",disabled:!0,href:"#"}]);return(V,x)=>(l(),r(d,null,[e(c,{title:n.value.title,breadcrumbs:u.value},null,8,["title","breadcrumbs"]),e(o,null,{default:a(()=>[e(i,{cols:"12",md:"12"},{default:a(()=>[e(f,{title:"Basic Shadow"},{default:a(()=>[e(o,{justify:"center"},{default:a(()=>[(l(),r(d,null,_(25,t=>e(i,{key:t,cols:"auto"},{default:a(()=>[e(p,{height:"100",width:"100",class:b(["mb-5",["d-flex justify-center align-center bg-primary",`elevation-${t}`]])},{default:a(()=>[h("div",null,g(t-1),1)]),_:2},1032,["class"])]),_:2},1024)),64))]),_:1})]),_:1})]),_:1})]),_:1})],64))}});export{v as default};
import{_ as c}from"./BaseBreadcrumb.vue_vue_type_style_index_0_lang-cae6d9fb.js";import{_ as f}from"./UiParentCard.vue_vue_type_script_setup_true_lang-b010c672.js";import{x as m,D as s,o as l,s as r,a as e,w as a,f as i,V as o,F as d,u as _,J as p,X as b,b as h,t as g}from"./index-25639696.js";const v=m({__name:"ShadowPage",setup(w){const n=s({title:"Shadow Page"}),u=s([{title:"Utilities",disabled:!1,href:"#"},{title:"Shadow",disabled:!0,href:"#"}]);return(V,x)=>(l(),r(d,null,[e(c,{title:n.value.title,breadcrumbs:u.value},null,8,["title","breadcrumbs"]),e(o,null,{default:a(()=>[e(i,{cols:"12",md:"12"},{default:a(()=>[e(f,{title:"Basic Shadow"},{default:a(()=>[e(o,{justify:"center"},{default:a(()=>[(l(),r(d,null,_(25,t=>e(i,{key:t,cols:"auto"},{default:a(()=>[e(p,{height:"100",width:"100",class:b(["mb-5",["d-flex justify-center align-center bg-primary",`elevation-${t}`]])},{default:a(()=>[h("div",null,g(t-1),1)]),_:2},1032,["class"])]),_:2},1024)),64))]),_:1})]),_:1})]),_:1})]),_:1})],64))}});export{v as default};

@@ -1 +1 @@
import{_ as o}from"./BaseBreadcrumb.vue_vue_type_style_index_0_lang-89ca5198.js";import{_ as n}from"./UiParentCard.vue_vue_type_script_setup_true_lang-03a5c441.js";import{x as c,D as a,o as i,s as m,a as e,w as t,f as d,b as f,V as _,F as u}from"./index-7c8bc001.js";const b=["innerHTML"],w=c({__name:"TablerIcons",setup(p){const s=a({title:"Tabler Icons"}),r=a('<iframe src="https://tablericons.com/" frameborder="0" width="100%" height="600"></iframe>'),l=a([{title:"Icons",disabled:!1,href:"#"},{title:"Tabler Icons",disabled:!0,href:"#"}]);return(h,T)=>(i(),m(u,null,[e(o,{title:s.value.title,breadcrumbs:l.value},null,8,["title","breadcrumbs"]),e(_,null,{default:t(()=>[e(d,{cols:"12",md:"12"},{default:t(()=>[e(n,{title:"Tabler Icons"},{default:t(()=>[f("div",{innerHTML:r.value},null,8,b)]),_:1})]),_:1})]),_:1})],64))}});export{w as default};
import{_ as o}from"./BaseBreadcrumb.vue_vue_type_style_index_0_lang-cae6d9fb.js";import{_ as n}from"./UiParentCard.vue_vue_type_script_setup_true_lang-b010c672.js";import{x as c,D as a,o as i,s as m,a as e,w as t,f as d,b as f,V as _,F as u}from"./index-25639696.js";const b=["innerHTML"],w=c({__name:"TablerIcons",setup(p){const s=a({title:"Tabler Icons"}),r=a('<iframe src="https://tablericons.com/" frameborder="0" width="100%" height="600"></iframe>'),l=a([{title:"Icons",disabled:!1,href:"#"},{title:"Tabler Icons",disabled:!0,href:"#"}]);return(h,T)=>(i(),m(u,null,[e(o,{title:s.value.title,breadcrumbs:l.value},null,8,["title","breadcrumbs"]),e(_,null,{default:t(()=>[e(d,{cols:"12",md:"12"},{default:t(()=>[e(n,{title:"Tabler Icons"},{default:t(()=>[f("div",{innerHTML:r.value},null,8,b)]),_:1})]),_:1})]),_:1})],64))}});export{w as default};

@@ -1 +1 @@
import{_ as m}from"./BaseBreadcrumb.vue_vue_type_style_index_0_lang-89ca5198.js";import{_ as v}from"./UiParentCard.vue_vue_type_script_setup_true_lang-03a5c441.js";import{x as f,o as i,c as g,w as e,a,a6 as y,K as b,e as w,t as d,A as C,L as V,a7 as L,J as _,D as o,s as h,f as k,b as t,F as x,u as B,U as H,V as T}from"./index-7c8bc001.js";const s=f({__name:"UiChildCard",props:{title:String},setup(r){const l=r;return(n,c)=>(i(),g(_,{variant:"outlined"},{default:e(()=>[a(y,{class:"py-3"},{default:e(()=>[a(b,{class:"text-h5"},{default:e(()=>[w(d(l.title),1)]),_:1})]),_:1}),a(C),a(V,null,{default:e(()=>[L(n.$slots,"default")]),_:3})]),_:3}))}}),D={class:"d-flex flex-column gap-1"},S={class:"text-caption pa-2 bg-lightprimary"},z=t("div",{class:"text-grey"},"Class",-1),N={class:"font-weight-medium"},U=t("div",null,[t("p",{class:"text-left"},"Left aligned on all viewport sizes."),t("p",{class:"text-center"},"Center aligned on all viewport sizes."),t("p",{class:"text-right"},"Right aligned on all viewport sizes."),t("p",{class:"text-sm-left"},"Left aligned on viewports SM (small) or wider."),t("p",{class:"text-right text-md-left"},"Left aligned on viewports MD (medium) or wider."),t("p",{class:"text-right text-lg-left"},"Left aligned on viewports LG (large) or wider."),t("p",{class:"text-right text-xl-left"},"Left aligned on viewports XL (extra-large) or wider.")],-1),$=t("div",{class:"d-flex justify-space-between flex-row"},[t("a",{href:"#",class:"text-decoration-none"},"Non-underlined link"),t("div",{class:"text-decoration-line-through"},"Line-through text"),t("div",{class:"text-decoration-overline"},"Overline text"),t("div",{class:"text-decoration-underline"},"Underline text")],-1),M=t("div",null,[t("p",{class:"text-high-emphasis"},"High-emphasis has an opacity of 87% in light theme and 100% in dark."),t("p",{class:"text-medium-emphasis"},"Medium-emphasis text and hint text have opacities of 60% in light theme and 70% in dark."),t("p",{class:"text-disabled"},"Disabled text has an opacity of 38% in light theme and 50% in dark.")],-1),A=f({__name:"TypographyPage",setup(r){const l=o({title:"Typography Page"}),n=o([["Heading 1","text-h1"],["Heading 2","text-h2"],["Heading 3","text-h3"],["Heading 4","text-h4"],["Heading 5","text-h5"],["Heading 6","text-h6"],["Subtitle 1","text-subtitle-1"],["Subtitle 2","text-subtitle-2"],["Body 1","text-body-1"],["Body 2","text-body-2"],["Button","text-button"],["Caption","text-caption"],["Overline","text-overline"]]),c=o([{title:"Utilities",disabled:!1,href:"#"},{title:"Typography",disabled:!0,href:"#"}]);return(O,F)=>(i(),h(x,null,[a(m,{title:l.value.title,breadcrumbs:c.value},null,8,["title","breadcrumbs"]),a(T,null,{default:e(()=>[a(k,{cols:"12",md:"12"},{default:e(()=>[a(v,{title:"Basic Typography"},{default:e(()=>[a(s,{title:"Heading"},{default:e(()=>[t("div",D,[(i(!0),h(x,null,B(n.value,([p,u])=>(i(),g(_,{variant:"outlined",key:p,class:"my-4"},{default:e(()=>[t("div",{class:H([u,"pa-2"])},d(p),3),t("div",S,[z,t("div",N,d(u),1)])]),_:2},1024))),128))])]),_:1}),a(s,{title:"Text-alignment",class:"mt-8"},{default:e(()=>[U]),_:1}),a(s,{title:"Decoration",class:"mt-8"},{default:e(()=>[$]),_:1}),a(s,{title:"Opacity",class:"mt-8"},{default:e(()=>[M]),_:1})]),_:1})]),_:1})]),_:1})],64))}});export{A as default};
import{_ as m}from"./BaseBreadcrumb.vue_vue_type_style_index_0_lang-cae6d9fb.js";import{_ as v}from"./UiParentCard.vue_vue_type_script_setup_true_lang-b010c672.js";import{x as f,o as i,c as g,w as e,a,a8 as y,K as b,e as w,t as d,A as C,L as V,a9 as L,J as _,D as o,s as h,f as k,b as t,F as x,u as B,X as H,V as T}from"./index-25639696.js";const s=f({__name:"UiChildCard",props:{title:String},setup(r){const l=r;return(n,c)=>(i(),g(_,{variant:"outlined"},{default:e(()=>[a(y,{class:"py-3"},{default:e(()=>[a(b,{class:"text-h5"},{default:e(()=>[w(d(l.title),1)]),_:1})]),_:1}),a(C),a(V,null,{default:e(()=>[L(n.$slots,"default")]),_:3})]),_:3}))}}),D={class:"d-flex flex-column gap-1"},S={class:"text-caption pa-2 bg-lightprimary"},z=t("div",{class:"text-grey"},"Class",-1),N={class:"font-weight-medium"},$=t("div",null,[t("p",{class:"text-left"},"Left aligned on all viewport sizes."),t("p",{class:"text-center"},"Center aligned on all viewport sizes."),t("p",{class:"text-right"},"Right aligned on all viewport sizes."),t("p",{class:"text-sm-left"},"Left aligned on viewports SM (small) or wider."),t("p",{class:"text-right text-md-left"},"Left aligned on viewports MD (medium) or wider."),t("p",{class:"text-right text-lg-left"},"Left aligned on viewports LG (large) or wider."),t("p",{class:"text-right text-xl-left"},"Left aligned on viewports XL (extra-large) or wider.")],-1),M=t("div",{class:"d-flex justify-space-between flex-row"},[t("a",{href:"#",class:"text-decoration-none"},"Non-underlined link"),t("div",{class:"text-decoration-line-through"},"Line-through text"),t("div",{class:"text-decoration-overline"},"Overline text"),t("div",{class:"text-decoration-underline"},"Underline text")],-1),O=t("div",null,[t("p",{class:"text-high-emphasis"},"High-emphasis has an opacity of 87% in light theme and 100% in dark."),t("p",{class:"text-medium-emphasis"},"Medium-emphasis text and hint text have opacities of 60% in light theme and 70% in dark."),t("p",{class:"text-disabled"},"Disabled text has an opacity of 38% in light theme and 50% in dark.")],-1),j=f({__name:"TypographyPage",setup(r){const l=o({title:"Typography Page"}),n=o([["Heading 1","text-h1"],["Heading 2","text-h2"],["Heading 3","text-h3"],["Heading 4","text-h4"],["Heading 5","text-h5"],["Heading 6","text-h6"],["Subtitle 1","text-subtitle-1"],["Subtitle 2","text-subtitle-2"],["Body 1","text-body-1"],["Body 2","text-body-2"],["Button","text-button"],["Caption","text-caption"],["Overline","text-overline"]]),c=o([{title:"Utilities",disabled:!1,href:"#"},{title:"Typography",disabled:!0,href:"#"}]);return(U,F)=>(i(),h(x,null,[a(m,{title:l.value.title,breadcrumbs:c.value},null,8,["title","breadcrumbs"]),a(T,null,{default:e(()=>[a(k,{cols:"12",md:"12"},{default:e(()=>[a(v,{title:"Basic Typography"},{default:e(()=>[a(s,{title:"Heading"},{default:e(()=>[t("div",D,[(i(!0),h(x,null,B(n.value,([p,u])=>(i(),g(_,{variant:"outlined",key:p,class:"my-4"},{default:e(()=>[t("div",{class:H([u,"pa-2"])},d(p),3),t("div",S,[z,t("div",N,d(u),1)])]),_:2},1024))),128))])]),_:1}),a(s,{title:"Text-alignment",class:"mt-8"},{default:e(()=>[$]),_:1}),a(s,{title:"Decoration",class:"mt-8"},{default:e(()=>[M]),_:1}),a(s,{title:"Opacity",class:"mt-8"},{default:e(()=>[O]),_:1})]),_:1})]),_:1})]),_:1})],64))}});export{j as default};

@@ -1 +1 @@
import{x as n,o,c as i,w as e,a,a6 as d,b as c,K as u,e as p,t as _,a7 as s,A as f,L as V,J as m}from"./index-7c8bc001.js";const C={class:"d-sm-flex align-center justify-space-between"},h=n({__name:"UiParentCard",props:{title:String},setup(l){const r=l;return(t,x)=>(o(),i(m,{variant:"outlined",elevation:"0",class:"withbg"},{default:e(()=>[a(d,null,{default:e(()=>[c("div",C,[a(u,null,{default:e(()=>[p(_(r.title),1)]),_:1}),s(t.$slots,"action")])]),_:3}),a(f),a(V,null,{default:e(()=>[s(t.$slots,"default")]),_:3})]),_:3}))}});export{h as _};
import{x as n,o,c as i,w as e,a,a8 as d,b as c,K as u,e as p,t as _,a9 as s,A as f,L as V,J as m}from"./index-25639696.js";const C={class:"d-sm-flex align-center justify-space-between"},h=n({__name:"UiParentCard",props:{title:String},setup(l){const r=l;return(t,x)=>(o(),i(m,{variant:"outlined",elevation:"0",class:"withbg"},{default:e(()=>[a(d,null,{default:e(()=>[c("div",C,[a(u,null,{default:e(()=>[p(_(r.title),1)]),_:1}),s(t.$slots,"action")])]),_:3}),a(f),a(V,null,{default:e(()=>[s(t.$slots,"default")]),_:3})]),_:3}))}});export{h as _};

(Binary image assets updated; dimensions and sizes unchanged: 3.9 KiB, 5.5 KiB, 3.3 KiB, 2.9 KiB.)

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

(Binary image assets updated; dimensions and sizes unchanged: 1.2 KiB, 1.5 KiB.)

@@ -11,7 +11,7 @@
href="https://fonts.googleapis.com/css2?family=Inter:wght@400;500;600;700&family=Poppins:wght@400;500;600;700&family=Roboto:wght@400;500;700&display=swap"
/>
<title>AstrBot - 仪表盘</title>
<script type="module" crossorigin src="/assets/index-7c8bc001.js"></script>
<script type="module" crossorigin src="/assets/index-25639696.js"></script>
<link rel="stylesheet" href="/assets/index-0f1523f3.css">
</head>
<body>

dashboard/helper.py Normal file (92 lines)

@@ -0,0 +1,92 @@
from . import DashBoardData
from util.cmd_config import AstrBotConfig
from dataclasses import dataclass, asdict
from util.plugin_dev.api.v1.config import update_config
from SparkleLogging.utils.core import LogManager
from logging import Logger
from type.types import Context
from type.config import CONFIG_METADATA_2
logger: Logger = LogManager.GetLogger(log_name='astrbot')
class DashBoardHelper():
def __init__(self, context: Context):
self.context = context
self.config_key_dont_show = ['dashboard', 'config_version']
def try_cast(self, value: str, type_: str):
if type_ == "int" and value.isdigit():
return int(value)
elif type_ == "float" and isinstance(value, str) \
and value.replace(".", "", 1).isdigit():
return float(value)
elif type_ == "float" and isinstance(value, int):
return float(value)
def validate_config(self, data):
errors = []
def validate(data, metadata=CONFIG_METADATA_2, path=""):
for key, meta in metadata.items():
if key not in data:
continue
value = data[key]
# 递归验证
if meta["type"] == "list" and isinstance(value, list):
for item in value:
validate(item, meta["items"], path=f"{path}{key}.")
elif meta["type"] == "object" and isinstance(value, dict):
validate(value, meta["items"], path=f"{path}{key}.")
if meta["type"] == "int" and not isinstance(value, int):
casted = self.try_cast(value, "int")
if casted is None:
errors.append(f"错误的类型 {path}{key}: 期望是 int, 得到了 {type(value).__name__}")
data[key] = casted
elif meta["type"] == "float" and not isinstance(value, float):
casted = self.try_cast(value, "float")
if casted is None:
errors.append(f"错误的类型 {path}{key}: 期望是 float, 得到了 {type(value).__name__}")
data[key] = casted
elif meta["type"] == "bool" and not isinstance(value, bool):
errors.append(f"错误的类型 {path}{key}: 期望是 bool, 得到了 {type(value).__name__}")
elif meta["type"] == "string" and not isinstance(value, str):
errors.append(f"错误的类型 {path}{key}: 期望是 string, 得到了 {type(value).__name__}")
elif meta["type"] == "list" and not isinstance(value, list):
errors.append(f"错误的类型 {path}{key}: 期望是 list, 得到了 {type(value).__name__}")
elif meta["type"] == "object" and not isinstance(value, dict):
errors.append(f"错误的类型 {path}{key}: 期望是 dict, 得到了 {type(value).__name__}")
validate(value, meta["items"], path=f"{path}{key}.")
validate(data)
# hardcode warning
data['config_version'] = self.context.config_helper.config_version
data['dashboard'] = asdict(self.context.config_helper.dashboard)
return errors
def save_astrbot_config(self, post_config: dict):
'''验证并保存配置'''
errors = self.validate_config(post_config)
if errors:
raise ValueError(f"格式校验未通过: {errors}")
self.context.config_helper.flush_config(post_config)
def save_extension_config(self, post_config: dict):
if 'namespace' not in post_config:
raise ValueError("Missing key: namespace")
if 'config' not in post_config:
raise ValueError("Missing key: config")
namespace = post_config['namespace']
config: list = post_config['config'][0]['body']
for item in config:
key = item['path']
value = item['value']
typ = item['val_type']
if typ == 'int':
if not value.isdigit():
raise ValueError(f"错误的类型 {namespace}.{key}: 期望是 int, 得到了 {type(value).__name__}")
value = int(value)
update_config(namespace, key, value)
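DashBoardHelper.validate_config walks CONFIG_METADATA_2 recursively, auto-casting numeric strings before flagging a type error. A standalone sketch of the same idea against a toy metadata tree (TOY_METADATA and toy_validate are illustrative, not repo code):

TOY_METADATA = {
    "port": {"type": "int"},
    "platform": {"type": "object", "items": {"enable": {"type": "bool"}}},
}

def toy_validate(data: dict, metadata: dict = TOY_METADATA, path: str = "") -> list:
    errors = []
    for key, meta in metadata.items():
        if key not in data:
            continue
        value = data[key]
        if meta["type"] == "object" and isinstance(value, dict):
            # Recurse into nested groups, extending the dotted path.
            errors += toy_validate(value, meta["items"], path=f"{path}{key}.")
        elif meta["type"] == "int" and not isinstance(value, int):
            if isinstance(value, str) and value.isdigit():
                data[key] = int(value)            # auto-cast, like try_cast()
            else:
                errors.append(f"{path}{key}: expected int, got {type(value).__name__}")
        elif meta["type"] == "bool" and not isinstance(value, bool):
            errors.append(f"{path}{key}: expected bool, got {type(value).__name__}")
    return errors

cfg = {"port": "6185", "platform": {"enable": "yes"}}
print(toy_validate(cfg))   # ['platform.enable: expected bool, got str']; cfg['port'] is now 6185

For save_extension_config, the payload shape expected by the code above is {"namespace": ..., "config": [{"body": [{"path": ..., "value": ..., "val_type": ...}, ...]}]}.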

dashboard/server.py Normal file (463 lines)

@@ -0,0 +1,463 @@
import websockets
import json
import threading
import asyncio
import os
import uuid
import logging
import traceback
from . import DashBoardData, Response
from flask import Flask, request
from werkzeug.serving import make_server
from astrbot.persist.helper import dbConn
from type.types import Context
from typing import List
from SparkleLogging.utils.core import LogManager
from logging import Logger
from dashboard.helper import DashBoardHelper
from util.io import get_local_ip_addresses
from model.plugin.manager import PluginManager
from util.updator.astrbot_updator import AstrBotUpdator
from type.config import CONFIG_METADATA_2
logger: Logger = LogManager.GetLogger(log_name='astrbot')
class AstrBotDashBoard():
def __init__(self, context: Context, plugin_manager: PluginManager, astrbot_updator: AstrBotUpdator):
self.context = context
self.plugin_manager = plugin_manager
self.astrbot_updator = astrbot_updator
self.dashboard_data = DashBoardData()
self.dashboard_helper = DashBoardHelper(self.context)
self.dashboard_be = Flask(__name__, static_folder="dist", static_url_path="/")
self.dashboard_be.json.sort_keys=False # 不按照字典排序
logging.getLogger('werkzeug').setLevel(logging.ERROR)
self.dashboard_be.logger.setLevel(logging.ERROR)
self.ws_clients = {} # remote_ip: ws
self.loop = asyncio.get_event_loop()
self.http_server_thread: threading.Thread = None
@self.dashboard_be.get("/")
def index():
# 返回页面
return self.dashboard_be.send_static_file("index.html")
@self.dashboard_be.get("/auth/login")
def _():
return self.dashboard_be.send_static_file("index.html")
@self.dashboard_be.get("/config")
def rt_config():
return self.dashboard_be.send_static_file("index.html")
@self.dashboard_be.get("/logs")
def rt_logs():
return self.dashboard_be.send_static_file("index.html")
@self.dashboard_be.get("/extension")
def rt_extension():
return self.dashboard_be.send_static_file("index.html")
@self.dashboard_be.get("/dashboard/default")
def rt_dashboard():
return self.dashboard_be.send_static_file("index.html")
@self.dashboard_be.post("/api/authenticate")
def authenticate():
username = self.context.config_helper.dashboard.username
password = self.context.config_helper.dashboard.password
# 获得请求体
post_data = request.json
if post_data["username"] == username and post_data["password"] == password:
return Response(
status="success",
message="登录成功。",
data={
"token": "astrbot-test-token",
"username": username
}
).__dict__
else:
return Response(
status="error",
message="用户名或密码错误。",
data=None
).__dict__
@self.dashboard_be.post("/api/change_password")
def change_password():
password = self.context.config_helper.dashboard.password
# 获得请求体
post_data = request.json
if post_data["password"] == password:
self.context.config_helper.dashboard.password = post_data['new_password']
return Response(
status="success",
message="修改成功。",
data=None
).__dict__
else:
return Response(
status="error",
message="原密码错误。",
data=None
).__dict__
@self.dashboard_be.get("/api/stats")
def get_stats():
db_inst = dbConn()
all_session = db_inst.get_all_stat_session()
last_24_message = db_inst.get_last_24h_stat_message()
# last_24_platform = db_inst.get_last_24h_stat_platform()
platforms = db_inst.get_platform_cnt_total()
self.dashboard_data.stats["session"] = []
self.dashboard_data.stats["session_total"] = db_inst.get_session_cnt_total(
)
self.dashboard_data.stats["message"] = last_24_message
self.dashboard_data.stats["message_total"] = db_inst.get_message_cnt_total(
)
self.dashboard_data.stats["platform"] = platforms
return Response(
status="success",
message="",
data=self.dashboard_data.stats
).__dict__
@self.dashboard_be.get("/api/configs")
def get_configs():
# namespace 为空时返回 AstrBot 配置
# 否则返回指定 namespace 的插件配置
namespace = "" if "namespace" not in request.args else request.args["namespace"]
if not namespace:
return Response(
status="success",
message="",
data=self._get_astrbot_config()
).__dict__
return Response(
status="success",
message="",
data=self._get_extension_config(namespace)
).__dict__
@self.dashboard_be.post("/api/astrbot-configs")
def post_astrbot_configs():
post_configs = request.json
try:
self.save_astrbot_configs(post_configs)
return Response(
status="success",
message="保存成功~ 机器人将在 3 秒内重启以应用新的配置。",
data=None
).__dict__
except Exception as e:
return Response(
status="error",
message=e.__str__(),
data=None
).__dict__
@self.dashboard_be.post("/api/extension-configs")
def post_extension_configs():
post_configs = request.json
try:
self.save_extension_configs(post_configs)
return Response(
status="success",
message="保存成功~ 机器人将在 3 秒内重启以应用新的配置。",
data=None
).__dict__
except Exception as e:
return Response(
status="error",
message=e.__str__(),
data=None
).__dict__
@self.dashboard_be.get("/api/extensions")
def get_plugins():
_plugin_resp = []
for plugin in self.context.cached_plugins:
_p = plugin.metadata
_t = {
"name": _p.plugin_name,
"repo": '' if _p.repo is None else _p.repo,
"author": _p.author,
"desc": _p.desc,
"version": _p.version
}
_plugin_resp.append(_t)
return Response(
status="success",
message="",
data=_plugin_resp
).__dict__
@self.dashboard_be.post("/api/extensions/install")
def install_plugin():
post_data = request.json
repo_url = post_data["url"]
try:
logger.info(f"正在安装插件 {repo_url}")
# self.plugin_manager.install_plugin(repo_url)
asyncio.run_coroutine_threadsafe(self.plugin_manager.install_plugin(repo_url), self.loop).result()
threading.Thread(target=self.astrbot_updator._reboot, args=(2, self.context)).start()
logger.info(f"安装插件 {repo_url} 成功2秒后重启")
return Response(
status="success",
message="安装成功,机器人将在 2 秒内重启。",
data=None
).__dict__
except Exception as e:
logger.error(f"/api/extensions/install: {traceback.format_exc()}")
return Response(
status="error",
message=e.__str__(),
data=None
).__dict__
@self.dashboard_be.post("/api/extensions/upload-install")
def upload_install_plugin():
try:
file = request.files['file']
print(file.filename)
logger.info(f"正在安装用户上传的插件 {file.filename}")
file_path = f"data/temp/{uuid.uuid4()}.zip"
file.save(file_path)
self.plugin_manager.install_plugin_from_file(file_path)
logger.info(f"安装插件 {file.filename} 成功")
return Response(
status="success",
message="安装成功~",
data=None
).__dict__
except Exception as e:
logger.error(f"/api/extensions/upload-install: {traceback.format_exc()}")
return Response(
status="error",
message=e.__str__(),
data=None
).__dict__
@self.dashboard_be.post("/api/extensions/uninstall")
def uninstall_plugin():
post_data = request.json
plugin_name = post_data["name"]
try:
logger.info(f"正在卸载插件 {plugin_name}")
self.plugin_manager.uninstall_plugin(plugin_name)
logger.info(f"卸载插件 {plugin_name} 成功")
return Response(
status="success",
message="卸载成功~",
data=None
).__dict__
except Exception as e:
logger.error(f"/api/extensions/uninstall: {traceback.format_exc()}")
return Response(
status="error",
message=e.__str__(),
data=None
).__dict__
@self.dashboard_be.post("/api/extensions/update")
def update_plugin():
post_data = request.json
plugin_name = post_data["name"]
try:
logger.info(f"正在更新插件 {plugin_name}")
# self.plugin_manager.update_plugin(plugin_name)
asyncio.run_coroutine_threadsafe(self.plugin_manager.update_plugin(plugin_name), self.loop).result()
threading.Thread(target=self.astrbot_updator._reboot, args=(2, self.context)).start()
logger.info(f"更新插件 {plugin_name} 成功2秒后重启")
return Response(
status="success",
message="更新成功,机器人将在 2 秒内重启。",
data=None
).__dict__
except Exception as e:
logger.error(f"/api/extensions/update: {traceback.format_exc()}")
return Response(
status="error",
message=e.__str__(),
data=None
).__dict__
@self.dashboard_be.post("/api/log")
def log():
for item in self.ws_clients:
try:
asyncio.run_coroutine_threadsafe(
self.ws_clients[item].send(request.data.decode()), self.loop).result()
except Exception as e:
pass
return 'ok'
@self.dashboard_be.get("/api/check_update")
def get_update_info():
try:
# ret = self.astrbot_updator.check_update(None, None)
ret = asyncio.run_coroutine_threadsafe(
self.astrbot_updator.check_update(None, None), self.loop).result()
return Response(
status="success",
message=str(ret) if ret is not None else "已经是最新版本了。",
data={
"has_new_version": ret is not None
}
).__dict__
except Exception as e:
logger.error(f"/api/check_update: {traceback.format_exc()}")
return Response(
status="error",
message=e.__str__(),
data=None
).__dict__
@self.dashboard_be.post("/api/update_project")
def update_project_api():
version = request.json['version']
if version == "" or version == "latest":
latest = True
version = ''
else:
latest = False
try:
# await self.astrbot_updator.update(latest=latest, version=version)
asyncio.run_coroutine_threadsafe(self.astrbot_updator.update(latest=latest, version=version), self.loop).result()
threading.Thread(target=self.astrbot_updator._reboot, args=(2, self.context)).start()
return Response(
status="success",
message="更新成功,机器人将在 3 秒内重启。",
data=None
).__dict__
except Exception as e:
logger.error(f"/api/update_project: {traceback.format_exc()}")
return Response(
status="error",
message=e.__str__(),
data=None
).__dict__
@self.dashboard_be.get("/api/llm/list")
def llm_list():
ret = []
for llm in self.context.llms:
ret.append(llm.llm_name)
return Response(
status="success",
message="",
data=ret
).__dict__
@self.dashboard_be.get("/api/llm")
def llm():
text = request.args["text"]
llm = request.args["llm"]
for llm_ in self.context.llms:
if llm_.llm_name == llm:
try:
ret = asyncio.run_coroutine_threadsafe(
llm_.llm_instance.text_chat(text), self.loop).result()
return Response(
status="success",
message="",
data=ret
).__dict__
except Exception as e:
return Response(
status="error",
message=e.__str__(),
data=None
).__dict__
return Response(
status="error",
message="LLM not found.",
data=None
).__dict__
def save_astrbot_configs(self, post_configs: dict):
try:
self.dashboard_helper.save_astrbot_config(post_configs)
threading.Thread(target=self.astrbot_updator._reboot, args=(3, self.context), daemon=True).start()
except Exception as e:
raise e
def save_extension_configs(self, post_configs: dict):
try:
self.dashboard_helper.save_extension_config(post_configs)
threading.Thread(target=self.astrbot_updator._reboot, args=(3, self.context), daemon=True).start()
except Exception as e:
raise e
def _get_astrbot_config(self):
config = self.context.config_helper.to_dict()
for key in self.dashboard_helper.config_key_dont_show:
if key in config:
del config[key]
return {
"metadata": CONFIG_METADATA_2,
"config": config,
}
def _get_extension_config(self, namespace: str):
path = f"data/config/{namespace}.json"
if not os.path.exists(path):
return []
with open(path, "r", encoding="utf-8-sig") as f:
return [{
"config_type": "group",
"name": namespace + " 插件配置",
"description": "",
"body": list(json.load(f).values())
},]
async def get_log_history(self):
try:
with open("logs/astrbot/astrbot.log", "r", encoding="utf-8") as f:
return f.readlines()[-100:]
except Exception as e:
logger.warning(f"读取日志历史失败: {e.__str__()}")
return []
async def __handle_msg(self, websocket, path):
address = websocket.remote_address
self.ws_clients[address] = websocket
data = await self.get_log_history()
data = ''.join(data).replace('\n', '\r\n')
await websocket.send(data)
while True:
try:
msg = await websocket.recv()
except websockets.exceptions.ConnectionClosedError:
# logger.info(f"和 {address} 的 websocket 连接已断开")
del self.ws_clients[address]
break
except Exception as e:
# logger.info(f"和 {path} 的 websocket 连接发生了错误: {e.__str__()}")
del self.ws_clients[address]
break
async def ws_server(self):
ws_server = websockets.serve(self.__handle_msg, "0.0.0.0", 6186)
logger.info("WebSocket 服务器已启动。")
await ws_server
def http_server(self):
http_server = make_server(
'0.0.0.0', 6185, self.dashboard_be, threaded=True)
http_server.serve_forever()
def run_http_server(self):
self.http_server_thread = threading.Thread(target=self.http_server, daemon=True).start()
ip_address = get_local_ip_addresses()
ip_str = f"http://{ip_address}:6185"
logger.info(f"HTTP 服务器已启动,可访问: {ip_str} 等来登录可视化面板。")

main.py

@@ -1,120 +1,63 @@
import os, sys
from pip._internal import main as pipmain
import os
import asyncio
import sys
import warnings
import traceback
import threading
import logging
import mimetypes
from astrbot.bootstrap import AstrBotBootstrap
from SparkleLogging.utils.core import LogManager
from logging import Formatter
warnings.filterwarnings("ignore")
abs_path = os.path.dirname(os.path.realpath(sys.argv[0])) + '/'
logo_tmpl = r"""
___ _______.___________..______ .______ ______ .___________.
/ \ / | || _ \ | _ \ / __ \ | |
/ ^ \ | (----`---| |----`| |_) | | |_) | | | | | `---| |----`
/ /_\ \ \ \ | | | / | _ < | | | | | |
/ _____ \ .----) | | | | |\ \----.| |_) | | `--' | | |
/__/ \__\ |_______/ |__| | _| `._____||______/ \______/ |__|
"""
def main():
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
datefmt='%H:%M:%S',
)
# config.yaml 配置文件加载和环境确认
global logger
try:
import cores.qqbot.core as qqBot
import yaml
from yaml.scanner import ScannerError
import util.general_utils as gu
ymlfile = open(abs_path+"configs/config.yaml", 'r', encoding='utf-8')
cfg = yaml.safe_load(ymlfile)
except ImportError as import_error:
print(import_error)
input("第三方库未完全安装完毕,请退出程序重试。")
except FileNotFoundError as file_not_found:
print(file_not_found)
input("配置文件不存在,请检查是否已经下载配置文件。")
except ScannerError as e:
print(traceback.format_exc())
input("config.yaml 配置文件格式错误,请遵守 yaml 格式。")
import botpy, logging
# delete qqbotpy's logger
for handler in logging.root.handlers[:]:
logging.root.removeHandler(handler)
logger.info(logo_tmpl)
# 设置代理
if 'http_proxy' in cfg and cfg['http_proxy'] != '':
os.environ['HTTP_PROXY'] = cfg['http_proxy']
if 'https_proxy' in cfg and cfg['https_proxy'] != '':
os.environ['HTTPS_PROXY'] = cfg['https_proxy']
os.environ['NO_PROXY'] = 'cn.bing.com,https://api.sgroup.qq.com'
bootstrap = AstrBotBootstrap()
asyncio.run(bootstrap.run())
except KeyboardInterrupt:
logger.info("AstrBot 已退出。")
# 检查并创建 temp 文件夹
if not os.path.exists(abs_path + "temp"):
os.mkdir(abs_path+"temp")
# 启动主程序cores/qqbot/core.py
qqBot.initBot(cfg)
def check_env(ch_mirror=False):
if not (sys.version_info.major == 3 and sys.version_info.minor >= 9):
print("请使用Python3.9+运行本项目")
input("按任意键退出...")
exit()
if os.path.exists('requirements.txt'):
pth = 'requirements.txt'
else:
pth = 'QQChannelChatGPT'+ os.sep +'requirements.txt'
print("正在检查更新第三方库...")
try:
if ch_mirror:
print("使用阿里云镜像")
pipmain(['install', '-r', pth, '-i', 'https://mirrors.aliyun.com/pypi/simple/', '--quiet'])
else:
pipmain(['install', '-r', pth, '--quiet'])
except BaseException as e:
print(e)
while True:
res = input("安装失败。\n如报错ValueError: check_hostname requires server_hostname请尝试先关闭代理后重试。\n1.输入y回车重试\n2. 输入c回车使用国内镜像源下载\n3. 输入其他按键回车继续往下执行。")
if res == "y":
try:
pipmain(['install', '-r', pth])
break
except BaseException as e:
print(e)
continue
elif res == "c":
try:
pipmain(['install', '-r', pth, '-i', 'https://mirrors.aliyun.com/pypi/simple/'])
break
except BaseException as e:
print(e)
continue
else:
break
print("第三方库检查完毕。")
logger.error(traceback.format_exc())
def get_platform():
import platform
sys_platform = platform.platform().lower()
if "windows" in sys_platform:
return "win"
elif "macos" in sys_platform:
return "mac"
elif "linux" in sys_platform:
return "linux"
else:
print("other")
def check_env():
if not (sys.version_info.major == 3 and sys.version_info.minor >= 9):
logger.error("请使用 Python3.9+ 运行本项目。")
exit()
os.makedirs("data/config", exist_ok=True)
os.makedirs("data/plugins", exist_ok=True)
os.makedirs("data/temp", exist_ok=True)
if __name__ == "__main__":
args = sys.argv
if '-cn' in args:
check_env(True)
else:
check_env()
if '-replit' in args:
print("[System] 启动Replit Web保活服务...")
try:
from util.webapp_replit import keep_alive
keep_alive()
except BaseException as e:
print(e)
print(f"[System-err] Replit Web保活服务启动失败:{str(e)}")
# workaround for issue #181
mimetypes.add_type("text/javascript", ".js")
mimetypes.add_type("text/javascript", ".mjs")
mimetypes.add_type("application/json", ".json")
t = threading.Thread(target=main, daemon=False)
t.start()
t.join()
if __name__ == "__main__":
check_env()
logger = LogManager.GetLogger(
log_name='astrbot',
out_to_console=True,
custom_formatter=Formatter('[%(asctime)s| %(name)s - %(levelname)s|%(filename)s:%(lineno)d]: %(message)s', datefmt="%H:%M:%S")
)
main()
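The mimetypes workaround for issue #181 exists because on some Windows hosts the registry maps .js to text/plain, which Python's mimetypes module picks up and browsers then refuse dashboard module scripts. A standalone check of the stdlib behaviour the workaround relies on (the asset path is illustrative):

```python
import mimetypes

# On affected hosts this first guess may come back as ('text/plain', None).
print(mimetypes.guess_type("dist/assets/index.js"))

mimetypes.add_type("text/javascript", ".js")
mimetypes.add_type("text/javascript", ".mjs")
mimetypes.add_type("application/json", ".json")

print(mimetypes.guess_type("dist/assets/index.js"))  # ('text/javascript', None)
```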


@@ -1,431 +0,0 @@
import json
from util import general_utils as gu
has_git = True
try:
import git.exc
from git.repo import Repo
except BaseException as e:
gu.log("你正运行在无Git环境下暂时将无法使用插件、热更新功能。")
has_git = False
import os
import sys
import requests
from model.provider.provider import Provider
import json
import util.plugin_util as putil
import shutil
import importlib
from util.cmd_config import CmdConfig as cc
from model.platform.qq import QQ
import stat
from nakuru.entities.components import (
Plain,
Image
)
from PIL import Image as PILImage
from cores.qqbot.global_object import GlobalObject, AstrMessageEvent
from pip._internal import main as pipmain
from cores.qqbot.global_object import CommandResult
PLATFORM_QQCHAN = 'qqchan'
PLATFORM_GOCQ = 'gocq'
# 指令功能的基类,通用的(不区分语言模型)的指令就在这实现
class Command:
def __init__(self, provider: Provider, global_object: GlobalObject = None):
self.provider = provider
self.global_object = global_object
def check_command(self,
message,
session_id: str,
role,
platform,
message_obj):
# 插件
cached_plugins = self.global_object.cached_plugins
ame = AstrMessageEvent(
message_str=message,
message_obj=message_obj,
gocq_platform=self.global_object.platform_qq,
qq_sdk_platform=self.global_object.platform_qqchan,
platform=platform,
role=role,
global_object=self.global_object,
session_id = session_id
)
for k, v in cached_plugins.items():
try:
result = v["clsobj"].run(ame)
if isinstance(result, CommandResult):
hit = result.hit
res = result._result_tuple()
print(hit, res)
elif isinstance(result, tuple):
hit = result[0]
res = result[1]
else:
raise TypeError("插件返回值格式错误。")
if hit:
return True, res
except TypeError as e:
# 参数不匹配,尝试使用旧的参数方案
try:
hit, res = v["clsobj"].run(message, role, platform, message_obj, self.global_object.platform_qq)
if hit:
return True, res
except BaseException as e:
gu.log(f"{k}插件异常,原因: {str(e)}\n已安装插件: {cached_plugins.keys}\n如果你没有相关装插件的想法, 请直接忽略此报错, 不影响其他功能的运行。", level=gu.LEVEL_WARNING)
except BaseException as e:
gu.log(f"{k} 插件异常,原因: {str(e)}\n已安装插件: {cached_plugins.keys}\n如果你没有相关装插件的想法, 请直接忽略此报错, 不影响其他功能的运行。", level=gu.LEVEL_WARNING)
if self.command_start_with(message, "nick"):
return True, self.set_nick(message, platform, role)
if self.command_start_with(message, "plugin"):
return True, self.plugin_oper(message, role, cached_plugins, platform)
if self.command_start_with(message, "myid") or self.command_start_with(message, "!myid"):
return True, self.get_my_id(message_obj)
if self.command_start_with(message, "nconf") or self.command_start_with(message, "newconf"):
return True, self.get_new_conf(message, role)
if self.command_start_with(message, "web"): # 网页搜索
return True, self.web_search(message)
if self.command_start_with(message, "keyword"):
return True, self.keyword(message_obj, role)
if self.command_start_with(message, "ip"):
ip = requests.get("https://myip.ipip.net", timeout=5).text
return True, f"机器人 IP 信息:{ip}", "ip"
return False, None
def web_search(self, message):
if message == "web on":
self.global_object.web_search = True
return True, "已开启网页搜索", "web"
elif message == "web off":
self.global_object.web_search = False
return True, "已关闭网页搜索", "web"
return True, f"网页搜索功能当前状态: {self.global_object.web_search}", "web"
def get_my_id(self, message_obj):
return True, f"你的ID{str(message_obj.sender.tiny_id)}", "plugin"
def get_new_conf(self, message, role):
if role != "admin":
return False, f"你的身份组{role}没有权限使用此指令。", "newconf"
l = message.split(" ")
if len(l) <= 1:
obj = cc.get_all()
p = gu.create_text_image("【cmd_config.json】", json.dumps(obj, indent=4, ensure_ascii=False))
return True, [Image.fromFileSystem(p)], "newconf"
'''
插件指令
'''
def plugin_oper(self, message: str, role: str, cached_plugins: dict, platform: str):
if not has_git:
return False, "你正在运行在无Git环境下暂时将无法使用插件、热更新功能。", "plugin"
l = message.split(" ")
if len(l) < 2:
p = gu.create_text_image("【插件指令面板】", "安装插件: \nplugin i 插件Github地址\n卸载插件: \nplugin d 插件名 \n重载插件: \nplugin reload\n查看插件列表:\nplugin l\n更新插件: plugin u 插件名\n")
return True, [Image.fromFileSystem(p)], "plugin"
else:
if l[1] == "i":
if role != "admin":
return False, f"你的身份组{role}没有权限安装插件", "plugin"
try:
putil.install_plugin(l[2], cached_plugins)
return True, "插件拉取并载入成功~", "plugin"
except BaseException as e:
return False, f"拉取插件失败,原因: {str(e)}", "plugin"
elif l[1] == "d":
if role != "admin":
return False, f"你的身份组{role}没有权限删除插件", "plugin"
try:
putil.uninstall_plugin(l[2], cached_plugins)
return True, "插件卸载成功~", "plugin"
except BaseException as e:
return False, f"卸载插件失败,原因: {str(e)}", "plugin"
elif l[1] == "u":
try:
putil.update_plugin(l[2], cached_plugins)
return True, "\n更新插件成功!!", "plugin"
except BaseException as e:
return False, f"更新插件失败,原因: {str(e)}\n建议: 使用 plugin i 指令进行覆盖安装(插件数据可能会丢失)", "plugin"
elif l[1] == "l":
try:
plugin_list_info = "\n".join([f"{k}: \n名称: {v['info']['name']}\n简介: {v['info']['desc']}\n版本: {v['info']['version']}\n作者: {v['info']['author']}\n" for k, v in cached_plugins.items()])
p = gu.create_text_image("【已激活插件列表】", plugin_list_info + "\n使用plugin v 插件名 查看插件帮助\n")
return True, [Image.fromFileSystem(p)], "plugin"
except BaseException as e:
return False, f"获取插件列表失败,原因: {str(e)}", "plugin"
elif l[1] == "v":
try:
if l[2] in cached_plugins:
info = cached_plugins[l[2]]["info"]
p = gu.create_text_image(f"【插件信息】", f"名称: {info['name']}\n{info['desc']}\n版本: {info['version']}\n作者: {info['author']}\n\n帮助:\n{info['help']}")
return True, [Image.fromFileSystem(p)], "plugin"
else:
return False, "未找到该插件", "plugin"
except BaseException as e:
return False, f"获取插件信息失败,原因: {str(e)}", "plugin"
# elif l[1] == "reload":
# if role != "admin":
# return False, f"你的身份组{role}没有权限重载插件", "plugin"
# for plugin in cached_plugins:
# try:
# print(f"更新插件 {plugin} 依赖...")
# plugin_path = os.path.join(ppath, cached_plugins[plugin]["root_dir_name"])
# if os.path.exists(os.path.join(plugin_path, "requirements.txt")):
# mm = pipmain(['install', '-r', os.path.join(plugin_path, "requirements.txt"), "--quiet"])
# if mm != 0:
# return False, "插件依赖安装失败需要您手动pip安装对应插件的依赖。", "plugin"
# except BaseException as e:
# print(f"插件{plugin}依赖安装失败,原因: {str(e)}")
# try:
# ok, err = self.plugin_reload(cached_plugins, all = True)
# if ok:
# return True, "\n重载插件成功~", "plugin"
# else:
# # if os.path.exists(plugin_path):
# # shutil.rmtree(plugin_path)
# return False, f"插件重载失败。\n跟踪: \n{err}", "plugin"
# except BaseException as e:
# return False, f"插件重载失败,原因: {str(e)}", "plugin"
elif l[1] == "dev":
if role != "admin":
return False, f"你的身份组{role}没有权限开发者模式", "plugin"
return True, "cached_plugins: \n" + str(cached_plugins), "plugin"
'''
nick: 存储机器人的昵称
'''
def set_nick(self, message: str, platform: str, role: str = "member"):
if role != "admin":
return True, "你无权使用该指令 :P", "nick"
if platform == PLATFORM_GOCQ:
l = message.split(" ")
if len(l) == 1:
return True, "【设置机器人昵称】示例:\n支持多昵称\nnick 昵称1 昵称2 昵称3", "nick"
nick = l[1:]
cc.put("nick_qq", nick)
self.global_object.nick = tuple(nick)
return True, f"设置成功!现在你可以叫我这些昵称来提问我啦~", "nick"
elif platform == PLATFORM_QQCHAN:
nick = message.split(" ")[2]
return False, "QQ频道平台不支持为机器人设置昵称。", "nick"
def general_commands(self):
return {
"help": "帮助",
"keyword": "设置关键词/关键指令回复",
"update": "更新面板",
"update latest": "更新到最新版本",
"update r": "重启机器人",
"reset": "重置会话",
"nick": "设置机器人昵称",
"plugin": "插件安装、卸载和重载",
"web on/off": "启动或关闭网页搜索能力",
"/bing": "切换到bing模型",
"/gpt": "切换到OpenAI ChatGPT API",
"/revgpt": "切换到网页版ChatGPT",
}
def help_messager(self, commands: dict, platform: str, cached_plugins: dict = None):
try:
resp = requests.get("https://soulter.top/channelbot/notice.json").text
notice = json.loads(resp)["notice"]
except BaseException as e:
notice = ""
msg = "# Help Center\n## 指令列表\n"
# msg = "Github项目名QQChannelChatGPT, 有问题提交issue, 欢迎Star\n【指令列表】\n"
for key, value in commands.items():
msg += f"`{key}` - {value}\n"
# plugins
if cached_plugins != None:
plugin_list_info = "\n".join([f"`{k}` {v['info']['name']}\n{v['info']['desc']}\n" for k, v in cached_plugins.items()])
if plugin_list_info.strip() != "":
msg += "\n## 插件列表\n> 使用plugin v 插件名 查看插件帮助\n"
msg += plugin_list_info
msg += notice
try:
# p = gu.create_text_image("【Help Center】", msg)
p = gu.create_markdown_image(msg)
return [Image.fromFileSystem(p)]
except BaseException as e:
gu.log(str(e))
finally:
return msg
# 接受可变参数
def command_start_with(self, message: str, *args):
for arg in args:
if message.startswith(arg) or message.startswith('/'+arg):
return True
return False
# keyword: 关键字
def keyword(self, message_obj, role: str):
if role != "admin":
return True, "你没有权限使用该指令", "keyword"
plain_text = ""
image_url = ""
for comp in message_obj.message:
if isinstance(comp, Plain):
plain_text += comp.text
elif isinstance(comp, Image) and image_url == "":
if comp.url is None:
image_url = comp.file
else:
image_url = comp.url
l = plain_text.split(" ")
if len(l) < 3 and image_url == "":
return True, """
【设置关键词回复】示例:
1. keyword hi 你好
当发送hi的时候会回复你好
2. keyword /hi 你好
当发送/hi时会回复你好
3. keyword d hi
删除hi关键词的回复
4. keyword hi <图片>
当发送hi时会回复图片
""", "keyword"
del_mode = False
if l[1] == "d":
del_mode = True
try:
if os.path.exists("keyword.json"):
with open("keyword.json", "r", encoding="utf-8") as f:
keyword = json.load(f)
if del_mode:
# 删除关键词
if l[2] not in keyword:
return False, "该关键词不存在", "keyword"
else: del keyword[l[2]]
else:
keyword[l[1]] = {
"plain_text": " ".join(l[2:]),
"image_url": image_url
}
else:
if del_mode:
return False, "该关键词不存在", "keyword"
keyword = {
l[1]: {
"plain_text": " ".join(l[2:]),
"image_url": image_url
}
}
with open("keyword.json", "w", encoding="utf-8") as f:
json.dump(keyword, f, ensure_ascii=False, indent=4)
f.flush()
if del_mode:
return True, "删除成功: "+l[2], "keyword"
if image_url == "":
return True, "设置成功: "+l[1]+" "+" ".join(l[2:]), "keyword"
else:
return True, [Plain("设置成功: "+l[1]+" "+" ".join(l[2:])), Image.fromURL(image_url)], "keyword"
except BaseException as e:
return False, "设置失败: "+str(e), "keyword"
def update(self, message: str, role: str):
if not has_git:
return False, "你正在运行在无Git环境下暂时将无法使用插件、热更新功能。", "update"
if role != "admin":
return True, "你没有权限使用该指令", "keyword"
l = message.split(" ")
try:
repo = Repo()
except git.exc.InvalidGitRepositoryError:
try:
repo = Repo(path="QQChannelChatGPT")
except git.exc.InvalidGitRepositoryError:
repo = Repo(path="AstrBot")
if len(l) == 1:
curr_branch = repo.active_branch.name
# 得到本地版本号和最新版本号
now_commit = repo.head.commit
# 得到远程3条commit列表, 包含commit信息
origin = repo.remotes.origin
origin.fetch()
commits = list(repo.iter_commits(curr_branch, max_count=3))
commits_log = ''
index = 1
for commit in commits:
if commit.message.endswith("\n"):
commits_log += f"[{index}] {commit.message}-----------\n"
else:
commits_log += f"[{index}] {commit.message}\n-----------\n"
index+=1
remote_commit_hash = origin.refs.master.commit.hexsha[:6]
return True, f"当前分支: {curr_branch}\n当前版本: {now_commit.hexsha[:6]}\n最新版本: {remote_commit_hash}\n\n3条commit(非最新):\n{str(commits_log)}\nTips:\n1. 使用 update latest 更新至最新版本;\n2. 使用 update checkout <分支名> 切换代码分支。", "update"
else:
if l[1] == "latest":
try:
origin = repo.remotes.origin
origin.fetch()
commits = list(repo.iter_commits('master', max_count=1))
commit_log = commits[0].message
tag = "update"
if len(l) == 3 and l[2] == "r":
tag = "update latest r"
return True, f"更新成功。新版本内容: \n{commit_log}\nps:重启后生效。输入update r重启重启指令不返回任何确认信息", tag
except BaseException as e:
return False, "更新失败: "+str(e), "update"
if l[1] == "r":
py = sys.executable
os.execl(py, py, *sys.argv)
if l[1] == 'checkout':
# 切换分支
if len(l) < 3:
return False, "请提供分支名,如 /update checkout dev_dashboard", "update"
try:
origin = repo.remotes.origin
origin.fetch()
repo.git.checkout(l[2])
# 获得最新的 commit
commits = list(repo.iter_commits(max_count=1))
commit_log = commits[0].message
return True, f"切换分支成功,机器人将在 5 秒内重新启动以应用新的功能。\n当前分支: {l[2]}\n此分支最近更新: \n{commit_log}", "update latest r"
except BaseException as e:
return False, f"切换分支失败。原因: {str(e)}", "update"
def reset(self):
return False
def set(self):
return False
def unset(self):
return False
def key(self):
return False
def help(self):
return False
def status(self):
return False
def token(self):
return False
def his(self):
return False
def draw(self):
return False
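The keyword() handler in the deleted Command class above persists replies in a flat keyword.json file. A sketch of the layout it reads and writes, reconstructed from the handler; the sample entry is invented for illustration:

```python
import json

keyword_store = {
    "hi": {
        "plain_text": "你好",  # reply text sent when a message starts with "hi"
        "image_url": ""        # optional image reply; empty string means none
    }
}

with open("keyword.json", "w", encoding="utf-8") as f:
    json.dump(keyword_store, f, ensure_ascii=False, indent=4)
```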


@@ -0,0 +1,276 @@
import aiohttp, os
from model.command.manager import CommandManager
from model.plugin.manager import PluginManager
from type.message_event import AstrMessageEvent
from type.command import CommandResult
from type.types import Context
from type.config import VERSION
from SparkleLogging.utils.core import LogManager
from logging import Logger
from util.agent.web_searcher import search_from_bing, fetch_website_content
logger: Logger = LogManager.GetLogger(log_name='astrbot')
class InternalCommandHandler:
def __init__(self, manager: CommandManager, plugin_manager: PluginManager) -> None:
self.manager = manager
self.plugin_manager = plugin_manager
self.manager.register("help", "查看帮助", 10, self.help)
self.manager.register("wake", "唤醒前缀", 10, self.set_nick)
self.manager.register("update", "更新管理", 10, self.update)
self.manager.register("plugin", "插件管理", 10, self.plugin)
self.manager.register("reboot", "重启 AstrBot", 10, self.reboot)
self.manager.register("websearch", "网页搜索", 10, self.web_search)
self.manager.register("t2i", "文转图", 10, self.t2i_toggle)
self.manager.register("myid", "用户ID", 10, self.myid)
self.manager.register("provider", "LLM 接入源", 10, self.provider)
def _check_auth(self, message: AstrMessageEvent, context: Context):
if os.environ.get("TEST_MODE", "off") == "on":
return
if message.role != "admin":
user_id = message.message_obj.sender.user_id
raise Exception(f"用户(ID: {user_id}) 没有足够的权限使用该指令。")
def provider(self, message: AstrMessageEvent, context: Context):
if len(context.llms) == 0:
return CommandResult().message("当前没有加载任何 LLM 接入源。")
tokens = self.manager.command_parser.parse(message.message_str)
if tokens.len == 1:
ret = "## 当前载入的 LLM 接入源\n"
for idx, llm in enumerate(context.llms):
ret += f"{idx}. {llm.llm_name}"
if llm.origin:
ret += f" (来源: {llm.origin})"
if context.message_handler.provider == llm.llm_instance:
ret += " (当前使用)"
ret += "\n"
ret += "\n使用 provider <序号> 切换 LLM 接入源。"
return CommandResult().message(ret)
else:
try:
idx = int(tokens.get(1))
if idx >= len(context.llms):
return CommandResult().message("provider: 无效的序号。")
context.message_handler.set_provider(context.llms[idx].llm_instance)
return CommandResult().message(f"已经成功切换到 LLM 接入源 {context.llms[idx].llm_name}")
except BaseException as e:
return CommandResult().message("provider: 参数错误。")
def set_nick(self, message: AstrMessageEvent, context: Context):
self._check_auth(message, context)
message_str = message.message_str
l = message_str.split(" ")
if len(l) == 1:
return CommandResult().message(f"设置机器人唤醒词。以唤醒词开头的消息会唤醒机器人处理,起到 @ 的效果。\n示例wake 昵称。当前唤醒词是:{context.config_helper.wake_prefix[0]}")
nick = l[1].strip()
if not nick:
return CommandResult().message("wake: 请指定唤醒词。")
context.config_helper.wake_prefix = [nick]
context.config_helper.save_config()
return CommandResult(
hit=True,
success=True,
message_chain=f"已经成功将唤醒前缀设定为 {nick}",
)
async def update(self, message: AstrMessageEvent, context: Context):
self._check_auth(message, context)
tokens = self.manager.command_parser.parse(message.message_str)
update_info = await context.updator.check_update(None, None)
if tokens.len == 1:
ret = ""
if not update_info:
ret = f"当前已经是最新版本 v{VERSION}"
else:
ret = f"发现新版本 {update_info.version},更新内容如下:\n---\n{update_info.body}\n---\n- 使用 /update latest 更新到最新版本。\n- 使用 /update vX.X.X 更新到指定版本。"
return CommandResult().message(ret)
else:
if tokens.get(1) == "latest":
try:
await context.updator.update()
return CommandResult().message(f"已经成功更新到最新版本 v{update_info.version}。要应用更新,请重启 AstrBot。输入 /reboot 即可重启")
except BaseException as e:
return CommandResult().message(f"更新失败。原因:{str(e)}")
elif tokens.get(1).startswith("v"):
try:
await context.updator.update(version=tokens.get(1))
return CommandResult().message(f"已经成功更新到版本 v{tokens.get(1)}。要应用更新,请重启 AstrBot。输入 /reboot 即可重启")
except BaseException as e:
return CommandResult().message(f"更新失败。原因:{str(e)}")
else:
return CommandResult().message("update: 参数错误。")
def reboot(self, message: AstrMessageEvent, context: Context):
self._check_auth(message, context)
context.updator._reboot(3, context)
return CommandResult(
hit=True,
success=True,
message_chain="AstrBot 将在 3s 后重启。",
)
async def plugin(self, message: AstrMessageEvent, context: Context):
tokens = self.manager.command_parser.parse(message.message_str)
if tokens.len == 1:
ret = "# 插件指令面板 \n- 安装插件: `plugin i 插件Github地址`\n- 卸载插件: `plugin d 插件名`\n- 查看插件列表:`plugin l`\n - 更新插件: `plugin u 插件名`\n"
return CommandResult().message(ret)
if tokens.get(1) == "l":
plugin_list_info = ""
for plugin in context.cached_plugins:
plugin_list_info += f"- `{plugin.metadata.plugin_name}` By {plugin.metadata.author}: {plugin.metadata.desc}\n"
if plugin_list_info.strip() == "":
return CommandResult().message("plugin v: 没有找到插件。")
return CommandResult().message(plugin_list_info)
self._check_auth(message, context)
if tokens.get(1) == "d":
if tokens.len == 2:
return CommandResult().message("plugin d: 请指定要卸载的插件名。")
plugin_name = tokens.get(2)
try:
self.plugin_manager.uninstall_plugin(plugin_name)
except BaseException as e:
return CommandResult().message(f"plugin d: 卸载插件失败。原因:{str(e)}")
return CommandResult().message(f"plugin d: 已经成功卸载插件 {plugin_name}")
elif tokens.get(1) == "i":
if tokens.len == 2:
return CommandResult().message("plugin i: 请指定要安装的插件的 Github 地址,或者前往可视化面板安装。")
plugin_url = tokens.get(2)
try:
await self.plugin_manager.install_plugin(plugin_url)
except BaseException as e:
return CommandResult().message(f"plugin i: 安装插件失败。原因:{str(e)}")
return CommandResult().message("plugin i: 已经成功安装插件。")
elif tokens.get(1) == "u":
if tokens.len == 2:
return CommandResult().message("plugin u: 请指定要更新的插件名。")
plugin_name = tokens.get(2)
try:
await context.plugin_updator.update(plugin_name)
except BaseException as e:
return CommandResult().message(f"plugin u: 更新插件失败。原因:{str(e)}")
return CommandResult().message(f"plugin u: 已经成功更新插件 {plugin_name}")
return CommandResult().message("plugin: 参数错误。")
async def help(self, message: AstrMessageEvent, context: Context):
notice = ""
try:
async with aiohttp.ClientSession() as session:
async with session.get("https://soulter.top/channelbot/notice.json") as resp:
notice = (await resp.json())["notice"]
except BaseException as e:
logger.warning("An error occurred while fetching astrbot notice. Never mind, it's not important.")
msg = "# 帮助中心\n## 指令\n"
for key, value in self.manager.commands_handler.items():
if value.plugin_metadata:
msg += f"- `{key}` ({value.plugin_metadata.plugin_name}): {value.description}\n"
else: msg += f"- `{key}`: {value.description}\n"
# plugins
if context.cached_plugins:
plugin_list_info = ""
for plugin in context.cached_plugins:
plugin_list_info += f"- `{plugin.metadata.plugin_name}` {plugin.metadata.desc}\n"
if plugin_list_info.strip() != "":
msg += "\n## 插件\n> 使用plugin v 插件名 查看插件帮助\n"
msg += plugin_list_info
msg += notice
return CommandResult().message(msg)
def web_search(self, message: AstrMessageEvent, context: Context):
l = message.message_str.split(' ')
if len(l) == 1:
return CommandResult(
hit=True,
success=True,
message_chain=f"网页搜索功能当前状态: {context.config_helper.llm_settings.web_search}",
)
elif l[1] == 'on':
context.config_helper.llm_settings.web_search = True
context.config_helper.save_config()
context.register_llm_tool("web_search", [{
"type": "string",
"name": "keyword",
"description": "搜索关键词"
}],
"通过搜索引擎搜索。如果问题需要获取近期、实时的消息,在网页上搜索(如天气、新闻或任何需要通过网页获取信息的问题),则调用此函数;如果没有,不要调用此函数。",
search_from_bing
)
context.register_llm_tool("fetch_website_content", [{
"type": "string",
"name": "url",
"description": "要获取内容的网页链接"
}],
"获取网页的内容。如果问题带有合法的网页链接并且用户有需求了解网页内容(例如: `帮我总结一下 https://github.com 的内容`), 就调用此函数。如果没有,不要调用此函数。",
fetch_website_content
)
return CommandResult(
hit=True,
success=True,
message_chain="已开启网页搜索",
)
elif l[1] == 'off':
context.config_helper.llm_settings.web_search = False
context.config_helper.save_config()
context.unregister_llm_tool("web_search")
context.unregister_llm_tool("fetch_website_content")
return CommandResult(
hit=True,
success=True,
message_chain="已关闭网页搜索",
)
else:
return CommandResult(
hit=True,
success=False,
message_chain="参数错误",
)
def t2i_toggle(self, message: AstrMessageEvent, context: Context):
p = context.config_helper.t2i
if p:
context.config_helper.t2i = False
context.config_helper.save_config()
return CommandResult(
hit=True,
success=True,
message_chain="已关闭文本转图片模式。",
)
context.config_helper.t2i = True
context.config_helper.save_config()
return CommandResult(
hit=True,
success=True,
message_chain="已开启文本转图片模式。",
)
def myid(self, message: AstrMessageEvent, context: Context):
try:
user_id = str(message.message_obj.sender.user_id)
return CommandResult(
hit=True,
success=True,
message_chain=f"你在此平台上的ID{user_id}",
)
except BaseException as e:
return CommandResult(
hit=True,
success=False,
message_chain=f"获取失败,原因: {str(e)}",
)
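The web_search handler above shows how tools are attached to the LLM via context.register_llm_tool(name, params, description, handler). A runnable sketch of that call shape with a hypothetical tool; FakeContext only stands in for the real Context object, and "weather_now" is not part of the project:

```python
class FakeContext:
    """Stand-in that records registrations; the real Context lives in type/types.py."""
    def __init__(self):
        self.llm_tools = {}

    def register_llm_tool(self, name, params, description, handler):
        self.llm_tools[name] = (params, description, handler)

async def weather_now(keyword: str) -> str:
    # hypothetical handler; the project's real tools are search_from_bing etc.
    return f"(假设的结果) {keyword}: 晴"

ctx = FakeContext()
ctx.register_llm_tool(
    "weather_now",
    [{"type": "string", "name": "keyword", "description": "城市名"}],
    "查询指定城市的当前天气。仅在用户明确询问天气时调用。",
    weather_now,
)
print(list(ctx.llm_tools))  # ['weather_now']
```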

model/command/manager.py

@@ -0,0 +1,145 @@
import heapq
import inspect
import traceback
from typing import Dict
from type.types import Context
from type.plugin import PluginMetadata
from type.message_event import AstrMessageEvent
from type.command import CommandResult
from type.register import RegisteredPlugins
from model.command.parser import CommandParser
from model.plugin.command import PluginCommandBridge
from SparkleLogging.utils.core import LogManager
from logging import Logger
from dataclasses import dataclass
logger: Logger = LogManager.GetLogger(log_name='astrbot')
@dataclass
class CommandMetadata():
inner_command: bool
plugin_metadata: PluginMetadata
handler: callable
use_regex: bool = False
ignore_prefix: bool = False
description: str = ""
class CommandManager():
def __init__(self):
self.commands = []
self.commands_handler: Dict[str, CommandMetadata] = {}
self.command_parser = CommandParser()
def register(self,
command: str,
description: str,
priority: int,
handler: callable,
use_regex: bool = False,
ignore_prefix: bool = False,
plugin_metadata: PluginMetadata = None,
):
'''
优先级越高,越先被处理。
use_regex: 是否使用正则表达式匹配指令。
'''
if command in self.commands_handler:
raise ValueError(f"Command {command} already exists.")
if not handler:
raise ValueError(f"Handler of {command} is None.")
heapq.heappush(self.commands, (-priority, command))
self.commands_handler[command] = CommandMetadata(
inner_command=plugin_metadata == None,
plugin_metadata=plugin_metadata,
handler=handler,
use_regex=use_regex,
ignore_prefix=ignore_prefix,
description=description
)
if plugin_metadata:
logger.info(f"已注册 {plugin_metadata.author}/{plugin_metadata.plugin_name} 的指令 {command}")
else:
logger.info(f"已注册指令 {command}")
def register_from_pcb(self, pcb: PluginCommandBridge):
for request in pcb.plugin_commands_waitlist:
plugin = None
for registered_plugin in pcb.cached_plugins:
if registered_plugin.metadata.plugin_name == request.plugin_name:
plugin = registered_plugin
break
if not plugin:
logger.warning(f"插件 {request.plugin_name} 未找到,无法注册指令 {request.command_name}")
else:
self.register(command=request.command_name,
description=request.description,
priority=request.priority,
handler=request.handler,
use_regex=request.use_regex,
ignore_prefix=request.ignore_prefix,
plugin_metadata=plugin.metadata)
self.plugin_commands_waitlist = []
async def check_command_ignore_prefix(self, message_str: str) -> bool:
for _, command in self.commands:
command_metadata = self.commands_handler[command]
if command_metadata.ignore_prefix:
trig = False
if self.commands_handler[command].use_regex:
trig = self.command_parser.regex_match(message_str, command)
else:
trig = message_str.startswith(command)
if trig:
return True
return False
async def scan_command(self, message_event: AstrMessageEvent, context: Context) -> CommandResult:
message_str = message_event.message_str
for _, command in self.commands:
trig = False
if self.commands_handler[command].use_regex:
trig = self.command_parser.regex_match(message_str, command)
else:
trig = message_str.startswith(command)
if trig:
logger.info(f"触发 {command} 指令。")
command_result = await self.execute_handler(command, message_event, context)
if not command_result:
continue
if command_result.hit:
return command_result
async def execute_handler(self,
command: str,
message_event: AstrMessageEvent,
context: Context) -> CommandResult:
command_metadata = self.commands_handler[command]
handler = command_metadata.handler
# call handler
try:
if inspect.iscoroutinefunction(handler):
command_result = await handler(message_event, context)
else:
command_result = handler(message_event, context)
# if not isinstance(command_result, CommandResult):
# raise ValueError(f"Command {command} handler should return CommandResult.")
if not command_result:
return
context.metrics_uploader.command_stats[command] += 1
return command_result
except BaseException as e:
logger.error(traceback.format_exc())
if not command_metadata.inner_command:
text = f"执行 {command}/({command_metadata.plugin_metadata.plugin_name} By {command_metadata.plugin_metadata.author}) 指令时发生了异常。{e}"
logger.error(text)
else:
text = f"执行 {command} 指令时发生了异常。{e}"
logger.error(text)
return CommandResult().message(text)
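register() above stores each command on a heap keyed by (-priority, command), so that when the entries are walked in ascending order the highest-priority command comes first. A tiny demo of that ordering trick; the command names are illustrative:

```python
import heapq

commands = []
for priority, name in [(10, "help"), (50, "plugin"), (10, "reset")]:
    heapq.heappush(commands, (-priority, name))

# Walking the entries in sorted order yields higher priorities first.
print([name for _, name in sorted(commands)])  # ['plugin', 'help', 'reset']
```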


@@ -1,287 +0,0 @@
from model.command.command import Command
from model.provider.openai_official import ProviderOpenAIOfficial
from cores.qqbot.personality import personalities
from model.platform.qq import QQ
from util import general_utils as gu
from cores.qqbot.global_object import GlobalObject
class CommandOpenAIOfficial(Command):
def __init__(self, provider: ProviderOpenAIOfficial, global_object: GlobalObject):
self.provider = provider
self.cached_plugins = {}
self.global_object = global_object
self.personality_str = ""
super().__init__(provider, global_object)
def check_command(self,
message: str,
session_id: str,
role: str,
platform: str,
message_obj):
self.platform = platform
hit, res = super().check_command(
message,
session_id,
role,
platform,
message_obj
)
if hit:
return True, res
if self.command_start_with(message, "reset", "重置"):
return True, self.reset(session_id, message)
elif self.command_start_with(message, "his", "历史"):
return True, self.his(message, session_id)
elif self.command_start_with(message, "token"):
return True, self.token(session_id)
elif self.command_start_with(message, "gpt"):
return True, self.gpt()
elif self.command_start_with(message, "status"):
return True, self.status()
elif self.command_start_with(message, "count"):
return True, self.count()
elif self.command_start_with(message, "help", "帮助"):
return True, self.help()
elif self.command_start_with(message, "unset"):
return True, self.unset(session_id)
elif self.command_start_with(message, "set"):
return True, self.set(message, session_id)
elif self.command_start_with(message, "update"):
return True, self.update(message, role)
elif self.command_start_with(message, "", "draw"):
return True, self.draw(message)
elif self.command_start_with(message, "key"):
return True, self.key(message)
elif self.command_start_with(message, "switch"):
return True, self.switch(message)
return False, None
def help(self):
commands = super().general_commands()
commands[''] = '画画'
commands['key'] = '添加OpenAI key'
commands['set'] = '人格设置面板'
commands['gpt'] = '查看gpt配置信息'
commands['status'] = '查看key使用状态'
commands['token'] = '查看本轮会话token'
return True, super().help_messager(commands, self.platform, self.global_object.cached_plugins), "help"
def reset(self, session_id: str, message: str = "reset"):
if self.provider is None:
return False, "未启动OpenAI ChatGPT语言模型.", "reset"
l = message.split(" ")
if len(l) == 1:
self.provider.forget(session_id)
return True, "重置成功", "reset"
if len(l) == 2 and l[1] == "p":
self.provider.forget(session_id)
if self.personality_str != "":
self.set(self.personality_str, session_id) # 重新设置人格
return True, "重置成功", "reset"
def his(self, message: str, session_id: str):
if self.provider is None:
return False, "未启动OpenAI ChatGPT语言模型.", "his"
#分页每页5条
msg = ''
size_per_page = 3
page = 1
if message[4:]:
page = int(message[4:])
# 检查是否有过历史记录
if session_id not in self.provider.session_dict:
msg = f"历史记录为空"
return True, msg, "his"
l = self.provider.session_dict[session_id]
max_page = len(l)//size_per_page + 1 if len(l)%size_per_page != 0 else len(l)//size_per_page
p = self.provider.get_prompts_by_cache_list(self.provider.session_dict[session_id], divide=True, paging=True, size=size_per_page, page=page)
return True, f"历史记录如下:\n{p}\n{page}页 | 共{max_page}\n*输入/his 2跳转到第2页", "his"
def token(self, session_id: str):
if self.provider is None:
return False, "未启动OpenAI ChatGPT语言模型.", "token"
return True, f"会话的token数: {self.provider.get_user_usage_tokens(self.provider.session_dict[session_id])}\n系统最大缓存token数: {self.provider.max_tokens}", "token"
def gpt(self):
if self.provider is None:
return False, "未启动OpenAI ChatGPT语言模型.", "gpt"
return True, f"OpenAI GPT配置:\n {self.provider.chatGPT_configs}", "gpt"
def status(self):
if self.provider is None:
return False, "未启动OpenAI ChatGPT语言模型.", "status"
chatgpt_cfg_str = ""
key_stat = self.provider.get_key_stat()
index = 1
max = 9000000
gg_count = 0
total = 0
tag = ''
for key in key_stat.keys():
sponsor = ''
total += key_stat[key]['used']
if key_stat[key]['exceed']:
gg_count += 1
continue
if 'sponsor' in key_stat[key]:
sponsor = key_stat[key]['sponsor']
chatgpt_cfg_str += f" |-{index}: {key[-8:]} {key_stat[key]['used']}/{max} {sponsor}{tag}\n"
index += 1
return True, f"⭐使用情况({str(gg_count)}个已用):\n{chatgpt_cfg_str}", "status"
def count(self):
if self.provider is None:
return False, "未启动OpenAI ChatGPT语言模型。", "reset"
guild_count, guild_msg_count, guild_direct_msg_count, session_count = self.provider.get_stat()
return True, f"【本指令部分统计可能已经过时】\n当前会话数: {len(self.provider.session_dict)}\n共有频道数: {guild_count} \n共有消息数: {guild_msg_count}\n私信数: {guild_direct_msg_count}\n历史会话数: {session_count}", "count"
def key(self, message: str):
if self.provider is None:
return False, "未启动OpenAI ChatGPT语言模型.", "reset"
l = message.split(" ")
if len(l) == 1:
msg = "感谢您赞助keykey为官方API使用请以以下格式赞助:\n/key xxxxx"
return True, msg, "key"
key = l[1]
if self.provider.check_key(key):
self.provider.append_key(key)
return True, f"*★,°*:.☆( ̄▽ ̄)/$:*.°★* 。\n该Key被验证为有效。感谢你的赞助~"
else:
return True, "该Key被验证为无效。也许是输入错误了或者重试。", "key"
def switch(self, message: str):
'''
切换账号
'''
l = message.split(" ")
if len(l) == 1:
_, ret, _ = self.status()
curr_ = self.provider.get_curr_key()
if curr_ is None:
ret += "当前您未选择账号。输入/switch <账号序号>切换账号。"
else:
ret += f"当前您选择的账号为:{curr_[-8:]}。输入/switch <账号序号>切换账号。"
return True, ret, "switch"
elif len(l) == 2:
try:
key_stat = self.provider.get_key_stat()
index = int(l[1])
if index > len(key_stat) or index < 1:
return True, "账号序号不合法。", "switch"
else:
try:
new_key = list(key_stat.keys())[index-1]
ret = self.provider.check_key(new_key)
self.provider.set_key(new_key)
except BaseException as e:
return True, "账号切换失败,原因: " + str(e), "switch"
return True, f"账号切换成功。", "switch"
except BaseException as e:
return True, "未知错误: "+str(e), "switch"
else:
return True, "参数过多。", "switch"
def unset(self, session_id: str):
if self.provider is None:
return False, "未启动OpenAI ChatGPT语言模型.", "unset"
self.provider.curr_personality = {}
self.provider.forget(session_id)
return True, "已清除人格并重置历史记录。", "unset"
def set(self, message: str, session_id: str):
if self.provider is None:
return False, "未启动OpenAI ChatGPT语言模型.", "set"
l = message.split(" ")
if len(l) == 1:
return True, f"【人格文本由PlexPt开源项目awesome-chatgpt-pr \
ompts-zh提供】\n设置人格: \n/set 人格名。例如/set 编剧\n人格列表: /set list\n人格详细信息: \
/set view 人格名\n自定义人格: /set 人格文本\n重置会话(清除人格): /reset\n重置会话(保留人格): /reset p\n【当前人格】: {str(self.provider.curr_personality)}", "set"
elif l[1] == "list":
msg = "人格列表:\n"
for key in personalities.keys():
msg += f" |-{key}\n"
msg += '\n\n*输入/set view 人格名查看人格详细信息'
msg += '\n*不定时更新人格库,请及时更新本项目。'
return True, msg, "set"
elif l[1] == "view":
if len(l) == 2:
return True, "请输入/set view 人格名", "set"
ps = l[2].strip()
if ps in personalities:
msg = f"人格{ps}的详细信息:\n"
msg += f"{personalities[ps]}\n"
else:
msg = f"人格{ps}不存在"
return True, msg, "set"
else:
ps = l[1].strip()
if ps in personalities:
self.provider.curr_personality = {
'name': ps,
'prompt': personalities[ps]
}
self.provider.session_dict[session_id] = []
new_record = {
"user": {
"role": "user",
"content": personalities[ps],
},
"AI": {
"role": "assistant",
"content": "好的,接下来我会扮演这个角色。"
},
'type': "personality",
'usage_tokens': 0,
'single-tokens': 0
}
self.provider.session_dict[session_id].append(new_record)
self.personality_str = message
return True, f"人格{ps}已设置。", "set"
else:
self.provider.curr_personality = {
'name': '自定义人格',
'prompt': ps
}
new_record = {
"user": {
"role": "user",
"content": ps,
},
"AI": {
"role": "assistant",
"content": "好的,接下来我会扮演这个角色。"
},
'type': "personality",
'usage_tokens': 0,
'single-tokens': 0
}
self.provider.session_dict[session_id] = []
self.provider.session_dict[session_id].append(new_record)
self.personality_str = message
return True, f"自定义人格已设置。 \n人格信息: {ps}", "set"
def draw(self, message):
if self.provider is None:
return False, "未启动OpenAI ChatGPT语言模型.", "draw"
if message.startswith("/画"):
message = message[2:]
elif message.startswith(""):
message = message[1:]
try:
# 画图模式传回3个参数
img_url = self.provider.image_chat(message)
return True, img_url, "draw"
except Exception as e:
if 'exceeded' in str(e):
return f"OpenAI API错误。原因\n{str(e)} \n超额了。可自己搭建一个机器人(Github仓库QQChannelChatGPT)"
return False, f"图片生成失败: {e}", "draw"


@@ -0,0 +1,186 @@
from model.command.manager import CommandManager
from type.message_event import AstrMessageEvent
from type.command import CommandResult
from type.types import Context
from SparkleLogging.utils.core import LogManager
from logging import Logger
from nakuru.entities.components import Image
from model.provider.openai_official import ProviderOpenAIOfficial, MODELS
from util.personality import personalities
from util.io import download_image_by_url
logger: Logger = LogManager.GetLogger(log_name='astrbot')
class OpenAIOfficialCommandHandler():
def __init__(self, manager: CommandManager) -> None:
self.manager = manager
self.provider = None
self.manager.register("reset", "重置会话", 10, self.reset)
self.manager.register("his", "查看历史记录", 10, self.his)
self.manager.register("status", "查看当前状态", 10, self.status)
self.manager.register("switch", "切换账号", 10, self.switch)
self.manager.register("unset", "清除个性化人格设置", 10, self.unset)
self.manager.register("set", "设置个性化人格", 10, self.set)
self.manager.register("draw", "调用 DallE 模型画图", 10, self.draw)
self.manager.register("model", "切换模型", 10, self.model)
self.manager.register("", "调用 DallE 模型画图", 10, self.draw)
def set_provider(self, provider):
self.provider = provider
async def reset(self, message: AstrMessageEvent, context: Context):
tokens = self.manager.command_parser.parse(message.message_str)
if tokens.len == 1:
await self.provider.forget(message.session_id, keep_system_prompt=True)
return CommandResult().message("重置成功")
elif tokens.get(1) == 'p':
await self.provider.forget(message.session_id)
async def model(self, message: AstrMessageEvent, context: Context):
tokens = self.manager.command_parser.parse(message.message_str)
if tokens.len == 1:
ret = await self._print_models()
return CommandResult().message(ret)
model = tokens.get(1)
if model.isdigit():
try:
models = await self.provider.get_models()
except BaseException as e:
logger.error(f"获取模型列表失败: {str(e)}")
return CommandResult().message("获取模型列表失败,无法使用编号切换模型。可以尝试直接输入模型名来切换,如 gpt-4o。")
models = list(models)
if int(model) <= len(models) and int(model) >= 1:
model = models[int(model)-1]
self.provider.set_model(model.id)
return CommandResult().message(f"模型已设置为 {model.id}")
else:
self.provider.set_model(model)
return CommandResult().message(f"模型已设置为 {model} (自定义)")
async def _print_models(self):
try:
models = await self.provider.get_models()
except BaseException as e:
return "获取模型列表失败: " + str(e)
i = 1
ret = "OpenAI GPT 类可用模型"
for model in models:
ret += f"\n{i}. {model.id}"
i += 1
ret += "\nTips: 使用 /model 模型名/编号,即可实时更换模型。如目标模型不存在于上表,请输入模型名。"
logger.debug(ret)
return ret
def his(self, message: AstrMessageEvent, context: Context):
tokens = self.manager.command_parser.parse(message.message_str)
size_per_page = 3
page = 1
if tokens.len == 2:
try:
page = int(tokens.get(1))
except BaseException as e:
return CommandResult().message("页码格式错误")
contexts, total_num = self.provider.dump_contexts_page(message.session_id, size_per_page, page=page)
t_pages = total_num // size_per_page + 1
return CommandResult().message(f"历史记录如下:\n{contexts}\n{page} 页 | 共 {t_pages}\n*输入 /his 2 跳转到第 2 页")
def status(self, message: AstrMessageEvent, context: Context):
keys_data = self.provider.get_keys_data()
ret = "OpenAI Key"
for k in keys_data:
status = "🟢" if keys_data[k] else "🔴"
ret += "\n|- " + k[:8] + " " + status
conf = self.provider.get_configs()
ret += "\n当前模型: " + conf['model']
if conf['model'] in MODELS:
ret += "\n最大上下文窗口: " + str(MODELS[conf['model']]) + " tokens"
if message.session_id in self.provider.session_memory and len(self.provider.session_memory[message.session_id]):
ret += "\n你的会话上下文: " + str(self.provider.session_memory[message.session_id][-1]['usage_tokens']) + " tokens"
return CommandResult().message(ret)
async def switch(self, message: AstrMessageEvent, context: Context):
'''
切换账号
'''
tokens = self.manager.command_parser.parse(message.message_str)
if tokens.len == 1:
_, ret, _ = self.status()
curr_ = self.provider.get_curr_key()
if curr_ is None:
ret += "当前您未选择账号。输入/switch <账号序号>切换账号。"
else:
ret += f"当前您选择的账号为:{curr_[-8:]}。输入/switch <账号序号>切换账号。"
return CommandResult().message(ret)
elif tokens.len == 2:
try:
key_stat = self.provider.get_keys_data()
index = int(tokens.get(1))
if index > len(key_stat) or index < 1:
return CommandResult().message("账号序号错误。")
else:
try:
new_key = list(key_stat.keys())[index-1]
self.provider.set_key(new_key)
except BaseException as e:
return CommandResult().message("切换账号未知错误: "+str(e))
return CommandResult().message("切换账号成功。")
except BaseException as e:
return CommandResult().message("切换账号错误。")
else:
return CommandResult().message("参数过多。")
def unset(self, message: AstrMessageEvent, context: Context):
self.provider.curr_personality = {}
self.provider.forget(message.session_id)
return CommandResult().message("已清除个性化设置。")
def set(self, message: AstrMessageEvent, context: Context):
l = message.message_str.split(" ")
if len(l) == 1:
return CommandResult().message("- 设置人格: \nset 人格名。例如 set 编剧\n- 人格列表: set list\n- 人格详细信息: set view 人格名\n- 自定义人格: set 人格文本\n- 重置会话(清除人格): reset\n- 重置会话(保留人格): reset p\n\n【当前人格】: " + str(self.provider.curr_personality['prompt']))
elif l[1] == "list":
msg = "人格列表:\n"
for key in personalities.keys():
msg += f"- {key}\n"
msg += '\n\n*输入 set view 人格名 查看人格详细信息'
return CommandResult().message(msg)
elif l[1] == "view":
if len(l) == 2:
return CommandResult().message("请输入人格名")
ps = l[2].strip()
if ps in personalities:
msg = f"人格{ps}的详细信息:\n"
msg += f"{personalities[ps]}\n"
else:
msg = f"人格{ps}不存在"
return CommandResult().message(msg)
else:
ps = "".join(l[1:]).strip()
if ps in personalities:
self.provider.curr_personality = {
'name': ps,
'prompt': personalities[ps]
}
self.provider.personality_set(self.provider.curr_personality, message.session_id)
return CommandResult().message(f"人格已设置。 \n人格信息: {ps}")
else:
self.provider.curr_personality = {
'name': '自定义人格',
'prompt': ps
}
self.provider.personality_set(self.provider.curr_personality, message.session_id)
return CommandResult().message(f"人格已设置。 \n人格信息: {ps}")
async def draw(self, message: AstrMessageEvent, context: Context):
message = message.message_str.removeprefix("")
img_url = await self.provider.image_generate(message)
return CommandResult(
message_chain=[Image.fromURL(img_url)],
)
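The his() pagination above computes t_pages as total_num // size_per_page + 1, which reports one extra page whenever total_num divides evenly (the older Command.his() guarded this case with a conditional). A ceil-division helper covers both cases; this is a sketch, not the project's code:

```python
def total_pages(total_num: int, size_per_page: int = 3) -> int:
    # ceil division without importing math; always report at least one page
    return max(1, -(-total_num // size_per_page))

print(total_pages(6))  # 2, whereas 6 // 3 + 1 would report 3
print(total_pages(7))  # 3
```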

model/command/parser.py

@@ -0,0 +1,25 @@
import re
class CommandTokens():
def __init__(self) -> None:
self.tokens = []
self.len = 0
def get(self, idx: int):
if idx >= self.len:
return None
return self.tokens[idx].strip()
class CommandParser():
def __init__(self):
pass
def parse(self, message: str):
cmd_tokens = CommandTokens()
cmd_tokens.tokens = message.split(" ")
cmd_tokens.len = len(cmd_tokens.tokens)
return cmd_tokens
def regex_match(self, message: str, command: str) -> bool:
return re.search(command, message, re.MULTILINE) is not None
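Usage sketch for the CommandParser above; the plugin URL is a made-up example and the import assumes the project layout shown in this diff is on sys.path:

```python
from model.command.parser import CommandParser

parser = CommandParser()
tokens = parser.parse("plugin i https://github.com/example/astrbot_plugin_demo")
print(tokens.len)     # 3
print(tokens.get(1))  # 'i'
print(tokens.get(5))  # None: out-of-range indexes return None instead of raising

print(parser.regex_match("更新到 v3.3.0", r"v\d+\.\d+\.\d+"))  # True
```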


@@ -1,134 +0,0 @@
from model.command.command import Command
from model.provider.rev_chatgpt import ProviderRevChatGPT
from model.platform.qq import QQ
from cores.qqbot.personality import personalities
from cores.qqbot.global_object import GlobalObject
class CommandRevChatGPT(Command):
def __init__(self, provider: ProviderRevChatGPT, global_object: GlobalObject):
self.provider = provider
self.cached_plugins = {}
self.global_object = global_object
self.personality_str = ""
super().__init__(provider, global_object)
def check_command(self,
message: str,
session_id: str,
role: str,
platform: str,
message_obj):
self.platform = platform
hit, res = super().check_command(
message,
session_id,
role,
platform,
message_obj
)
if hit:
return True, res
if self.command_start_with(message, "help", "帮助"):
return True, self.help()
elif self.command_start_with(message, "reset"):
return True, self.reset(session_id, message)
elif self.command_start_with(message, "update"):
return True, self.update(message, role)
elif self.command_start_with(message, "set"):
return True, self.set(message, session_id)
elif self.command_start_with(message, "switch"):
return True, self.switch(message, session_id)
return False, None
def reset(self, session_id, message: str):
l = message.split(" ")
if len(l) == 1:
self.provider.forget(session_id)
return True, "重置完毕。", "reset"
if len(l) == 2 and l[1] == "p":
self.provider.forget(session_id)
ret = self.provider.text_chat(self.personality_str)
return True, f"重置完毕(保留人格)。\n\n{ret}", "reset"
def set(self, message: str, session_id: str):
l = message.split(" ")
if len(l) == 1:
return True, f"设置人格: \n/set 人格名或人格文本。例如/set 编剧\n人格列表: /set list\n人格详细信息: \
/set view 人格名\n重置会话(清除人格): /reset\n重置会话(保留人格): /reset p", "set"
elif l[1] == "list":
msg = "人格列表:\n"
for key in personalities.keys():
msg += f" |-{key}\n"
msg += '\n\n*输入/set view 人格名查看人格详细信息'
msg += '\n*不定时更新人格库,请及时更新本项目。'
return True, msg, "set"
elif l[1] == "view":
if len(l) == 2:
return True, "请输入/set view 人格名", "set"
ps = l[2].strip()
if ps in personalities:
msg = f"人格【{ps}】详细信息:\n"
msg += f"{personalities[ps]}\n"
else:
msg = f"人格【{ps}】不存在。"
return True, msg, "set"
else:
ps = l[1].strip()
if ps in personalities:
self.reset(session_id, "reset")
self.personality_str = personalities[ps]
ret = self.provider.text_chat(self.personality_str, session_id)
return True, f"人格【{ps}】已设置。\n\n{ret}", "set"
else:
self.reset(session_id, "reset")
self.personality_str = ps
ret = self.provider.text_chat(ps, session_id)
return True, f"人格信息已设置。\n\n{ret}", "set"
def switch(self, message: str, session_id: str):
'''
切换账号
'''
l = message.split(" ")
rev_chatgpt = self.provider.get_revchatgpt()
if len(l) == 1:
ret = "当前账号:\n"
index = 0
curr_ = None
for revstat in rev_chatgpt:
index += 1
ret += f"[{index}]. {revstat['id']}\n"
# if session_id in revstat['user']:
# curr_ = revstat['id']
for user in revstat['user']:
if session_id == user['id']:
curr_ = revstat['id']
break
if curr_ is None:
ret += "当前您未选择账号。输入/switch <账号序号>切换账号。"
else:
ret += f"当前您选择的账号为:{curr_}。输入/switch <账号序号>切换账号。"
return True, ret, "switch"
elif len(l) == 2:
try:
index = int(l[1])
if index > len(self.provider.rev_chatgpt) or index < 1:
return True, "账号序号不合法。", "switch"
else:
# pop
for revstat in self.provider.rev_chatgpt:
if session_id in revstat['user']:
revstat['user'].remove(session_id)
# append
self.provider.rev_chatgpt[index - 1]['user'].append(session_id)
return True, f"切换账号成功。当前账号为:{self.provider.rev_chatgpt[index - 1]['id']}", "switch"
except BaseException:
return True, "账号序号不合法。", "switch"
else:
return True, "参数过多。", "switch"
def help(self):
commands = super().general_commands()
commands['set'] = '设置人格'
return True, super().help_messager(commands, self.platform, self.global_object.cached_plugins), "help"


@@ -1,52 +0,0 @@
from model.command.command import Command
from model.provider.rev_edgegpt import ProviderRevEdgeGPT
import asyncio
from model.platform.qq import QQ
from cores.qqbot.global_object import GlobalObject
class CommandRevEdgeGPT(Command):
def __init__(self, provider: ProviderRevEdgeGPT, global_object: GlobalObject):
self.provider = provider
self.cached_plugins = {}
self.global_object = global_object
super().__init__(provider, global_object)
def check_command(self,
message: str,
session_id: str,
role: str,
platform: str,
message_obj):
self.platform = platform
hit, res = super().check_command(
message,
session_id,
role,
platform,
message_obj
)
if hit:
return True, res
if self.command_start_with(message, "reset"):
return True, self.reset()
elif self.command_start_with(message, "help"):
return True, self.help()
elif self.command_start_with(message, "update"):
return True, self.update(message, role)
return False, None
def reset(self, loop = None):
if self.provider is None:
return False, "未启动Bing语言模型.", "reset"
res = asyncio.run_coroutine_threadsafe(self.provider.forget(), loop).result()
print(res)
if res:
return res, "重置成功", "reset"
else:
return res, "重置失败", "reset"
def help(self):
return True, super().help_messager(super().general_commands(), self.platform, self.global_object.cached_plugins), "help"


@@ -0,0 +1,86 @@
import abc
from typing import Union, Any, List
from nakuru.entities.components import Plain, At, Image, BaseMessageComponent
from type.astrbot_message import AstrBotMessage
from type.command import CommandResult
from type.astrbot_message import MessageType
class Platform():
def __init__(self, platform_name: str, context) -> None:
self.PLATFORM_NAME = platform_name
self.context = context
@abc.abstractmethod
async def handle_msg(self, message: AstrBotMessage):
'''
处理到来的消息
'''
pass
@abc.abstractmethod
async def reply_msg(self, message: AstrBotMessage,
result_message: List[BaseMessageComponent]):
'''
回复用户唤醒机器人的消息。(被动回复)
'''
pass
@abc.abstractmethod
async def send_msg(self, target: Any, result_message: CommandResult):
'''
发送消息(主动)
'''
pass
@abc.abstractmethod
async def send_msg_new(self, message_type: MessageType, target: str, result_message: CommandResult):
'''
发送消息(主动)
'''
pass
def parse_message_outline(self, message: AstrBotMessage) -> str:
'''
将消息解析成大纲消息形式,如: xxxxx[图片]xxxxx。用于输出日志等。
'''
if isinstance(message, str):
return message
ret = ''
parsed = message if isinstance(message, list) else message.message
try:
for node in parsed:
if isinstance(node, Plain):
ret += node.text.replace('\n', ' ')
elif isinstance(node, At):
ret += f'[At: {node.name}/{node.qq}]'
elif isinstance(node, Image):
ret += '[图片]'
except Exception as e:
pass
return ret[:100] if len(ret) > 100 else ret
def check_nick(self, message_str: str) -> bool:
w = self.context.config_helper.wake_prefix
if not w: return False
for nick in w:
if nick and message_str.strip().startswith(nick):
return True
return False
async def convert_to_t2i_chain(self, message_result: list) -> list:
plain_str = ""
rendered_images = []
for i in message_result:
if isinstance(i, Plain):
plain_str += i.text
if plain_str and len(plain_str) > 50:
p = await self.context.image_renderer.render(plain_str, return_url=True)
if p.startswith('http'):
rendered_images.append(Image.fromURL(p))
else:
rendered_images.append(Image.fromFileSystem(p))
return rendered_images
async def record_metrics(self):
self.context.metrics_uploader.increment_platform_stat(self.PLATFORM_NAME)
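A minimal sketch of what a concrete adapter built on the Platform base class above has to implement. EchoPlatform is hypothetical (the real adapters are qq_official, aiocqhttp and nakuru) and assumes Platform is importable as defined here; parse_message_outline, check_nick, convert_to_t2i_chain and record_metrics are inherited:

```python
from typing import Any, List

class EchoPlatform(Platform):
    """Hypothetical adapter that simply echoes messages back."""

    def __init__(self, context) -> None:
        super().__init__("echo", context)

    async def handle_msg(self, message):
        print(self.parse_message_outline(message))      # log an outline of the inbound message
        await self.reply_msg(message, message.message)   # echo the message chain straight back

    async def reply_msg(self, message, result_message: List[Any]):
        print("reply ->", result_message)

    async def send_msg(self, target: Any, result_message):
        print("send ->", target, result_message)

    async def send_msg_new(self, message_type, target: str, result_message):
        print("send_new ->", message_type, target, result_message)
```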

model/platform/manager.py

@@ -0,0 +1,97 @@
import asyncio
from util.io import port_checker
from type.register import RegisteredPlatform
from type.types import Context
from SparkleLogging.utils.core import LogManager
from logging import Logger
from astrbot.message.handler import MessageHandler
from util.cmd_config import (
PlatformConfig,
AiocqhttpPlatformConfig,
NakuruPlatformConfig,
QQOfficialPlatformConfig
)
logger: Logger = LogManager.GetLogger(log_name='astrbot')
class PlatformManager():
def __init__(self, context: Context, message_handler: MessageHandler) -> None:
self.context = context
self.msg_handler = message_handler
def load_platforms(self):
tasks = []
platforms = self.context.config_helper.platform
logger.info(f"加载 {len(platforms)} 个机器人消息平台...")
for platform in platforms:
if not platform.enable:
continue
if platform.name == "qq_official":
assert isinstance(platform, QQOfficialPlatformConfig), "qq_official: 无法识别的配置类型。"
logger.info(f"加载 QQ官方 机器人消息平台 (appid: {platform.appid})")
tasks.append(asyncio.create_task(self.qqofficial_bot(platform), name="qqofficial-adapter"))
elif platform.name == "nakuru":
assert isinstance(platform, NakuruPlatformConfig), "nakuru: 无法识别的配置类型。"
logger.info(f"加载 QQ(nakuru) 机器人消息平台 ({platform.host}, {platform.websocket_port}, {platform.port})")
tasks.append(asyncio.create_task(self.nakuru_bot(platform), name="nakuru-adapter"))
elif platform.name == "aiocqhttp":
assert isinstance(platform, AiocqhttpPlatformConfig), "aiocqhttp: 无法识别的配置类型。"
logger.info("加载 QQ(aiocqhttp) 机器人消息平台")
tasks.append(asyncio.create_task(self.aiocq_bot(platform), name="aiocqhttp-adapter"))
return tasks
async def nakuru_bot(self, config: NakuruPlatformConfig):
'''
运行 QQ(nakuru 适配器)
'''
from model.platform.qq_nakuru import QQNakuru
noticed = False
host = config.host
port = config.websocket_port
http_port = config.port
logger.info(
f"正在检查连接...host: {host}, ws port: {port}, http port: {http_port}")
while True:
if not port_checker(port=port, host=host) or not port_checker(port=http_port, host=host):
if not noticed:
noticed = True
logger.warning(
f"连接到{host}:{port}(或{http_port})失败。程序会每隔 5s 自动重试。")
await asyncio.sleep(5)
else:
logger.info("nakuru 适配器已连接。")
break
try:
qq_gocq = QQNakuru(self.context, self.msg_handler, config)
self.context.platforms.append(RegisteredPlatform(
platform_name="nakuru", platform_instance=qq_gocq, origin="internal"))
await qq_gocq.run()
except BaseException as e:
logger.error("启动 nakuru 适配器时出现错误: " + str(e))
def aiocq_bot(self, config):
'''
运行 QQ(aiocqhttp 适配器)
'''
from model.platform.qq_aiocqhttp import AIOCQHTTP
qq_aiocqhttp = AIOCQHTTP(self.context, self.msg_handler, config)
self.context.platforms.append(RegisteredPlatform(
platform_name="aiocqhttp", platform_instance=qq_aiocqhttp, origin="internal"))
return qq_aiocqhttp.run_aiocqhttp()
def qqofficial_bot(self, config):
'''
运行 QQ 官方机器人适配器
'''
try:
from model.platform.qq_official import QQOfficial
qqchannel_bot = QQOfficial(self.context, self.msg_handler, config)
self.context.platforms.append(RegisteredPlatform(
platform_name="qqofficial", platform_instance=qqchannel_bot, origin="internal"))
return qqchannel_bot.run()
except BaseException as e:
logger.error("启动 QQ官方机器人适配器时出现错误: " + str(e))


@@ -1,190 +0,0 @@
from nakuru.entities.components import Plain, At, Image, Node
from util import general_utils as gu
from util.cmd_config import CmdConfig
import asyncio
from nakuru import (
CQHTTP,
GuildMessage,
GroupMessage,
FriendMessage
)
from typing import Union
import time
class FakeSource:
def __init__(self, type, group_id):
self.type = type
self.group_id = group_id
class QQ:
def __init__(self, is_start: bool, cc: CmdConfig = None, gocq_loop = None) -> None:
self.is_start = is_start
self.gocq_loop = gocq_loop
self.cc = cc
self.waiting = {}
self.gocq_cnt = 0
def run_bot(self, gocq):
self.client: CQHTTP = gocq
self.client.run()
def get_msg_loop(self):
return self.gocq_loop
def get_cnt(self):
return self.gocq_cnt
def set_cnt(self, cnt):
self.gocq_cnt = cnt
async def send_qq_msg(self,
source,
res,
image_mode=None):
self.gocq_cnt += 1
if not self.is_start:
raise Exception("管理员未启动GOCQ平台")
"""
res可以是一个数组, 也就是gocq的消息链。
插件开发者请使用send方法, 可以不用直接调用这个方法。
"""
gu.log("回复GOCQ消息: "+str(res), level=gu.LEVEL_INFO, tag="GOCQ", max_len=300)
if isinstance(source, int):
source = FakeSource("GroupMessage", source)
# str convert to CQ Message Chain
if isinstance(res, str):
res_str = res
res = []
if source.type == "GroupMessage" and not isinstance(source, FakeSource):
res.append(At(qq=source.user_id))
res.append(Plain(text=res_str))
# if image mode, put all Plain texts into a new picture.
if image_mode is None:
image_mode = self.cc.get('qq_pic_mode', False)
if image_mode and isinstance(res, list):
plains = []
news = []
for i in res:
if isinstance(i, Plain):
plains.append(i.text)
else:
news.append(i)
plains_str = "".join(plains).strip()
if plains_str != "" and len(plains_str) > 50:
p = gu.create_markdown_image("".join(plains))
news.append(Image.fromFileSystem(p))
res = news
# 回复消息链
if isinstance(res, list) and len(res) > 0:
if source.type == "GuildMessage":
await self.client.sendGuildChannelMessage(source.guild_id, source.channel_id, res)
return
elif source.type == "FriendMessage":
await self.client.sendFriendMessage(source.user_id, res)
return
elif source.type == "GroupMessage":
# 过长时forward发送
plain_text_len = 0
image_num = 0
for i in res:
if isinstance(i, Plain):
plain_text_len += len(i.text)
elif isinstance(i, Image):
image_num += 1
if plain_text_len > self.cc.get('qq_forward_threshold', 200):
# 删除At
for i in res:
if isinstance(i, At):
res.remove(i)
node = Node(res)
# node.content = res
node.uin = 123456
node.name = f"bot"
node.time = int(time.time())
# print(node)
nodes=[node]
await self.client.sendGroupForwardMessage(source.group_id, nodes)
return
await self.client.sendGroupMessage(source.group_id, res)
return
def send(self,
to,
res,
image_mode=False,
):
'''
提供给插件的发送QQ消息接口, 不用在外部await。
参数说明第一个参数可以是消息对象也可以是QQ群号。第二个参数是消息内容消息内容可以是消息链列表也可以是纯文字信息
第三个参数是是否开启图片模式,如果开启,那么所有纯文字信息都会被合并成一张图片。
'''
try:
asyncio.run_coroutine_threadsafe(self.send_qq_msg(to, res, image_mode), self.gocq_loop).result()
except BaseException as e:
raise e
def send_guild(self,
message_obj,
res,
):
'''
提供给插件的发送GOCQ QQ频道消息接口, 不用在外部await。
参数说明:第一个参数必须是消息对象, 第二个参数是消息内容(消息内容可以是消息链列表,也可以是纯文字信息)。
'''
try:
asyncio.run_coroutine_threadsafe(self.send_qq_msg(message_obj, res), self.gocq_loop).result()
except BaseException as e:
raise e
def create_text_image(title: str, text: str, max_width=30, font_size=20):
'''
文本转图片。
title: 标题
text: 文本内容
max_width: 文本宽度最大值默认30
font_size: 字体大小默认20
返回:文件路径
'''
try:
img = gu.word2img(title, text, max_width, font_size)
p = gu.save_temp_img(img)
return p
except Exception as e:
raise e
def wait_for_message(self, group_id) -> Union[GroupMessage, FriendMessage, GuildMessage]:
'''
等待下一条消息,超时 300s 后抛出异常
'''
self.waiting[group_id] = ''
cnt = 0
while True:
if group_id in self.waiting and self.waiting[group_id] != '':
# 去掉
ret = self.waiting[group_id]
del self.waiting[group_id]
return ret
cnt += 1
if cnt > 300:
raise Exception("等待消息超时。")
time.sleep(1)
def get_client(self):
return self.client
def nakuru_method_invoker(self, func, *args, **kwargs):
"""
返回一个方法调用器可以用来立即调用nakuru的方法。
"""
try:
ret = asyncio.run_coroutine_threadsafe(func(*args, **kwargs), self.gocq_loop).result()
return ret
except BaseException as e:
raise e


@@ -0,0 +1,279 @@
import time
import asyncio
import traceback
import logging
from aiocqhttp import CQHttp, Event
from aiocqhttp.exceptions import ActionFailed
from . import Platform
from type.astrbot_message import *
from type.message_event import *
from type.command import *
from typing import Union, List, Dict
from nakuru.entities.components import *
from SparkleLogging.utils.core import LogManager
from logging import Logger
from astrbot.message.handler import MessageHandler
from util.cmd_config import PlatformConfig, AiocqhttpPlatformConfig
logger: Logger = LogManager.GetLogger(log_name='astrbot')
class AIOCQHTTP(Platform):
def __init__(self, context: Context,
message_handler: MessageHandler,
platform_config: PlatformConfig) -> None:
super().__init__("aiocqhttp", context)
assert isinstance(platform_config, AiocqhttpPlatformConfig), "aiocqhttp: 无法识别的配置类型。"
self.message_handler = message_handler
self.waiting = {}
self.context = context
self.config = platform_config
self.unique_session = context.config_helper.platform_settings.unique_session
self.host = platform_config.ws_reverse_host
self.port = platform_config.ws_reverse_port
self.admins = context.config_helper.admins_id
def convert_message(self, event: Event) -> AstrBotMessage:
abm = AstrBotMessage()
abm.self_id = str(event.self_id)
abm.tag = "aiocqhttp"
abm.sender = MessageMember(str(event.sender['user_id']), event.sender['nickname'])
if event['message_type'] == 'group':
abm.type = MessageType.GROUP_MESSAGE
elif event['message_type'] == 'private':
abm.type = MessageType.FRIEND_MESSAGE
if self.unique_session:
abm.session_id = abm.sender.user_id
else:
abm.session_id = str(event.group_id) if abm.type == MessageType.GROUP_MESSAGE else abm.sender.user_id
abm.message_id = str(event.message_id)
abm.message = []
message_str = ""
if not isinstance(event.message, list):
err = f"aiocqhttp: 无法识别的消息类型: {str(event.message)},此条消息将被忽略。如果您在使用 go-cqhttp请将其配置文件中的 message.post-format 更改为 array。"
logger.critical(err)
try:
self.bot.send(event, err)
except BaseException as e:
logger.error(f"回复消息失败: {e}")
return
for m in event.message:
t = m['type']
a = None
if t == 'at':
a = At(**m['data'])
abm.message.append(a)
if t == 'text':
a = Plain(text=m['data']['text'])
message_str += m['data']['text'].strip()
abm.message.append(a)
if t == 'image':
file = m['data']['file'] if 'file' in m['data'] else None
url = m['data']['url'] if 'url' in m['data'] else None
a = Image(file=file, url=url)
abm.message.append(a)
abm.timestamp = int(time.time())
abm.message_str = message_str
abm.raw_message = event
return abm
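# Editor's note (illustrative, not part of the original source): with go-cqhttp's
# message.post-format set to "array", `event.message` arrives as a list of segments
# which the loop above maps onto At/Plain/Image components, for example:
#
#   [{"type": "at",    "data": {"qq": "123456"}},
#    {"type": "text",  "data": {"text": " hello"}},
#    {"type": "image", "data": {"file": "abc.image", "url": "https://example.com/abc.jpg"}}]
#
# All ids, file names and URLs above are made-up sample values.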
def run_aiocqhttp(self):
if not self.host or not self.port:
return
self.bot = CQHttp(use_ws_reverse=True, import_name='aiocqhttp', api_timeout_sec=180)
@self.bot.on_message('group')
async def group(event: Event):
abm = self.convert_message(event)
if abm:
await self.handle_msg(abm)
@self.bot.on_message('private')
async def private(event: Event):
abm = self.convert_message(event)
if abm:
await self.handle_msg(abm)
bot = self.bot.run_task(host=self.host, port=int(self.port), shutdown_trigger=self.shutdown_trigger_placeholder)
for handler in logging.root.handlers[:]:
logging.root.removeHandler(handler)
logging.getLogger('aiocqhttp').setLevel(logging.ERROR)
return bot
async def shutdown_trigger_placeholder(self):
while self.context.running:
await asyncio.sleep(1)
async def pre_check(self, message: AstrBotMessage) -> Tuple[bool, str]:
# returns (should_respond, reason): friend messages, an At component pointing to
# self_id, prefix-free commands and nicknames all wake the bot
if message.type == MessageType.FRIEND_MESSAGE:
return True, "friend"
for comp in message.message:
if isinstance(comp, At) and str(comp.qq) == message.self_id:
return True, "at"
# check commands which ignore prefix
if await self.context.command_manager.check_command_ignore_prefix(message.message_str):
return True, "command"
# check nicks
if self.check_nick(message.message_str):
return True, "nick"
return False, "none"
async def handle_msg(self, message: AstrBotMessage):
logger.info(
f"{message.sender.nickname}/{message.sender.user_id} -> {self.parse_message_outline(message)}")
ok, reason = await self.pre_check(message)
if not ok:
return
# 解析 role
sender_id = str(message.sender.user_id)
if sender_id in self.admins:
role = 'admin'
else:
role = 'member'
# parse unified message origin
unified_msg_origin = None
assert isinstance(message.raw_message, Event)
if message.type == MessageType.GROUP_MESSAGE:
unified_msg_origin = f"aiocqhttp:{message.type.value}:{message.raw_message.group_id}"
elif message.type == MessageType.FRIEND_MESSAGE:
unified_msg_origin = f"aiocqhttp:{message.type.value}:{message.sender.user_id}"
logger.debug(f"unified_msg_origin: {unified_msg_origin}")
# construct astrbot message event
ame = AstrMessageEvent.from_astrbot_message(message,
self.context,
"aiocqhttp",
message.session_id,
role,
unified_msg_origin,
reason == "command") # only_command
# transfer control to message handler
message_result = await self.message_handler.handle(ame)
if not message_result: return
await self.reply_msg(message, message_result.result_message, message_result.use_t2i)
if message_result.callback:
message_result.callback()
# 如果是等待回复的消息
if message.session_id in self.waiting and self.waiting[message.session_id] == '':
self.waiting[message.session_id] = message
return message_result
async def reply_msg(self,
message: AstrBotMessage,
result_message: list,
use_t2i: bool = None):
"""
Reply to the message with which the user woke the bot (passive reply).
"""
res = result_message
if isinstance(res, str):
res = [Plain(text=res), ]
# if image mode, put all Plain texts into a new picture.
if (use_t2i or (use_t2i is None and self.context.config_helper.t2i)) and isinstance(result_message, list):
rendered_images = await self.convert_to_t2i_chain(res)
if rendered_images:
try:
await self._reply(message, rendered_images)
return rendered_images
except BaseException as e:
logger.warn(traceback.format_exc())
logger.warn(f"以文本转图片的形式回复消息时发生错误: {e},将尝试默认方式。")
await self._reply(message, res)
return res
async def _reply(self, message: Union[AstrBotMessage, Dict], message_chain: List[BaseMessageComponent]):
await self.record_metrics()
if isinstance(message_chain, str):
message_chain = [Plain(text=message_chain), ]
if isinstance(message, AstrBotMessage):
logger.info(
f"{message.sender.user_id} <- {self.parse_message_outline(message)}")
else:
logger.info(f"回复消息: {message_chain}")
ret = []
image_idx = []
for idx, segment in enumerate(message_chain):
d = segment.toDict()
if isinstance(segment, Plain):
d['type'] = 'text'
if isinstance(segment, Image):
image_idx.append(idx)
ret.append(d)
if os.environ.get('TEST_MODE', 'off') == 'on':
logger.info(f"回复消息: {ret}")
return
try:
await self._reply_wrapper(message, ret)
except ActionFailed as e:
if e.retcode == 1200:
# ENOENT
if not image_idx:
raise e
logger.warn("回复失败。检测到失败原因为文件未找到,猜测用户的协议端与 AstrBot 位于不同的文件系统上。尝试采用上传图片的方式发图。")
for idx in image_idx:
if ret[idx]['data']['file'].startswith('file://'):
logger.info(f"正在上传图片: {ret[idx]['data']['path']}")
image_url = await self.context.image_uploader.upload_image(ret[idx]['data']['path'])
logger.info(f"上传成功。")
ret[idx]['data']['file'] = image_url
ret[idx]['data']['path'] = image_url
await self._reply_wrapper(message, ret)
else:
logger.error(traceback.format_exc())
logger.error(f"回复消息失败: {e}")
raise e
async def _reply_wrapper(self, message: Union[AstrBotMessage, Dict], ret: List):
if isinstance(message, AstrBotMessage):
await self.bot.send(message.raw_message, ret)
if isinstance(message, dict):
if 'group_id' in message:
await self.bot.send_group_msg(group_id=message['group_id'], message=ret)
elif 'user_id' in message:
await self.bot.send_private_msg(user_id=message['user_id'], message=ret)
else:
raise Exception("aiocqhttp: 无法识别的消息来源。仅支持 group_id 和 user_id。")
async def send_msg(self, target: Dict[str, int], result_message: CommandResult):
'''
Proactively send a message to a QQ user or a QQ group.
`target` takes a dict:
- to message a QQ user, add the key `user_id` with the QQ number as an int;
- to message a group chat, add the key `group_id` with the group number as an int.
'''
await self._reply(target, result_message.message_chain)
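# Illustrative usage sketch (editor's addition): `result` is assumed to be a
# CommandResult whose `message_chain` already holds the components to send;
# the ids are placeholder values.
#
#   await platform.send_msg({'group_id': 123456789}, result)   # to a group
#   await platform.send_msg({'user_id': 10001}, result)        # to a user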
async def send_msg_new(self, message_type: MessageType, target: str, result_message: CommandResult):
if message_type == MessageType.GROUP_MESSAGE:
await self.send_msg({'group_id': int(target)}, result_message)
elif message_type == MessageType.FRIEND_MESSAGE:
await self.send_msg({'user_id': int(target)}, result_message)
else:
raise Exception("aiocqhttp: 无法识别的消息类型。")

310
model/platform/qq_nakuru.py Normal file

@@ -0,0 +1,310 @@
import time, asyncio, traceback
from nakuru.entities.components import Plain, At, Image, Node, BaseMessageComponent
from nakuru import (
CQHTTP,
GuildMessage,
GroupMessage,
FriendMessage,
GroupMemberIncrease,
MessageItemType
)
from typing import Union, List, Dict, Tuple
from type.types import Context
from . import Platform
from type.astrbot_message import *
from type.message_event import *
from type.command import *
from SparkleLogging.utils.core import LogManager
from logging import Logger
from astrbot.message.handler import MessageHandler
from util.cmd_config import PlatformConfig, NakuruPlatformConfig
logger: Logger = LogManager.GetLogger(log_name='astrbot')
class FakeSource:
def __init__(self, type, group_id):
self.type = type
self.group_id = group_id
class QQNakuru(Platform):
def __init__(self, context: Context,
message_handler: MessageHandler,
platform_config: PlatformConfig) -> None:
super().__init__("nakuru", context)
assert isinstance(platform_config, NakuruPlatformConfig), "gocq: 无法识别的配置类型。"
self.loop = asyncio.new_event_loop()
asyncio.set_event_loop(self.loop)
self.message_handler = message_handler
self.waiting = {}
self.context = context
self.unique_session = context.config_helper.platform_settings.unique_session
self.config = platform_config
self.admins = context.config_helper.admins_id
self.client = CQHTTP(
host=self.config.host,
port=self.config.websocket_port,
http_port=self.config.port
)
gocq_app = self.client
@gocq_app.receiver("GroupMessage")
async def _(app: CQHTTP, source: GroupMessage):
if self.config.enable_group:
abm = self.convert_message(source)
await self.handle_msg(abm)
@gocq_app.receiver("FriendMessage")
async def _(app: CQHTTP, source: FriendMessage):
if self.config.enable_direct_message:
abm = self.convert_message(source)
await self.handle_msg(abm)
@gocq_app.receiver("GuildMessage")
async def _(app: CQHTTP, source: GuildMessage):
if self.config.enable_guild:
abm = self.convert_message(source)
await self.handle_msg(abm)
def pre_check(self, message: AstrBotMessage) -> Tuple[bool, str]:
# returns (should_respond, reason): friend messages, an At component pointing to self_id, prefix-free commands and nicknames all wake the bot
if message.type == MessageType.FRIEND_MESSAGE:
return True, "friend"
for comp in message.message:
if isinstance(comp, At) and str(comp.qq) == message.self_id:
return True, "at"
# check commands which ignore prefix
if self.context.command_manager.check_command_ignore_prefix(message.message_str):
return True, "command"
# check nicks
if self.check_nick(message.message_str):
return True, "nick"
return False, "none"
def run(self):
coro = self.client._run()
return coro
async def handle_msg(self, message: AstrBotMessage):
logger.info(
f"{message.sender.nickname}/{message.sender.user_id} -> {self.parse_message_outline(message)}")
assert isinstance(message.raw_message,
(GroupMessage, FriendMessage, GuildMessage))
# 判断是否响应消息
ok, reason = self.pre_check(message)
if not ok:
return
# 解析 session_id
if self.unique_session or message.type == MessageType.FRIEND_MESSAGE:
session_id = message.raw_message.user_id
elif message.type == MessageType.GROUP_MESSAGE:
session_id = message.raw_message.group_id
elif message.type == MessageType.GUILD_MESSAGE:
session_id = message.raw_message.channel_id
else:
session_id = message.raw_message.user_id
message.session_id = session_id
# 解析 role
sender_id = str(message.raw_message.user_id)
if sender_id in self.admins:
role = 'admin'
else:
role = 'member'
# parse unified message origin
unified_msg_origin = None
if message.type == MessageType.GROUP_MESSAGE:
assert isinstance(message.raw_message, GroupMessage)
unified_msg_origin = f"nakuru:{message.type.value}:{message.raw_message.group_id}"
elif message.type == MessageType.FRIEND_MESSAGE:
assert isinstance(message.raw_message, FriendMessage)
unified_msg_origin = f"nakuru:{message.type.value}:{message.sender.user_id}"
elif message.type == MessageType.GUILD_MESSAGE:
assert isinstance(message.raw_message, GuildMessage)
unified_msg_origin = f"nakuru:{message.type.value}:{message.raw_message.channel_id}"
logger.debug(f"unified_msg_origin: {unified_msg_origin}")
# construct astrbot message event
ame = AstrMessageEvent.from_astrbot_message(message,
self.context,
"nakuru",
session_id,
role,
unified_msg_origin,
reason == 'command') # only_command
# transfer control to message handler
message_result = await self.message_handler.handle(ame)
if not message_result: return
await self.reply_msg(message, message_result.result_message, message_result.use_t2i)
if message_result.callback:
message_result.callback()
# 如果是等待回复的消息
if session_id in self.waiting and self.waiting[session_id] == '':
self.waiting[session_id] = message
async def reply_msg(self,
message: AstrBotMessage,
result_message: List[BaseMessageComponent],
use_t2i: bool = None):
"""
Reply to the message with which the user woke the bot (passive reply).
"""
source = message.raw_message
res = result_message
assert isinstance(source,
(GroupMessage, FriendMessage, GuildMessage))
logger.info(
f"{source.user_id} <- {self.parse_message_outline(res)}")
if isinstance(res, str):
res = [Plain(text=res), ]
# if image mode, put all Plain texts into a new picture.
if (use_t2i or (use_t2i is None and self.context.config_helper.t2i)) and isinstance(result_message, list):
rendered_images = await self.convert_to_t2i_chain(res)
if rendered_images:
try:
await self._reply(source, rendered_images)
return
except BaseException as e:
logger.warn(traceback.format_exc())
logger.warn(f"以文本转图片的形式回复消息时发生错误: {e},将尝试默认方式。")
await self._reply(source, res)
async def _reply(self, source, message_chain: List[BaseMessageComponent]):
await self.record_metrics()
if isinstance(message_chain, str):
message_chain = [Plain(text=message_chain), ]
is_dict = isinstance(source, dict)
typ = None
if is_dict:
if "group_id" in source:
typ = "GroupMessage"
elif "user_id" in source:
typ = "FriendMessage"
elif "guild_id" in source:
typ = "GuildMessage"
else:
typ = source.type
if typ == "GuildMessage":
guild_id = source['guild_id'] if is_dict else source.guild_id
chan_id = source['channel_id'] if is_dict else source.channel_id
await self.client.sendGuildChannelMessage(guild_id, chan_id, message_chain)
elif typ == "FriendMessage":
user_id = source['user_id'] if is_dict else source.user_id
await self.client.sendFriendMessage(user_id, message_chain)
elif typ == "GroupMessage":
group_id = source['group_id'] if is_dict else source.group_id
# 过长时forward发送
plain_text_len = 0
image_num = 0
for i in message_chain:
if isinstance(i, Plain):
plain_text_len += len(i.text)
elif isinstance(i, Image):
image_num += 1
if plain_text_len > self.context.config_helper.platform_settings.forward_threshold or image_num > 1:
# strip At components before forwarding (avoid mutating the list while iterating over it)
message_chain = [i for i in message_chain if not isinstance(i, At)]
node = Node(message_chain)
node.uin = 123456
node.name = "bot"
node.time = int(time.time())
nodes = [node]
await self.client.sendGroupForwardMessage(group_id, nodes)
return
await self.client.sendGroupMessage(group_id, message_chain)
async def send_msg(self, target: Dict[str, int], result_message: CommandResult):
'''
Proactively send a message to a user, a group, or a guild channel.
`target` takes a dict:
- to message a QQ user, add the key `user_id` with the QQ number as an int;
- to message a group chat, add the key `group_id` with the group number as an int;
- to message a guild channel, add the keys `guild_id` and `channel_id`, both ints.
Note that `guild_id` is the guild id, not the channel number.
'''
await self._reply(target, result_message.message_chain)
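# Illustrative usage sketch (editor's addition): `result` is assumed to be a
# CommandResult with a populated `message_chain`; all ids are placeholder values.
#
#   await platform.send_msg({'group_id': 123456789}, result)                       # group
#   await platform.send_msg({'user_id': 10001}, result)                            # private chat
#   await platform.send_msg({'guild_id': 111111, 'channel_id': 222222}, result)    # guild channel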
async def send_msg_new(self, message_type: MessageType, target: str, result_message: CommandResult):
'''
Proactively send a message to a user, a group, or a guild channel.
`message_type` is a MessageType enum value:
- use MessageType.FRIEND_MESSAGE for a QQ user;
- use MessageType.GROUP_MESSAGE for a group chat;
- use MessageType.GUILD_MESSAGE for a guild channel.
'''
if message_type == MessageType.FRIEND_MESSAGE:
await self.send_msg({"user_id": int(target)}, result_message)
elif message_type == MessageType.GROUP_MESSAGE:
await self.send_msg({"group_id": int(target)}, result_message)
elif message_type == MessageType.GUILD_MESSAGE:
await self.send_msg({"channel_id": int(target)}, result_message)
def convert_message(self, message: Union[GroupMessage, FriendMessage, GuildMessage]) -> AstrBotMessage:
abm = AstrBotMessage()
abm.type = MessageType(message.type)
abm.raw_message = message
abm.message_id = message.message_id
plain_content = ""
for i in message.message:
if isinstance(i, Plain):
plain_content += i.text
abm.message_str = plain_content.strip()
if message.type == MessageItemType.GuildMessage:
abm.self_id = str(message.self_tiny_id)
else:
abm.self_id = str(message.self_id)
abm.sender = MessageMember(
str(message.sender.user_id),
str(message.sender.nickname)
)
abm.tag = "nakuru"
abm.message = message.message
return abm
def wait_for_message(self, group_id) -> Union[GroupMessage, FriendMessage, GuildMessage]:
'''
Wait for the next message, raising an exception after a 300 s timeout.
'''
self.waiting[group_id] = ''
cnt = 0
while True:
if group_id in self.waiting and self.waiting[group_id] != '':
# 去掉
ret = self.waiting[group_id]
del self.waiting[group_id]
return ret
cnt += 1
if cnt > 300:
raise Exception("等待消息超时。")
time.sleep(1)


@@ -0,0 +1,405 @@
import botpy
import re
import os  # used below for TEST_MODE detection
import base64  # used below to decode base64:// images
import time
import traceback
import asyncio
import botpy.message
import botpy.types
import botpy.types.message
from botpy.types.message import Reference, Media
from botpy import Client
from util.io import save_temp_img, download_image_by_url
from . import Platform
from type.astrbot_message import *
from type.message_event import *
from type.command import *
from typing import Union, List, Dict
from nakuru.entities.components import *
from SparkleLogging.utils.core import LogManager
from logging import Logger
from astrbot.message.handler import MessageHandler
from util.cmd_config import PlatformConfig, QQOfficialPlatformConfig
logger: Logger = LogManager.GetLogger(log_name='astrbot')
# QQ 机器人官方框架
class botClient(Client):
def set_platform(self, platform: 'QQOfficial'):
self.platform = platform
# 收到群消息
async def on_group_at_message_create(self, message: botpy.message.GroupMessage):
abm = self.platform._parse_from_qqofficial(message, MessageType.GROUP_MESSAGE)
await self.platform.handle_msg(abm)
# 收到频道消息
async def on_at_message_create(self, message: botpy.message.Message):
# 转换层
abm = self.platform._parse_from_qqofficial(message, MessageType.GUILD_MESSAGE)
await self.platform.handle_msg(abm)
# 收到私聊消息
async def on_direct_message_create(self, message: botpy.message.DirectMessage):
# 转换层
abm = self.platform._parse_from_qqofficial(message, MessageType.FRIEND_MESSAGE)
await self.platform.handle_msg(abm)
# 收到 C2C 消息
async def on_c2c_message_create(self, message: botpy.message.C2CMessage):
abm = self.platform._parse_from_qqofficial(message, MessageType.FRIEND_MESSAGE)
await self.platform.handle_msg(abm)
class QQOfficial(Platform):
def __init__(self, context: Context,
message_handler: MessageHandler,
platform_config: PlatformConfig,
test_mode = False) -> None:
super().__init__("qqofficial", context)
assert isinstance(platform_config, QQOfficialPlatformConfig), "qq_official: 无法识别的配置类型。"
self.loop = asyncio.new_event_loop()
asyncio.set_event_loop(self.loop)
self.message_handler = message_handler
self.waiting: dict = {}
self.context = context
self.config = platform_config
self.admins = context.config_helper.admins_id
self.appid = platform_config.appid
self.secret = platform_config.secret
self.unique_session = context.config_helper.platform_settings.unique_session
qq_group = platform_config.enable_group_c2c
guild_dm = platform_config.enable_guild_direct_message
if qq_group:
self.intents = botpy.Intents(
public_messages=True,
public_guild_messages=True,
direct_message=guild_dm
)
else:
self.intents = botpy.Intents(
public_guild_messages=True,
direct_message=guild_dm
)
self.client = botClient(
intents=self.intents,
bot_log=False,
timeout=20,
)
self.client.set_platform(self)
self.test_mode = os.environ.get('TEST_MODE', 'off') == 'on'
async def _parse_to_qqofficial(self, message: List[BaseMessageComponent], is_group: bool = False):
plain_text = ""
image_path = None # only one img supported
for i in message:
if isinstance(i, Plain):
plain_text += i.text
elif isinstance(i, Image) and not image_path:
if i.path:
image_path = i.path
elif i.file and i.file.startswith("base64://"):
img_data = base64.b64decode(i.file[9:])
image_path = save_temp_img(img_data)
elif i.file and i.file.startswith("http"):
# 如果是群消息,不需要下载
image_path = await download_image_by_url(i.file) if not is_group else i.file
return plain_text, image_path
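# Editor's note (illustrative): the helper above flattens a component chain into the
# (plain_text, image_path) pair used by the official API, keeping only the first image.
# Paths are made-up sample values, and it is assumed here that Image.fromFileSystem
# populates the component's `path`, as the branch above expects.
#
#   chain = [Plain(text="look: "), Image.fromFileSystem("/tmp/a.png")]
#   text, img = await self._parse_to_qqofficial(chain)
#   # text == "look: ", img == "/tmp/a.png"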
def _parse_from_qqofficial(self, message: Union[botpy.message.Message, botpy.message.GroupMessage],
message_type: MessageType):
abm = AstrBotMessage()
abm.type = message_type
abm.timestamp = int(time.time())
abm.raw_message = message
abm.message_id = message.id
abm.tag = "qqofficial"
msg: List[BaseMessageComponent] = []
if isinstance(message, botpy.message.GroupMessage) or isinstance(message, botpy.message.C2CMessage):
if isinstance(message, botpy.message.GroupMessage):
abm.sender = MessageMember(
message.author.member_openid,
""
)
else:
abm.sender = MessageMember(
message.author.user_openid,
""
)
abm.message_str = message.content.strip()
abm.self_id = "unknown_selfid"
msg.append(Plain(abm.message_str))
if message.attachments:
for i in message.attachments:
if i.content_type.startswith("image"):
url = i.url
if not url.startswith("http"):
url = "https://"+url
img = Image.fromURL(url)
msg.append(img)
abm.message = msg
elif isinstance(message, botpy.message.Message) or isinstance(message, botpy.message.DirectMessage):
try:
abm.self_id = str(message.mentions[0].id)
except:
abm.self_id = ""
plain_content = message.content.replace(
"<@!"+str(abm.self_id)+">", "").strip()
msg.append(Plain(plain_content))
if message.attachments:
for i in message.attachments:
if i.content_type.startswith("image"):
url = i.url
if not url.startswith("http"):
url = "https://"+url
img = Image.fromURL(url)
msg.append(img)
abm.message = msg
abm.message_str = plain_content
abm.sender = MessageMember(
str(message.author.id),
str(message.author.username)
)
else:
raise ValueError(f"Unknown message type: {message_type}")
return abm
def run(self):
return self.client.start(
appid=self.appid,
secret=self.secret
)
async def handle_msg(self, message: AstrBotMessage):
assert isinstance(message.raw_message, (botpy.message.Message,
botpy.message.GroupMessage, botpy.message.DirectMessage, botpy.message.C2CMessage))
is_group = message.type != MessageType.FRIEND_MESSAGE
_t = "/私聊" if not is_group else ""
logger.info(
f"{message.sender.nickname}({message.sender.user_id}{_t}) -> {self.parse_message_outline(message)}")
# 解析出 session_id
if self.unique_session or not is_group:
session_id = message.sender.user_id
else:
if message.type == MessageType.GUILD_MESSAGE:
session_id = message.raw_message.channel_id
elif message.type == MessageType.GROUP_MESSAGE:
session_id = str(message.raw_message.group_openid)
else:
session_id = str(message.raw_message.author.id)
message.session_id = session_id
# 解析出 role
sender_id = message.sender.user_id
if sender_id in self.admins:
role = 'admin'
else:
role = 'member'
# construct astrbot message event
ame = AstrMessageEvent.from_astrbot_message(message, self.context, "qqofficial", session_id, role)
message_result = await self.message_handler.handle(ame)
if not message_result:
return
ret = await self.reply_msg(message, message_result.result_message, message_result.use_t2i)
if message_result.callback:
message_result.callback()
# 如果是等待回复的消息
if session_id in self.waiting and self.waiting[session_id] == '':
self.waiting[session_id] = message
return ret
async def reply_msg(self,
message: AstrBotMessage,
result_message: List[BaseMessageComponent],
use_t2i: bool = None):
'''
Reply to the incoming message (passive reply; guild, guild DM, QQ group and C2C messages are all handled here).
'''
source = message.raw_message
assert isinstance(source, (botpy.message.Message,
botpy.message.GroupMessage, botpy.message.DirectMessage, botpy.message.C2CMessage))
logger.info(
f"{message.sender.nickname}({message.sender.user_id}) <- {self.parse_message_outline(result_message)}")
plain_text = ''
image_path = ''
msg_ref = None
rendered_images = []
if (use_t2i or (use_t2i is None and self.context.config_helper.t2i)) and isinstance(result_message, list):
rendered_images = await self.convert_to_t2i_chain(result_message)
if isinstance(result_message, list):
plain_text, image_path = await self._parse_to_qqofficial(result_message, message.type == MessageType.GROUP_MESSAGE)
else:
plain_text = result_message
if source and not image_path: # file_image与message_reference不能同时传入
msg_ref = Reference(message_id=source.id,
ignore_get_message_error=False)
# At this point we have plain_text, image_path and msg_ref
data = {
'content': plain_text,
'msg_id': message.message_id,
'message_reference': msg_ref
}
if isinstance(message.raw_message, botpy.message.GroupMessage):
data['group_openid'] = str(source.group_openid)
elif isinstance(message.raw_message, botpy.message.Message):
data['channel_id'] = source.channel_id
elif isinstance(message.raw_message, botpy.message.DirectMessage):
data['guild_id'] = source.guild_id
elif isinstance(message.raw_message, botpy.message.C2CMessage):
data['openid'] = source.author.user_openid
if image_path:
data['file_image'] = image_path
if rendered_images:
# 文转图
_data = data.copy()
_data['content'] = ''
_data['file_image'] = rendered_images[0].file
_data['message_reference'] = None
try:
return await self._reply(**_data)
except BaseException as e:
logger.warn(traceback.format_exc())
logger.warn(f"以文本转图片的形式回复消息时发生错误: {e},将尝试默认方式。")
try:
return await self._reply(**data)
except BaseException as e:
logger.error(traceback.format_exc())
# split an over-long message in two and send both halves
if "msg over length" in str(e):
split_res = [plain_text[:len(plain_text)//2],
plain_text[len(plain_text)//2:]]
last = None
for part in split_res:
data['content'] = part
last = await self._reply(**data)
return last
else:
try:
# 防止被qq频道过滤消息
plain_text = plain_text.replace(".", " . ")
return await self._reply(**data)
except BaseException as e:
try:
data['content'] = str.join(" ", plain_text)
return await self._reply(**data)
except BaseException as e:
plain_text = re.sub(
r'(https|http)?:\/\/(\w|\.|\/|\?|\=|\&|\%)*\b', '[被隐藏的链接]', str(e), flags=re.MULTILINE)
plain_text = plain_text.replace(".", "·")
data['content'] = plain_text
return await self._reply(**data)
async def _reply(self, **kwargs):
await self.record_metrics()
if 'group_openid' in kwargs or 'openid' in kwargs:
# QQ群组消息
if 'file_image' in kwargs and kwargs['file_image']:
file_image_path = kwargs['file_image'].replace("file:///", "")
if file_image_path:
if file_image_path.startswith("http"):
image_url = file_image_path
else:
logger.debug(f"上传图片: {file_image_path}")
image_url = await self.context.image_uploader.upload_image(file_image_path)
logger.debug(f"上传成功: {image_url}")
if 'group_openid' in kwargs:
media = await self.client.api.post_group_file(kwargs['group_openid'], 1, image_url)
elif 'openid' in kwargs:
media = await self.client.api.post_c2c_file(kwargs['openid'], 1, image_url)
del kwargs['file_image']
kwargs['media'] = media
logger.debug(f"发送群图片: {media}")
kwargs['msg_type'] = 7 # 富媒体
if self.test_mode:
return kwargs
if 'group_openid' in kwargs:
await self.client.api.post_group_message(**kwargs)
elif 'openid' in kwargs:
await self.client.api.post_c2c_message(**kwargs)
elif 'channel_id' in kwargs:
# 频道消息
if 'file_image' in kwargs and kwargs['file_image']:
kwargs['file_image'] = kwargs['file_image'].replace("file:///", "")
# 频道消息发图只支持本地
if kwargs['file_image'].startswith("http"):
kwargs['file_image'] = await download_image_by_url(kwargs['file_image'])
if self.test_mode:
return kwargs
await self.client.api.post_message(**kwargs)
elif 'guild_id' in kwargs:
# 频道私聊消息
if 'file_image' in kwargs and kwargs['file_image']:
kwargs['file_image'] = kwargs['file_image'].replace("file:///", "")
if kwargs['file_image'].startswith("http"):
kwargs['file_image'] = await download_image_by_url(kwargs['file_image'])
if self.test_mode:
return kwargs
await self.client.api.post_dms(**kwargs)
else:
raise ValueError("Unknown target type.")
async def send_msg(self, target: Dict[str, str], result_message: CommandResult):
'''
Proactively send a message to a guild user, a QQ group, a guild channel, or a QQ (C2C) user.
`target` takes a dict:
- for a QQ group, add the key `group_openid`;
- for a guild channel message, add the key `channel_id`;
- for a guild direct message, add the key `guild_id`;
- for a QQ user (C2C), add the key `openid`.
'''
plain_text, image_path = await self._parse_to_qqofficial(result_message.message_chain)
payload = {
'content': plain_text,
**target
}
if image_path:
payload['file_image'] = image_path
await self._reply(**payload)
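# Illustrative usage sketch (editor's addition): `result` is assumed to be a
# CommandResult with a populated `message_chain`; the openids/ids are placeholders.
#
#   await platform.send_msg({'group_openid': 'A1B2C3'}, result)   # QQ group
#   await platform.send_msg({'openid': 'D4E5F6'}, result)         # QQ user (C2C)
#   await platform.send_msg({'channel_id': '123456'}, result)     # guild channel
#   await platform.send_msg({'guild_id': '654321'}, result)       # guild direct message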
async def send_msg_new(self, message_type: MessageType, target: str, result_message: CommandResult):
raise NotImplementedError("qqofficial 不支持此方法。")
def wait_for_message(self, channel_id: int) -> AstrBotMessage:
'''
Wait for the next message for the given channel_id, raising an exception after a 300 s timeout.
'''
self.waiting[channel_id] = ''
cnt = 0
while True:
if channel_id in self.waiting and self.waiting[channel_id] != '':
# 去掉
ret = self.waiting[channel_id]
del self.waiting[channel_id]
return ret
cnt += 1
if cnt > 300:
raise Exception("等待消息超时。")
time.sleep(1)


@@ -1,217 +0,0 @@
import io
import botpy
from PIL import Image as PILImage
from botpy.message import Message, DirectMessage
import re
import asyncio
import requests
from cores.qqbot.personality import personalities
from util import general_utils as gu
from nakuru.entities.components import Plain, At, Image
from botpy.types.message import Reference
from botpy import Client
import time
class NakuruGuildMember():
tiny_id: int # 发送者识别号
user_id: int # 发送者识别号
title: str
nickname: str # 昵称
role: int # 角色
icon_url: str # 头像url
class NakuruGuildMessage():
type: str = "GuildMessage"
self_id: int # bot的qq号
self_tiny_id: int # bot的qq号
sub_type: str # 消息类型
message_id: str # 消息id
guild_id: int # 频道号
channel_id: int # 子频道号
user_id: int # 发送者qq号
message: list # 消息内容
sender: NakuruGuildMember # 发送者信息
raw_message: Message
def __str__(self) -> str:
return str(self.__dict__)
class QQChan():
def __init__(self, cnt: dict = None) -> None:
self.qqchan_cnt = 0
self.waiting: dict = {}
def get_cnt(self):
return self.qqchan_cnt
def set_cnt(self, cnt):
self.qqchan_cnt = cnt
def run_bot(self, botclient: Client, appid, token):
intents = botpy.Intents(public_guild_messages=True, direct_message=True)
self.client = botclient
self.client.run(appid=appid, token=token)
# gocq-频道SDK兼容层
def gocq_compatible_send(self, gocq_message_chain: list):
plain_text = ""
image_path = None # only one img supported
for i in gocq_message_chain:
if isinstance(i, Plain):
plain_text += i.text
elif isinstance(i, Image) and image_path == None:
if i.path is not None:
image_path = i.path
else:
image_path = i.file
return plain_text, image_path
# gocq-频道SDK兼容层
def gocq_compatible_receive(self, message: Message) -> NakuruGuildMessage:
ngm = NakuruGuildMessage()
try:
ngm.self_id = message.mentions[0].id
ngm.self_tiny_id = message.mentions[0].id
except:
ngm.self_id = 0
ngm.self_tiny_id = 0
ngm.sub_type = "normal"
ngm.message_id = message.id
ngm.guild_id = int(message.guild_id)
ngm.channel_id = int(message.channel_id)
ngm.user_id = int(message.author.id)
msg = []
plain_content = message.content.replace("<@!"+str(ngm.self_id)+">", "").strip()
msg.append(Plain(plain_content))
if message.attachments:
for i in message.attachments:
if i.content_type.startswith("image"):
url = i.url
if not url.startswith("http"):
url = "https://"+url
img = Image.fromURL(url)
msg.append(img)
ngm.message = msg
ngm.sender = NakuruGuildMember()
ngm.sender.tiny_id = int(message.author.id)
ngm.sender.user_id = int(message.author.id)
ngm.sender.title = ""
ngm.sender.nickname = message.author.username
ngm.sender.role = 0
ngm.sender.icon_url = message.author.avatar
ngm.raw_message = message
return ngm
def send_qq_msg(self,
message: NakuruGuildMessage,
res: list):
'''
回复频道消息
'''
gu.log("回复QQ频道消息: "+str(res), level=gu.LEVEL_INFO, tag="QQ频道", max_len=500)
self.qqchan_cnt += 1
plain_text = ""
image_path = None
if isinstance(res, list):
# 兼容gocq
plain_text, image_path = self.gocq_compatible_send(res)
elif isinstance(res, str):
plain_text = res
# print(plain_text, image_path)
msg_ref = None
if message.raw_message is not None:
msg_ref = Reference(message_id=message.raw_message.id, ignore_get_message_error=False)
if image_path is not None:
msg_ref = None
if image_path.startswith("http"):
pic_res = requests.get(image_path, stream = True)
if pic_res.status_code == 200:
image = PILImage.open(io.BytesIO(pic_res.content))
image_path = gu.save_temp_img(image)
try:
# reply_res = asyncio.run_coroutine_threadsafe(message.raw_message.reply(content=str(plain_text), message_reference = msg_ref, file_image=image_path), self.client.loop)
reply_res = asyncio.run_coroutine_threadsafe(self.client.api.post_message(channel_id=str(message.channel_id),
content=str(plain_text),
msg_id=message.message_id,
file_image=image_path,
message_reference=msg_ref), self.client.loop)
reply_res.result()
except BaseException as e:
# 分割过长的消息
if "msg over length" in str(e):
split_res = []
split_res.append(plain_text[:len(plain_text)//2])
split_res.append(plain_text[len(plain_text)//2:])
for i in split_res:
reply_res = asyncio.run_coroutine_threadsafe(self.client.api.post_message(channel_id=str(message.channel_id),
content=str(i),
msg_id=message.message_id,
file_image=image_path,
message_reference=msg_ref), self.client.loop)
reply_res.result()
else:
# 发送qq信息
try:
# 防止被qq频道过滤消息
plain_text = plain_text.replace(".", " . ")
reply_res = asyncio.run_coroutine_threadsafe(self.client.api.post_message(channel_id=str(message.channel_id),
content=str(plain_text),
msg_id=message.message_id,
file_image=image_path,
message_reference=msg_ref), self.client.loop).result() # 发送信息
except BaseException as e:
print("QQ频道API错误: \n"+str(e))
try:
# reply_res = asyncio.run_coroutine_threadsafe(message.raw_message.reply(content=str(str.join(" ", plain_text)), message_reference = msg_ref, file_image=image_path), self.client.loop)
reply_res = asyncio.run_coroutine_threadsafe(self.client.api.post_message(channel_id=str(message.channel_id),
content=str(str.join(" ", plain_text)),
msg_id=message.message_id,
file_image=image_path,
message_reference=msg_ref), self.client.loop).result()
except BaseException as e:
plain_text = re.sub(r'(https|http)?:\/\/(\w|\.|\/|\?|\=|\&|\%)*\b', '[被隐藏的链接]', str(e), flags=re.MULTILINE)
plain_text = plain_text.replace(".", "·")
reply_res = asyncio.run_coroutine_threadsafe(self.client.api.post_message(channel_id=str(message.channel_id),
content=plain_text,
msg_id=message.message_id,
file_image=image_path,
message_reference=msg_ref), self.client.loop).result()
# send(message, f"QQ频道API错误{str(e)}\n下面是格式化后的回答\n{f_res}")
def push_message(self, channel_id: int, message_chain: list, message_id: int = None):
'''
推送消息, 如果有 message_id那么就是回复消息。
'''
_n = NakuruGuildMessage()
_n.channel_id = channel_id
_n.message_id = message_id
self.send_qq_msg(_n, message_chain)
def send(self, message_obj, message_chain: list):
'''
发送信息
'''
self.send_qq_msg(message_obj, message_chain)
def wait_for_message(self, channel_id: int) -> NakuruGuildMessage:
'''
等待指定 channel_id 的下一条信息,超时 300s 后抛出异常
'''
self.waiting[channel_id] = ''
cnt = 0
while True:
if channel_id in self.waiting and self.waiting[channel_id] != '':
# 去掉
ret = self.waiting[channel_id]
del self.waiting[channel_id]
return ret
cnt += 1
if cnt > 300:
raise Exception("等待消息超时。")
time.sleep(1)


@@ -1,188 +0,0 @@
import requests
import asyncio
import websockets
from websockets import WebSocketClientProtocol
import json
import inspect
from typing import Callable, Awaitable, Union
from pydantic import BaseModel
import datetime
class Event(BaseModel):
GroupMessage: str = "GuildMessage"
class Sender(BaseModel):
user_id: str
member_openid: str
class MessageComponent(BaseModel):
type: str
class PlainText(MessageComponent):
text: str
class Image(MessageComponent):
path: str
file: str
url: str
class MessageChain(list):
def append(self, __object: MessageComponent) -> None:
if not isinstance(__object, MessageComponent):
raise TypeError("不受支持的消息链元素类型。回复的消息链必须是 MessageComponent 的子类。")
return super().append(__object)
def insert(self, __index: int, __object: MessageComponent) -> None:
if not isinstance(__object, MessageComponent):
raise TypeError("不受支持的消息链元素类型。回复的消息链必须是 MessageComponent 的子类。")
return super().insert(__index, __object)
def parse_from_nakuru(self, nakuru_message_chain: Union[list, str]) -> None:
if isinstance(nakuru_message_chain, str):
self.append(PlainText(type='Plain', text=nakuru_message_chain))
else:
for i in nakuru_message_chain:
if i['type'] == 'Plain':
self.append(PlainText(type='Plain', text=i['text']))
elif i['type'] == 'Image':
self.append(Image(path=i['path'], file=i['file'], url=i['url']))
class Message(BaseModel):
type: str
user_id: str
member_openid: str
message_id: str
group_id: str
group_openid: str
content: str
message: MessageChain
time: int
sender: Sender
class UnofficialQQBotSDK:
GET_APP_ACCESS_TOKEN_URL = "https://bots.qq.com/app/getAppAccessToken"
OPENAPI_BASE_URL = "https://api.sgroup.qq.com"
def __init__(self, appid: str, client_secret: str) -> None:
self.appid = appid
self.client_secret = client_secret
self.events: dict[str, Awaitable] = {}
def run_bot(self) -> None:
self.__get_access_token()
self.__get_wss_endpoint()
asyncio.get_event_loop().run_until_complete(self.__ws_client())
def __get_access_token(self) -> None:
res = requests.post(self.GET_APP_ACCESS_TOKEN_URL, json={
"appId": self.appid,
"clientSecret": self.client_secret
}, headers={
"Content-Type": "application/json"
})
res = res.json()
code = res['code'] if 'code' in res else 1
if 'access_token' not in res:
raise Exception(f"获取 access_token 失败。原因:{res['message'] if 'message' in res else '未知'}")
self.access_token = 'QQBot ' + res['access_token']
def __auth_header(self) -> str:
return {
'Authorization': self.access_token,
'X-Union-Appid': self.appid,
}
def __get_wss_endpoint(self):
res = requests.get(self.OPENAPI_BASE_URL + "/gateway", headers=self.__auth_header())
self.wss_endpoint = res.json()['url']
# print("wss_endpoint: " + self.wss_endpoint)
async def __behav_heartbeat(self, ws: WebSocketClientProtocol, t: int):
while True:
await asyncio.sleep(t - 1)
try:
await ws.send(json.dumps({
"op": 1,
"d": self.s
}))
except:
print("heartbeat error.")
async def __handle_msg(self, ws: WebSocketClientProtocol, msg: dict):
if msg['op'] == 10:
asyncio.get_event_loop().create_task(self.__behav_heartbeat(ws, msg['d']['heartbeat_interval'] / 1000))
# 鉴权获得session
await ws.send(json.dumps({
"op": 2,
"d": {
"token": self.access_token,
"intents": 33554432,
"shard": [0, 1],
"properties": {
"$os": "linux",
"$browser": "my_library",
"$device": "my_library"
}
}
}))
if msg['op'] == 0:
# ready
data = msg['d']
event_typ: str = msg['t'] if 't' in msg else None
if event_typ == 'GROUP_AT_MESSAGE_CREATE':
if 'GroupMessage' in self.events:
coro = self.events['GroupMessage']
else:
return
message_chain = MessageChain()
message_chain.append(PlainText(type="Plain", text=data['content']))
group_message = Message(
type='GroupMessage',
user_id=data['author']['id'],
member_openid=data['author']['member_openid'],
message_id=data['id'],
group_id=data['group_id'],
group_openid=data['group_openid'],
content=data['content'],
# 2023-11-24T19:51:11+08:00
time=int(datetime.datetime.strptime(data['timestamp'], "%Y-%m-%dT%H:%M:%S%z").timestamp()),
sender=Sender(
user_id=data['author']['id'],
member_openid=data['author']['member_openid']
),
message=message_chain
)
await coro(self, group_message)
async def send(self, message: Message, message_chain: MessageChain) -> None:
# todo: 消息链转换支持更多类型。
plain_text = ""
for i in message_chain:
if isinstance(i, PlainText):
plain_text += i.text
requests.post(self.OPENAPI_BASE_URL + f"/v2/groups/{message.group_openid}/messages", headers=self.__auth_header(), json={
"content": plain_text,
"message_type": 0,
"msg_id": message.message_id
})
async def __ws_client(self):
self.s = 0
async with websockets.connect(self.wss_endpoint) as websocket:
while True:
msg = await websocket.recv()
msg = json.loads(msg)
if 's' in msg:
self.s = msg['s']
await self.__handle_msg(websocket, msg)
def on(self, event: str) -> None:
def wrapper(func: Awaitable):
if inspect.iscoroutinefunction(func) == False:
raise TypeError("func must be a coroutine function")
self.events[event] = func
return wrapper

26
model/plugin/command.py Normal file

@@ -0,0 +1,26 @@
from dataclasses import dataclass
from type.register import RegisteredPlugins
from typing import List, Union, Callable
from SparkleLogging.utils.core import LogManager
from logging import Logger
logger: Logger = LogManager.GetLogger(log_name='astrbot')
@dataclass
class CommandRegisterRequest():
command_name: str
description: str
priority: int
handler: Callable
use_regex: bool = False
plugin_name: str = None
ignore_prefix: bool = False
class PluginCommandBridge():
def __init__(self, cached_plugins: RegisteredPlugins):
self.plugin_commands_waitlist: List[CommandRegisterRequest] = []
self.cached_plugins = cached_plugins
def register_command(self, plugin_name, command_name, description, priority, handler, use_regex=False, ignore_prefix=False):
self.plugin_commands_waitlist.append(CommandRegisterRequest(command_name, description, priority, handler, use_regex, plugin_name, ignore_prefix))
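# Illustrative usage sketch (editor's addition): a plugin that has been handed this
# bridge (named `bridge` here only for illustration) queues a command registration;
# the handler's exact signature is defined elsewhere and is assumed here.
#
#   bridge.register_command(plugin_name="my_plugin",
#                           command_name="hello",
#                           description="say hello",
#                           priority=10,
#                           handler=my_handler)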

Some files were not shown because too many files have changed in this diff.