Compare commits

487 Commits

Author SHA1 Message Date
Soulter
6ed7559518 Merge pull request #174 from Soulter/refactor-v3.3.0
Rewrite the project to improve stability
2024-07-26 18:24:33 +08:00
Soulter
d977dbe9a7 update version 2024-07-26 06:24:11 -04:00
Soulter
17fc761c61 chore: update default plugin 2024-07-26 05:15:41 -04:00
Soulter
af878f2ed3 fix: fix aiocqhttp preventing ctrl+c from exiting the bot
perf: support registering tasks via context
2024-07-26 05:02:29 -04:00
Soulter
bb2164c324 perf: add message_handler to context 2024-07-25 12:58:45 -04:00
Soulter
0496becc50 perf: improve the stability of sending images via aiocqhttp 2024-07-25 12:33:31 -04:00
Soulter
618f8aa7d2 fix: fix bugs in some commands 2024-07-25 10:44:17 -04:00
Soulter
c57f711c48 update: metrics refactoring 2024-07-24 09:48:25 -04:00
Soulter
4edd11f2f7 fix: fix some bugs 2024-07-24 09:19:43 -04:00
Soulter
a2cf058951 update: refactor codes 2024-07-24 18:40:08 +08:00
Soulter
d52eb10ddd chore: remove large font files to shrink the source code size 2024-07-07 21:37:44 +08:00
Soulter
4b6dae71fc update: update the default plugin helloworld 2024-07-07 21:00:18 +08:00
Soulter
ddad30c22e feat: support uploading plugins locally 2024-07-07 20:59:12 +08:00
Soulter
77067c545c feat: update via compressed archive files 2024-07-07 18:26:58 +08:00
Soulter
465d283cad Update README.md 2024-06-23 11:23:17 +08:00
Soulter
05071144fb fix: fix text-to-image issues 2024-06-09 08:56:52 -04:00
Soulter
a4e7904953 chore: clean code 2024-06-03 20:40:18 -04:00
Soulter
986a8c7554 Update README.md 2024-06-03 21:18:53 +08:00
Soulter
9272843b77 Update README.md 2024-06-03 21:18:00 +08:00
Soulter
542d4bc703 typo: fix t2i typo 2024-06-03 08:47:51 -04:00
Soulter
e3640fdac9 perf: improve the output of the update, help, and other commands 2024-06-03 08:33:17 -04:00
Soulter
f64ab4b190 chore: remove some obsolete methods 2024-06-03 05:54:40 -04:00
Soulter
bd571e1577 feat: provide a new text-to-image style 2024-06-03 05:51:44 -04:00
Soulter
e4a5cbd893 perf: improve stability when loading plugins 2024-06-03 00:20:56 -04:00
Soulter
7a9fd7fd1e fix: fix config-file-not-found error 2024-06-02 23:14:48 -04:00
Soulter
d9b60108db Update README.md 2024-05-30 18:11:57 +08:00
Soulter
8455c8b4ed Update README.md 2024-05-30 18:03:59 +08:00
Soulter
5c2e7099fc Update README.md 2024-05-26 21:38:32 +08:00
Soulter
1fd1d55895 Update config.py 2024-05-26 21:31:26 +08:00
Soulter
5ce4137e75 fix: fix the model command 2024-05-26 21:15:33 +08:00
Soulter
d49179541e feat: pass ctx to plugins' init method 2024-05-26 21:10:19 +08:00
Soulter
676f258981 perf: terminate child processes after restart 2024-05-26 21:09:23 +08:00
Soulter
fa44749240 fix: fix relative-path issue preventing the Windows launcher from installing dependencies 2024-05-26 18:15:25 +08:00
Soulter
6c856f9da2 fix(typo): fix a typo in the plugin registrar that prevented registering message-platform plugins 2024-05-26 18:07:07 +08:00
Soulter
e8773cea7f fix: fix config file not being migrated correctly 2024-05-25 20:59:37 +08:00
Soulter
4d36ffcb08 fix: improve handling of plugin results 2024-05-25 18:46:38 +08:00
Soulter
c653e492c4 Merge pull request #164 from Soulter/stat-upload-perf
Improve the /models command
2024-05-25 18:35:56 +08:00
Soulter
f08de1f404 perf: add the models command to the help 2024-05-25 18:34:08 +08:00
Soulter
1218691b61 perf: relax restrictions on the model command to allow custom model names; persist the selected model 2024-05-25 18:29:01 +08:00
Soulter
61fc27ff79 Merge pull request #163 from Soulter/stat-upload-perf
Improve the statistics data structure
2024-05-25 18:28:08 +08:00
Soulter
123ee24f7e fix: stat perf 2024-05-25 18:01:16 +08:00
Soulter
52c9045a28 feat: improve the statistics data structure 2024-05-25 17:47:41 +08:00
Soulter
f00f1e8933 fix: drawing error 2024-05-24 13:33:02 +08:00
Soulter
8da4433e57 chore: change related fields 2024-05-21 08:44:05 +08:00
Soulter
7babb87934 perf: change library load order 2024-05-21 08:41:46 +08:00
Soulter
f67b171385 perf: move the database into the data directory 2024-05-19 17:10:11 +08:00
Soulter
1780d1355d perf: switch all internal pip calls to the Aliyun mirror; improve plugin dependency update logic 2024-05-19 16:45:08 +08:00
Soulter
5a3390e4f3 fix: force update 2024-05-19 16:06:47 +08:00
Soulter
337d96b41d Merge pull request #160 from Soulter/dev_default_openai_refactor
Improve the built-in OpenAI LLM interaction, personas, and web search
2024-05-19 15:23:19 +08:00
Soulter
38a1dfea98 fix: web content scraper add proxy 2024-05-19 15:08:22 +08:00
Soulter
fbef73aeec fix: websearch encoding set to utf-8 2024-05-19 14:42:28 +08:00
Soulter
d6214c2b7c fix: web search 2024-05-19 12:55:54 +08:00
Soulter
d58c86f6fc perf: improve websearch; restructure the project 2024-05-19 12:46:07 +08:00
Soulter
ea34c20198 perf: improve persona and LVM handling 2024-05-18 10:34:35 +08:00
Soulter
934ca94e62 refactor: rewrite the LLM OpenAI module 2024-05-17 22:56:44 +08:00
Soulter
1775327c2e chore: refact openai official 2024-05-17 09:07:11 +08:00
Soulter
707fcad8b4 feat: models command to list GPT models 2024-05-17 00:06:49 +08:00
Soulter
f143c5afc6 fix: fix error in the plugin v subcommand 2024-05-16 23:11:07 +08:00
Soulter
99f94b2611 fix: fix some commands being uncallable 2024-05-16 23:04:47 +08:00
Soulter
e39c1f9116 remove: remove automatic switching to multimodal models 2024-05-16 22:46:50 +08:00
Soulter
235e0b9b8f fix: gocq logging 2024-05-09 13:24:31 +08:00
Soulter
d5a9bed8a4 fix(updator): IterableList object has no attribute origin
2024-05-08 19:18:21 +08:00
Soulter
d7dc8a7612 chore: add some logs; bump version 2024-05-08 19:12:23 +08:00
Soulter
08cd3ca40c perf: better log output;
fix: fix 404 on dashboard refresh
2024-05-08 19:01:36 +08:00
Soulter
a13562dcea fix: fix missing-config-file error when the launcher loads plugins that have configs 2024-05-08 16:28:30 +08:00
Soulter
d7a0c0d1d0 Update requirements.txt 2024-05-07 15:58:51 +08:00
Soulter
c0729b2d29 fix: fix plugin reload issues 2024-04-22 19:04:15 +08:00
Soulter
a80f474290 fix: fix error when updating plugins 2024-04-22 18:36:56 +08:00
Soulter
699207dd54 update: version 2024-04-21 22:41:48 +08:00
Soulter
e7708010c9 fix: fix inability to reply to messages on the gocq platform 2024-04-21 22:39:09 +08:00
Soulter
f66091e08f 🎨: clean code 2024-04-21 22:20:23 +08:00
Soulter
03bb932f8f fix: fix dashboard error 2024-04-21 22:16:42 +08:00
Soulter
fbf8b349e0 update: helloworld 2024-04-21 22:13:27 +08:00
Soulter
e9278fce6a !! delete: remove all support for reverse-engineered ChatGPT 2024-04-21 22:12:09 +08:00
Soulter
9a7db956d5 fix: fix error caused by the readability dependency on 3.10.x 2024-04-21 16:40:02 +08:00
Soulter
13196dd667 perf: change package paths 2024-03-15 14:49:44 +08:00
Soulter
52b80e24d2 Merge remote-tracking branch 'refs/remotes/origin/master' 2024-03-15 14:29:48 +08:00
Soulter
7dff87e65d fix: fix inability to update to a specified version 2024-03-15 14:29:28 +08:00
Soulter
31ee64d1b2 Update docker-image.yml 2024-03-15 14:11:57 +08:00
Soulter
8e865b6918 fix: fix update not working when no LLM is configured 2024-03-15 14:05:16 +08:00
Soulter
66f91e5832 update: bump version number 2024-03-15 13:50:57 +08:00
Soulter
cd2d368f9c fix: fix updating to a specified version from the dashboard not working 2024-03-15 13:48:14 +08:00
Soulter
7736c1c9bd feat: official QQ bot API supports choosing whether to receive QQ group messages 2024-03-15 13:44:18 +08:00
Soulter
6728c0b7b5 chore: rename the package 2024-03-15 13:37:51 +08:00
Soulter
344f92e0e7 perf: unify internal base message objects into AstrBotMessage
feat: support QQ group messages via the official QQ API
2024-03-14 13:56:32 +08:00
Soulter
fdabfef6a7 update: version 2024-03-13 21:28:18 +08:00
Soulter
6c5718f134 fix: fix drawing error 2024-03-13 21:27:48 +08:00
Soulter
edfde51434 fix: fix 'instance of platform qqchan not found' error on the channel platform 2024-03-13 19:53:36 +08:00
Soulter
3fc1347bba fix: plugin register management 2024-03-12 20:00:02 +08:00
Soulter
e643eea365 perf: structure the plugin representation format; improve the plugin development API 2024-03-12 18:50:50 +08:00
Soulter
1af481f5f9 fix: function call with newer version 2024-03-07 17:35:21 +08:00
Soulter
317d1c4c41 fix: onebot protocol connection error 2024-03-05 14:03:46 +08:00
Soulter
a703860512 fix: plugin call 2024-03-05 13:52:44 +08:00
Soulter
1cd1c8ea0d feat: async rewrite
perf: improve web search answer formatting rules
2024-03-03 18:54:50 +08:00
Soulter
53ef3bbf4f fix: fix detection still failing after changing the cqhttp port 2024-02-19 19:04:40 +08:00
Soulter
ab7b8aad7c chore: delete llms 2024-02-12 23:28:12 +08:00
Soulter
c49213282b Merge remote-tracking branch 'refs/remotes/origin/master' 2024-02-12 23:18:11 +08:00
Soulter
3c87fc5b31 perf: clean code; move the keyword feature into the helloworld plugin 2024-02-12 23:17:55 +08:00
Soulter
9684508e1d Update README.md 2024-02-11 13:47:09 +08:00
Soulter
bb0edae200 Update README.md 2024-02-08 00:40:48 +08:00
Soulter
acb68a4a1e chore: update version identifier 2024-02-08 00:31:08 +08:00
Soulter
46dd6f3243 fix: fix dashboard failing to save config; fix help command failing to generate images
feat: support more standard plugin interfaces
2024-02-08 00:29:37 +08:00
Soulter
ecab072890 chore: bump version; clean up some unused variables 2024-02-07 17:41:10 +08:00
Soulter
148534d3c2 Merge remote-tracking branch 'refs/remotes/origin/master' 2024-02-07 16:45:11 +08:00
Soulter
1278f16973 feat: full plugin configuration support in the dashboard 2024-02-07 16:44:38 +08:00
Soulter
7d9b3c6c5c Update docker-image.yml 2024-02-07 13:05:52 +08:00
Soulter
83dcb5165c perf: improve dashboard config display;
feat: add a configuration API for plugins
2024-02-07 12:19:52 +08:00
Soulter
30862bb82f perf: improve update speed and flow 2024-02-06 19:18:53 +08:00
Soulter
6c0bda8feb Update README.md 2024-02-06 18:30:56 +08:00
Soulter
e14dece206 perf: improve project update logic 2024-02-06 17:45:02 +08:00
Soulter
680593d636 fix: fix web command prefix not working 2024-02-06 15:42:29 +08:00
Soulter
144440214f fix: fix drawing error 2024-02-06 12:56:41 +08:00
Soulter
6667b58a3f fix: fix some container issues 2024-02-06 12:48:57 +08:00
Soulter
b55d9533be chore: clean up some unused code 2024-02-05 23:46:46 +08:00
Soulter
3484fc60e6 fix: dashboard fake dead 2024-02-05 14:51:19 +08:00
Soulter
eac0265522 fix: fix error in channel DMs with per-session isolation 2024-02-05 14:45:32 +08:00
Soulter
ac74431633 fix: fix console not displaying correctly when accessing the dashboard remotely 2024-02-05 14:12:38 +08:00
Soulter
4c098200be fix: fix ws server error in Docker environments 2024-02-05 13:54:45 +08:00
Soulter
2cf18972f3 fix: fix error when saving config in the dashboard; fix channel DM error
perf: improve logging
2024-02-05 13:18:34 +08:00
Soulter
d522d2a6a9 Merge remote-tracking branch 'refs/remotes/origin/master' 2024-02-04 21:29:58 +08:00
Soulter
7079ce096f feat: dashboard supports log display
chore: reduce some log output
2024-02-04 21:28:03 +08:00
Soulter
5e8c5067b1 Update README.md 2024-01-16 00:32:07 +08:00
Soulter
570ff4e8b6 perf: improve Bing web search 2024-01-10 16:48:46 +08:00
Soulter
e2f1362a1f fix: fix the myid command being unavailable on the gocq platform 2024-01-09 22:25:52 +08:00
Soulter
3519e38211 perf: remove the default prompt 2024-01-07 14:51:43 +08:00
Soulter
08734250f7 feat: support text-to-image on channels 2024-01-05 18:59:02 +08:00
Soulter
e8407f6449 feat: add reverse-engineered LLM service config to the dashboard 2024-01-05 17:13:52 +08:00
Soulter
04f3400f83 perf: improve plugin collection flow 2024-01-03 20:19:31 +08:00
Soulter
89c8b3e7fc fix: fix error when @-mentioning the bot on gocq 2024-01-03 16:32:08 +08:00
Soulter
66294100ec fix: typo fix 2024-01-03 16:26:58 +08:00
Soulter
8ed8a23c8b fix: fix some message-response issues on gocq 2024-01-03 16:15:37 +08:00
Soulter
449b0b03b5 fix: fix nick_qq error 2024-01-03 16:00:51 +08:00
Soulter
d93754bf1d Update cmd_config.py 2024-01-03 15:46:11 +08:00
Soulter
a007a61ecc Update docker-image.yml 2024-01-02 16:44:58 +08:00
Soulter
e481377317 fix: fix some update issues 2024-01-01 12:46:22 +08:00
Soulter
4c5831c7b4 remove: delete the simhei font resources 2024-01-01 12:03:21 +08:00
Soulter
fc54b5237f feat: support setting an llm wake word 2024-01-01 11:48:55 +08:00
Soulter
f8f42678d1 fix: fix message send() not working correctly 2024-01-01 11:34:56 +08:00
Soulter
38b1f4128c Merge pull request #145 from Soulter/dev_platform_refact
Refactor message-platform-related code
2024-01-01 11:07:13 +08:00
Soulter
04fb4f88ad feat: refactor code 2023-12-30 20:08:28 +08:00
Soulter
4675f5df08 Create stale.yml 2023-12-28 14:12:25 +08:00
Soulter
34ee358d40 Update README.md 2023-12-28 14:01:53 +08:00
Soulter
c4cfd1a3e2 Update README.md 2023-12-28 13:18:47 +08:00
Soulter
5ac4748537 Merge pull request #143 from Soulter/dev_dashboard
[Feature] Dashboard functionality and general improvements
2023-12-28 13:03:31 +08:00
Soulter
2e5ec1d2dc Create docker-image.yml 2023-12-28 13:02:37 +08:00
Soulter
bac4c069d7 Update Dockerfile 2023-12-28 12:57:53 +08:00
Soulter
9d4a21a10b Update README.md 2023-12-27 00:09:15 +08:00
Soulter
dbeb41195d Update README.md 2023-12-26 23:11:27 +08:00
Soulter
71f4998458 fix: fix console 2023-12-26 09:29:25 +08:00
Soulter
40af5b7574 feat: support more config types 2023-12-26 09:25:22 +08:00
Soulter
e7a1020f82 Merge branch 'master' into dev_dashboard 2023-12-26 09:24:28 +08:00
Soulter
018e49ed95 Update README.md 2023-12-25 20:36:54 +08:00
Soulter
582cfe9f7c Update README.md 2023-12-23 19:03:02 +08:00
Soulter
db07f740b3 Update README.md 2023-12-23 17:03:31 +08:00
Soulter
bacbd351d7 Update README.md 2023-12-23 16:49:15 +08:00
Soulter
7e2c61c661 Update README.md 2023-12-23 16:19:57 +08:00
Soulter
3df30fd4de Update README.md 2023-12-23 16:18:47 +08:00
Soulter
92789ffdc9 Merge remote-tracking branch 'refs/remotes/origin/master' 2023-12-23 14:09:20 +08:00
Soulter
09b746cdec feat: improve plugin and command return interfaces 2023-12-23 14:08:44 +08:00
Soulter
8ace7b59e3 Update requirements.txt 2023-12-23 00:21:35 +08:00
Soulter
1fc0248d8f feat: plugin install and uninstall 2023-12-22 16:00:46 +08:00
Soulter
57bde33bfe perf: improve plugin code structure.
fix: fix thread spinning indefinitely after a plugin is uninstalled.
2023-12-21 14:00:29 +08:00
Soulter
1b1e558a3b feat: dashboard user login and password reset 2023-12-20 19:13:38 +08:00
Soulter
c5c7e686d0 feat: IP info command 2023-12-20 16:14:42 +08:00
Soulter
bd28f880f6 Merge remote-tracking branch 'refs/remotes/origin/master' 2023-12-19 18:44:44 +08:00
Soulter
fe2ab69773 feat: fall back to Sogou when Bing web search fails 2023-12-19 18:44:26 +08:00
Soulter
75f9d383cb feat: add some config entries 2023-12-19 18:36:33 +08:00
Soulter
5fefba4583 feat: plugin display 2023-12-19 00:40:47 +08:00
Soulter
780d126437 Merge remote-tracking branch 'refs/remotes/origin/master' 2023-12-18 20:18:41 +08:00
Soulter
4057dd9f5b delete: test module 2023-12-18 20:18:27 +08:00
Soulter
b5f8df4bb6 feat: implement dashboard home page features 2023-12-17 14:21:34 +08:00
Soulter
5ace10d39f feat: network time 2023-12-15 13:49:55 +08:00
Soulter
07ecdedf0d Update README.md 2023-12-14 20:05:58 +08:00
Soulter
c2ca365312 📦 NEW: dashboard supports more config options 2023-12-14 19:59:15 +08:00
Soulter
8b9ca08903 Update README.md 2023-12-14 17:18:08 +08:00
Soulter
16e6b588f6 Merge branch 'master' into dev_dashboard 2023-12-14 17:12:21 +08:00
Soulter
3a1d5d8904 📦 NEW: /update checkout command supports switching code branches 2023-12-14 17:11:00 +08:00
Soulter
84d1293fd0 🐛 FIX: remove some unnecessary error throws 2023-12-14 16:41:05 +08:00
Soulter
a12be7fa77 feat: integrate the dashboard into the bot 2023-12-14 16:39:47 +08:00
Soulter
6eee4f678f feat: dashboard supports memory display and config updates 2023-12-13 22:47:17 +08:00
Soulter
0e53c95c06 feat: config 2023-12-13 18:35:50 +08:00
Soulter
3ba97ad0dc chore: dashboard update 2023-12-13 16:17:49 +08:00
Soulter
99ff8bc1f5 feat: dashboard partially 2023-12-12 20:23:39 +08:00
Soulter
63aa6ee9a5 feat: support deploying the project with Docker 2023-12-07 19:18:54 +08:00
Soulter
925a42e2c4 feat: fix startup failure under nohup and other no-stdout conditions 2023-12-07 15:30:50 +08:00
Soulter
8dc91cfed4 delete: remove screenshots 2023-12-07 11:44:19 +08:00
Soulter
9c6bdeea9d feat: drawing command supports DallE3 2023-12-04 13:50:49 +08:00
Soulter
9bc8ac10fa chore: remove some unhelpful logs 2023-12-02 16:19:41 +08:00
Soulter
3df3879954 feat: support setting a default persona 2023-11-30 12:46:29 +08:00
Soulter
be1f8e7075 feat: support operating the bot from the command line
fix: fix ctrl+c not exiting the program on Windows
2023-11-30 12:06:37 +08:00
Soulter
d602041ad0 Update README.md 2023-11-25 23:10:54 +08:00
Soulter
23882bcb8e Update README.md 2023-11-25 23:09:15 +08:00
Soulter
311178189f fix: fix unexpected QQ group bot startup and file BOM issues 2023-11-25 20:09:21 +08:00
Soulter
5a57526aab fix: fix some config file BOM issues 2023-11-25 19:56:11 +08:00
Soulter
450dd34f4d perf: disable forced ASCII when dumping config 2023-11-25 11:59:39 +08:00
Soulter
89ed31a888 feat: support setting llm_env_prompt in cmd_config to customize the environment prompt 2023-11-25 11:55:26 +08:00
Soulter
9fe031efe3 Merge remote-tracking branch 'refs/remotes/origin/master' 2023-11-25 11:51:47 +08:00
Soulter
baa57266b4 feat: initial integration with the official QQ group bot API 2023-11-25 11:50:32 +08:00
Soulter
3e4818d0ee feat: adapt some plugins 2023-11-23 17:50:40 +08:00
Soulter
b36747c728 fix: fix error when sending image-and-text messages on QQ Channel 2023-11-23 11:55:22 +08:00
Soulter
fdbe993913 fix: fix some message compatibility issues 2023-11-21 22:38:54 +08:00
Soulter
9c3c8ff2c4 feat: support proactive message replies on channels
fix: fix some issues
2023-11-20 23:45:04 +08:00
Soulter
aaefdab0aa fix: fix error when entering commands with no language model running 2023-11-20 14:56:28 +08:00
Soulter
f18a311bc2 chore: tidy some files 2023-11-18 15:17:42 +08:00
Soulter
ad9705f9c4 🐛: remove the legacy flag on the QQ SDK
🐛: fix some issues with the switch command
2023-11-18 15:09:22 +08:00
Soulter
fb0b626813 feat: balance the token ratio between requests and answers 2023-11-14 20:38:21 +08:00
Soulter
b48fbf10e1 perf: improve web search answer formatting 2023-11-14 20:34:23 +08:00
Soulter
4aa2eab8b6 fix: fix some token-counting issues 2023-11-14 11:40:30 +08:00
Soulter
3960a19bcb perf: add some comments 2023-11-14 11:30:08 +08:00
Soulter
b3cec4781b fix: fix typo in requirements 2023-11-14 11:17:35 +08:00
Soulter
8f0b0bf0d0 perf: standardize parameter passing to plugins' run function 2023-11-14 11:15:19 +08:00
Soulter
847672d7f1 Update README.md 2023-11-14 10:32:59 +08:00
Soulter
c7f2962654 Update README.md 2023-11-14 10:32:45 +08:00
Soulter
752201cb46 update: requirements.txt 2023-11-14 09:33:30 +08:00
Soulter
deebf61b5f feat: greatly improve information extraction accuracy for web search
perf: pre-compute tokens with tiktoken
2023-11-14 09:33:18 +08:00
Soulter
d5e5b06e86 perf: append 1-2 emoji to the end of replies 2023-11-13 23:05:19 +08:00
Soulter
cb5975c102 feat: 1. adapt to the new openai sdk
2. adapt to official function calling
2023-11-13 21:54:23 +08:00
Soulter
5b1aee1b4d feat: web search support prefix keyword call 2023-11-09 16:05:42 +00:00
Soulter
510c8b4236 feat: support gpt-4-vision-preview 2023-11-09 20:53:02 +08:00
Soulter
89fc7b0553 perf: rewrite parts of the code with async 2023-10-12 11:16:49 +08:00
Soulter
123c21fcb3 perf: plugin reload supports updating dependency libraries 2023-10-05 22:34:26 +08:00
Soulter
75d62d66f9 fix: fix possible send failures when sending folded messages 2023-10-05 21:38:35 +08:00
Soulter
23a8e989a5 perf: improve plugin loading 2023-10-05 13:38:10 +08:00
Soulter
9577e637f1 perf: improve code structure, stability, and plugin loading 2023-10-05 13:21:39 +08:00
Soulter
e51ef2201b Merge remote-tracking branch 'refs/remotes/origin/master' 2023-10-05 10:49:49 +08:00
Soulter
f4ae503abf perf: improve error messages and code structure 2023-10-05 10:48:35 +08:00
Soulter
3424b658f3 bugfixes 2023-10-02 10:35:51 +08:00
Soulter
3198f73f3d perf: clear warnings; adapt to the new launcher 2023-10-02 10:17:10 +08:00
Soulter
aa3262a8ab chore: fix some typos 2023-10-02 10:10:04 +08:00
Soulter
6acd7be547 perf: improve import mechanism for some libraries 2023-10-01 17:46:51 +08:00
Soulter
fb7669ddad perf: improve dependency installation 2023-10-01 16:20:51 +08:00
Soulter
f2c4ef126e perf: improve OpenAI model message truncation 2023-09-30 15:11:06 +08:00
Soulter
33dcc4c152 perf: truncate messages to 0.75x when the OpenAI model limit is exceeded 2023-09-30 15:06:57 +08:00
Soulter
b9e331ebd6 perf: switch web search to Google Search to improve results 2023-09-30 14:59:25 +08:00
Soulter
7832ec386e perf: improve web search 2023-09-30 14:06:50 +08:00
Soulter
b9828428cc perf: web search improvements 2023-09-30 13:37:10 +08:00
Soulter
da11034aec feat: support editing the config file via cmd_config 2023-09-29 10:06:41 +08:00
Soulter
578c9e0695 feat: support poke messages 2023-09-28 20:51:50 +08:00
Soulter
cc675a9b4f perf: expose more APIs to plugins 2023-09-28 20:12:39 +08:00
Soulter
08e7d4d0c6 fix: fix some over-limit errors
perf: improve web search stability and accuracy
2023-09-27 22:06:08 +08:00
Soulter
553f1b8d83 fix: fix web search error with official models 2023-09-27 21:14:03 +08:00
Soulter
73e7e2088d perf: improve error stack trace display 2023-09-27 21:02:50 +08:00
Soulter
e40c9de610 perf: improve chat session management 2023-09-27 16:42:39 +08:00
Soulter
2f4e0bb4f2 fix: fix persona disappearing after a while 2023-09-25 15:55:51 +08:00
Soulter
191976e22e fix: fix some permission issues 2023-09-25 13:55:00 +08:00
Soulter
52656b8586 perf: support configuring multiple admins 2023-09-25 13:51:12 +08:00
Soulter
998e29ded6 fix: myid displaying incorrectly 2023-09-25 13:43:33 +08:00
Soulter
5bbe3f12d6 feat: official OpenAI models support switching accounts 2023-09-25 13:25:38 +08:00
Soulter
56aea81ed7 Merge remote-tracking branch 'refs/remotes/origin/master' 2023-09-25 12:04:04 +08:00
Soulter
7b8a311dde fix: fix QQ Channel not replying to @ mentions when running under gocq
feat: support keeping the persona when resetting a session
perf: remove some useless log output
2023-09-25 12:03:17 +08:00
Soulter
b75d20a3e8 Update README.md 2023-09-20 10:46:09 +08:00
Soulter
67faa587b6 fix: 修复初次调用/keyword指令时报错文件不存在的bug 2023-09-20 10:31:31 +08:00
Soulter
15fde686d4 perf: streamline log output and redundant log files 2023-09-14 14:04:47 +08:00
Soulter
741284f6e8 perf: remove the flood of logs produced by the startup update check 2023-09-14 13:50:00 +08:00
Soulter
8352fc269b 1. fix QQ Channel being unable to send images 2023-09-14 08:39:05 +08:00
Soulter
5852f36557 1. gocq supports opting out of replying to group, DM, and channel messages
(set the gocq_react_xxx options in cmd_config.json);
2. the update command returns new version info after a successful upgrade
2023-09-10 09:03:26 +08:00
Soulter
cc1c723c12 fix: fix official OpenAI models failing to enable 2023-09-09 09:45:34 +08:00
Soulter
adf5cbfeba fix: improve web search stability 2023-09-08 16:41:37 +08:00
Soulter
d6d0516c9a feat: gocq server address can be customized in cmd_config 2023-09-08 14:19:07 +08:00
Soulter
8aab10aaf3 websearch bugfixes 2023-09-08 13:46:57 +08:00
Soulter
4fe5616ae1 Merge remote-tracking branch 'refs/remotes/origin/master' 2023-09-08 13:40:03 +08:00
Soulter
7e1c76a3f5 fix: fix errors in some commands with official OpenAI models
feat: revChatGPT supports persona settings
2023-09-08 13:38:48 +08:00
Soulter
f74665ff71 Update README.md 2023-09-08 12:01:39 +08:00
Soulter
a96d64fe88 fix: fix bug preventing images from being sent on QQ Channel 2023-09-04 10:14:46 +08:00
Soulter
fd2aa0cba6 bugfixes 2023-09-02 19:59:14 +08:00
Soulter
a92ea3db02 fix: fix admin QQ setting not shown when only the official channel SDK is running 2023-09-02 19:39:38 +08:00
Soulter
d7a513b640 fix: keyword command 2023-09-02 18:30:11 +08:00
Soulter
8a017ff693 bugfixes 2023-09-02 11:11:54 +08:00
Soulter
7d08f57b32 bugfixes 2023-09-02 10:31:13 +08:00
Soulter
6f4ad7890b bugfixes 2023-09-02 10:05:06 +08:00
Soulter
37488118a6 feat: 1. keyword command supports recording images;
2. implement a QQ Channel to gocq data-structure compatibility layer;
perf: 1. improve code structure;
2. log level can be set via environment variable
2023-09-02 00:24:13 +08:00
Soulter
b2da0778ae Merge branch 'master' of https://github.com/Soulter/QQChannelChatGPT 2023-09-01 15:12:18 +08:00
Soulter
cc887a5037 perf: improve code structure 2023-09-01 15:11:58 +08:00
Soulter
ca86a02d30 Update requirements.txt 2023-08-31 21:27:26 +08:00
Soulter
d652dc19a6 Update README.md 2023-08-31 18:39:37 +08:00
Soulter
6a56b7bff5 Update README.md 2023-08-31 18:35:29 +08:00
Soulter
81e8997852 feat: 1. support LLM web search and real-time messages.
2. add a channel compatibility layer; support sending images on channels
perf: 1. stability improvements
2. streamline parts of the code structure
2023-08-31 18:34:20 +08:00
Soulter
372a204ba9 feat: QQ Channel platform supports the myid command 2023-08-27 19:25:39 +08:00
Soulter
15ad5aae35 Update README.md 2023-08-20 17:44:39 +08:00
Soulter
fd2e9ef93f Update README.md 2023-08-20 14:48:40 +08:00
Soulter
5be3bf1f46 feat: web ChatGPT model supports Plus accounts, web search, and plugins 2023-08-20 14:26:13 +08:00
Soulter
4915c2d480 bugfixes 2023-08-20 14:04:50 +08:00
Soulter
bd56a19ac5 bugfixes 2023-08-20 14:03:44 +08:00
Soulter
da8fa2d905 bugfixes 2023-08-20 14:00:46 +08:00
Soulter
f56fd100d7 bugfixes 2023-08-20 14:00:25 +08:00
Soulter
b725a1a20c Merge branch 'master' of https://github.com/Soulter/QQChannelChatGPT 2023-08-20 13:56:53 +08:00
Soulter
ff1b5d02d2 perf: improve error handling after first startup 2023-08-20 13:56:50 +08:00
Soulter
d4882a8240 Update README.md 2023-08-15 21:18:47 +08:00
Soulter
e37f84c1ae Update README.md 2023-08-15 21:18:08 +08:00
Soulter
a23bd0a63c Update README.md 2023-08-15 16:21:34 +08:00
Soulter
ae00e84974 Update README.md 2023-08-15 15:48:24 +08:00
Soulter
53b3250978 Update README.md 2023-08-15 15:42:54 +08:00
Soulter
7f15a59a4e Update README.md 2023-08-15 15:39:16 +08:00
Soulter
6a164c9961 Update README.md 2023-08-15 15:35:23 +08:00
Soulter
bd779a3df3 Merge branch 'master' of https://github.com/Soulter/QQChannelChatGPT 2023-08-15 13:43:16 +08:00
Soulter
9ebb340c00 perf: improve plugin update logic; improve log output 2023-08-15 13:42:12 +08:00
Soulter
e8edbaae2d Update README.md 2023-08-12 12:47:41 +08:00
Soulter
2aab1f4c96 Update requirements.txt 2023-08-11 23:43:38 +08:00
Soulter
90ea621c65 Update main.py 2023-08-11 23:36:38 +08:00
Soulter
34bdceb41b Update README.md 2023-08-11 02:38:44 +08:00
Soulter
6d2ded1c6c Update README.md 2023-08-11 02:37:03 +08:00
Soulter
9b926048ca Update README.md 2023-08-11 02:35:43 +08:00
Soulter
9cf4f0f57d Merge branch 'master' of https://github.com/Soulter/QQChannelChatGPT 2023-08-06 11:04:30 +08:00
Soulter
9123b9d773 fix: fix markdown test window popping up on Windows startup 2023-08-06 11:04:25 +08:00
Soulter
f9258ae1e1 fix: fix error when generating images 2023-08-06 11:02:09 +08:00
Soulter
d8808de4a9 Update README.md 2023-08-03 22:30:54 +08:00
Soulter
afcb152d8d Update requirements.txt 2023-07-21 21:43:11 +08:00
Soulter
ff01174a1f Remove the QR code shown when launching the project from the GUI 2023-06-26 20:34:10 +08:00
Soulter
71f1625284 Update README.md 2023-06-18 13:30:08 +08:00
Soulter
19e3390083 Update README.md 2023-06-13 17:04:29 +08:00
Soulter
3015b90e12 bugfixes 2023-06-13 11:59:16 +08:00
Soulter
aa419f3ef9 perf: remove some commands from the help center display 2023-06-13 11:54:44 +08:00
Soulter
954236c284 fix: fix abnormal markdown width calculation 2023-06-13 11:54:20 +08:00
Soulter
72d6b3886b perf: increase fontsize in the markdown render 2023-06-13 11:44:34 +08:00
Soulter
a95046ecaf Merge branch 'master' of https://github.com/Soulter/QQChannelChatGPT 2023-06-13 10:09:33 +08:00
Soulter
ccdb11575b remove chore 2023-06-13 10:09:28 +08:00
Soulter
7e68b2f2be Update requirements.txt 2023-06-13 10:05:57 +08:00
Soulter
39efab1081 perf: enhanced markdown image render regex 2023-06-12 18:41:04 +08:00
Soulter
cc6707c8ce Merge branch 'master' of https://github.com/Soulter/QQChannelChatGPT 2023-06-12 18:26:16 +08:00
Soulter
09080adf84 perf: markdown renderer supports rendering images 2023-06-12 18:26:11 +08:00
Soulter
4cc72030c0 Update README.md 2023-06-12 08:32:05 +08:00
Soulter
a395902184 Update README.md 2023-06-12 08:30:58 +08:00
Soulter
5156f0584a Update README.md 2023-06-12 08:30:03 +08:00
Soulter
be171fe0d7 Update README.md 2023-06-12 08:14:55 +08:00
Soulter
ad4bf5e654 perf: update command add "update latest r" 2023-06-11 09:53:49 +08:00
Soulter
da7429ad62 perf: add markdown minheight 2023-06-11 09:51:53 +08:00
Soulter
b5f20ee282 chore: change fonts 2023-06-11 09:16:16 +08:00
Soulter
a9023d6f3a perf: support markdown rendering 2023-06-10 13:10:32 +00:00
Soulter
628b661a18 fix markdown 2023-06-10 13:05:22 +00:00
Soulter
638fe466f8 perf markdown 2023-06-10 12:51:34 +00:00
Soulter
a90adcf15c chore: change some markdown parameters 2023-06-10 12:24:37 +00:00
Soulter
7896066db6 perf: markdown perf 2023-06-10 12:22:32 +00:00
Soulter
b1314bcc31 perf: \t -> 4 blanks 2023-06-10 12:13:50 +00:00
Soulter
b1ecc929f2 perf: markdown render perf 2023-06-10 12:10:20 +00:00
Soulter
3aad42a886 perf: markdown render perf 2023-06-10 12:08:32 +00:00
Soulter
b6e87d3d31 perf: markdown render perf 2023-06-10 10:54:26 +00:00
Soulter
461eb4b9c7 perf: markdown render perf 2023-06-10 10:48:36 +00:00
Soulter
a89e92d5cc perf: markdown render perf 2023-06-10 10:33:14 +00:00
Soulter
6e69e88e91 perf: markdown render perf 2023-06-10 10:30:17 +00:00
Soulter
ae732c1dac perf: markdown render perf 2023-06-10 10:03:03 +00:00
Soulter
8e4a72c97b perf: markdown render perf 2023-06-10 10:02:23 +00:00
Soulter
bf0d82fe67 perf: markdown render perf 2023-06-10 10:01:23 +00:00
Soulter
987383f957 perf: markdown render perf 2023-06-10 10:00:22 +00:00
Soulter
c2cacf3281 perf: markdown render perf 2023-06-10 09:56:58 +00:00
Soulter
72878477dc perf: qq pic mode support markdown 2023-06-10 09:47:02 +00:00
Soulter
ad0d14420a feat: markdown render support 2023-06-10 09:39:37 +00:00
Soulter
5a7c60c81e fix: markdown render support 2023-06-10 09:38:19 +00:00
Soulter
6011840d1f feat: markdown render support 2023-06-10 09:32:49 +00:00
Soulter
9a2dffe299 feat: markdown render support 2023-06-10 09:27:02 +00:00
Soulter
e6770d2b12 Update README.md 2023-06-09 00:19:27 +08:00
Soulter
255db6ee57 Update README.md 2023-06-08 23:58:44 +08:00
Soulter
aa9ff99557 perf: better help 2023-06-06 12:31:14 +00:00
Soulter
5f024e9f30 fix: bugfixes 2023-06-06 12:28:55 +00:00
Soulter
cbdc7b7ce4 perf: better help 2023-06-06 12:23:48 +00:00
Soulter
5f636ca061 perf: improve text2img 2023-06-06 11:57:52 +00:00
Soulter
9fa3651170 perf: change word2img factors 2023-06-06 11:45:32 +00:00
Soulter
bba66788c3 fix: bugfixes 2023-06-06 11:41:26 +00:00
Soulter
200f3cce00 fix: bugfixes 2023-06-06 11:34:52 +00:00
Soulter
938490b739 fix: bugfixes 2023-06-06 11:31:08 +00:00
Soulter
e77e7b050a feat: QQ message plain texts to pic support #108 2023-06-06 11:21:55 +00:00
Soulter
bd2dbe5b63 feat: forwarded messages support non-text types 2023-06-03 14:21:47 +08:00
Soulter
c684d9cb4a fix: fix possible errors when some plugins call send 2023-06-03 10:49:12 +08:00
Soulter
7a39a9d45e feat: nick command restricted to admins 2023-06-01 22:09:51 +08:00
Soulter
2a3bb068db feat: Bing supports a custom proxy address 2023-05-31 21:17:47 +08:00
Soulter
1aa4384ca3 perf: improve log output length limit 2023-05-31 20:31:11 +08:00
Soulter
3b26b7b26c feat: make some CmdConfig methods static 2023-05-31 10:25:39 +08:00
Soulter
3b097d662b perf: add admin command newconfig to view the new config file 2023-05-31 10:18:08 +08:00
Soulter
c3acb3e77f feat: support customizing the group welcome message 2023-05-31 10:07:15 +08:00
Soulter
55d58d30a8 fix: fix startup error caused by a slip-up 2023-05-29 16:40:02 +08:00
Soulter
020a8ace9f feat: support customizing the QQ reply folding threshold
perf: improve loading of the new config file
2023-05-29 16:37:11 +08:00
Soulter
15f56ffc01 feat: long texts can be sent folded #104 2023-05-29 01:10:37 +08:00
Soulter
3724659b32 perf: improve starter 2023-05-24 18:24:18 +08:00
Soulter
df77152581 chore: update notes 2023-05-23 23:11:24 +08:00
Soulter
339ea5f12a feat: render more built-in preset commands as images 2023-05-23 11:01:56 +08:00
Soulter
36f96ccc97 feat: expiration handling for text-to-image output 2023-05-23 10:58:07 +08:00
Soulter
190e0a4971 feat: support text-to-image 2023-05-23 10:41:12 +08:00
Soulter
72638fac68 fix: fix QQ Channel not responding to @ mentions 2023-05-23 07:58:03 +08:00
Soulter
807d19e381 fix: fix no response to @ mentions in gocq group chats 2023-05-22 20:54:31 +08:00
Soulter
10870172b4 fix: fix DMs not being replied to 2023-05-22 20:17:53 +08:00
Soulter
1f7d3eccf9 fix: blank nick 2023-05-22 19:42:37 +08:00
Soulter
5fc58123bb fix: fix rate-limit message recognition 2023-05-22 19:31:24 +08:00
Soulter
c84c9f4aaa fix: fix gocq_loop 2023-05-22 18:47:33 +08:00
Soulter
cabe66fc0a perf: improve gocq platform message handling 2023-05-22 18:46:01 +08:00
Soulter
9f1315b06d perf: improve gocq platform message handling 2023-05-22 18:42:23 +08:00
Soulter
6f27f59730 fix: fix error when @-mentioning on GOCQ channels 2023-05-22 18:25:58 +08:00
Soulter
17815e7fe3 fix: improve the error when switching to a model that is not running 2023-05-22 18:23:16 +08:00
Soulter
596ae80fea perf: improve model recognition hints 2023-05-22 18:22:34 +08:00
Soulter
be2dc6ba70 feat: commands no longer require a message prefix
perf: improve performance
2023-05-22 18:10:22 +08:00
Soulter
e5aa8c8270 fix: fix group welcome message 2023-05-21 11:12:50 +08:00
Soulter
7c5ac41c55 chore: delete unnecessary logs 2023-05-21 11:02:00 +08:00
Soulter
c6cf1153c1 fix: fix access-denied error when deleting plugins on Windows;
fix permission group anomalies
2023-05-21 11:00:59 +08:00
Soulter
a68338b651 perf: improve Bing error messages 2023-05-21 10:23:50 +08:00
Soulter
bab46e912e fix: fix default nickname not taking effect;
fix admin QQ setup being skipped at startup
2023-05-21 10:18:51 +08:00
Soulter
4b158a1c89 feat: adapt GOCQ for QQ Channel 2023-05-20 15:30:07 +08:00
Soulter
6894900e46 fix: fix drawing command producing oil-painting-like images 2023-05-20 14:27:02 +08:00
Soulter
2e11d6e007 perf: log perf 2023-05-18 22:21:29 +08:00
Soulter
348381be15 Merge branch 'master' of https://github.com/Soulter/QQChannelChatGPT 2023-05-18 22:15:07 +08:00
Soulter
9024c28e70 perf: fix some logs 2023-05-18 22:15:01 +08:00
Soulter
ae1702901b Update README.md 2023-05-18 08:34:41 +08:00
Soulter
c1c0df85e6 Update README.md 2023-05-17 20:36:54 +08:00
Soulter
f3c6d9c02b fix: draw command 2023-05-16 15:06:39 +08:00
Soulter
811a885411 fix: draw command 2023-05-16 15:04:55 +08:00
Soulter
b4ec28b71c Merge branch 'master' of https://github.com/Soulter/QQChannelChatGPT 2023-05-16 11:57:04 +08:00
Soulter
cdf4a5321b perf: 1. the reverse-engineered ChatGPT library waits on messages instead of replying busy.
2. improve model loading flow
2023-05-16 11:56:59 +08:00
Soulter
d83f155f80 Update README.md 2023-05-15 20:54:19 +08:00
Soulter
4c402ed5bd perf: improve plugin identification 2023-05-15 20:43:42 +08:00
Soulter
ec5aff8d0b fix: update helloworld default plugin 2023-05-15 20:14:14 +08:00
Soulter
eae0d6c422 fix: fix some odd spots 2023-05-15 20:09:01 +08:00
Soulter
9c284b84b1 perf: upgrade the plugin protocol suite 2023-05-15 20:03:17 +08:00
Soulter
9f36e5ae05 perf: add a connectivity check before connecting to go-cqhttp 2023-05-15 18:33:07 +08:00
Soulter
7caa380e54 perf: improve console output length limit 2023-05-14 21:58:45 +08:00
Soulter
41d81bb60e perf: reduce console verbosity 2023-05-14 20:54:56 +08:00
Soulter
454a74f4e1 perf: colored logs - improve console display 2023-05-14 20:51:39 +08:00
Soulter
c5bdad02e5 fix: fix reply errors in the reverse-engineered ChatGPT library 2023-05-14 20:39:15 +08:00
Soulter
f46de3d518 perf: colored logs - beautify console display 2023-05-14 20:38:28 +08:00
Soulter
a3e21bea1a perf: remove unnecessary console messages 2023-05-14 19:54:47 +08:00
Soulter
d7e4707d5d perf: simplify console output 2023-05-14 19:43:12 +08:00
Soulter
a78ebf2fd7 feat: plugin dev mode 2023-05-14 18:20:28 +08:00
Soulter
bd11541678 perf: improve plugin update caching strategy 2023-05-14 18:16:12 +08:00
Soulter
0d99aa81e6 Merge branch 'master' of https://github.com/Soulter/QQChannelChatGPT 2023-05-14 17:40:16 +08:00
Soulter
f104d40d0a perf: improve plugin loading rules; update the plugin API spec
fix: fix rate-limit error
2023-05-14 17:40:13 +08:00
Soulter
0d69f8ab8a Update README.md 2023-05-13 17:03:32 +08:00
Soulter
66d1fc08b6 perf: remove unnecessary imports 2023-05-13 14:25:16 +08:00
Soulter
e32fc27728 perf: improve plugin deletion logic 2023-05-13 14:23:05 +08:00
Soulter
eec890cd02 perf: improve the plugin command's uninstall logic 2023-05-13 14:20:38 +08:00
Soulter
d30881e59b perf: add help info for the plugin command 2023-05-13 14:10:00 +08:00
Soulter
9afaf83368 perf: improve role verification for the plugin command 2023-05-13 14:07:40 +08:00
Soulter
33f9a9cfa0 feat: support showing the plugin list and plugin help 2023-05-13 14:04:15 +08:00
Soulter
bf72d5fa27 fix: fix pulling plugins that have dependencies 2023-05-13 13:55:22 +08:00
Soulter
567c29bcd6 perf: remove redundant logs 2023-05-13 13:45:04 +08:00
Soulter
dcdfe453fb perf: improve plugin class identification rules 2023-05-13 13:44:14 +08:00
Soulter
0d23c0900b perf: improve plugin uninstall logic 2023-05-13 13:28:18 +08:00
Soulter
86eda7bdf8 perf: improve plugin caching logic 2023-05-13 13:25:56 +08:00
Soulter
1e46525b0f perf: improve pip update logic 2023-05-13 13:02:24 +08:00
Soulter
8d41efea4d Update README.md 2023-05-13 11:43:03 +08:00
Soulter
f15d0eb0eb Update README.md 2023-05-13 11:10:56 +08:00
Soulter
1795362bcd perf: improve plugin invocation logic 2023-05-13 11:03:16 +08:00
Soulter
2bf9c82617 perf: better plugin handling and more open plugin capabilities 2023-05-13 10:54:57 +08:00
Soulter
33793a2053 chore: bump version 2023-05-12 09:21:57 +08:00
Soulter
656fe14af4 perf: improve role verification 2023-05-12 09:15:32 +08:00
Soulter
46197d49a4 perf: reorder language model startup 2023-05-12 09:08:06 +08:00
Soulter
843ab56f50 perf: improve roles 2023-05-12 09:02:29 +08:00
Soulter
6b4b52f3c5 perf: flesh out the role feature 2023-05-11 22:56:38 +08:00
Soulter
392e5cd592 chore: add default plugins 2023-05-11 22:43:26 +08:00
Soulter
d273019830 chore: delete some unnecessary files 2023-05-11 22:39:55 +08:00
Soulter
fd59ec4b6c fix: fix plugin command failing to clone plugins 2023-05-11 22:12:39 +08:00
Soulter
bf33ccafca fix: fix plugin command erroring when creating folders 2023-05-11 22:06:38 +08:00
Soulter
425936872d fix: fix abnormal result display in the plugin command 2023-05-11 22:01:57 +08:00
Soulter
6627b2e1e5 fix: fix plugin command errors 2023-05-11 22:00:18 +08:00
Soulter
323c2cecf8 feat: 新增插件指令 2023-05-11 21:52:44 +08:00
Soulter
5b1dd3dce9 feat: 插件支持 2023-05-11 21:35:25 +08:00
Soulter
54af770dfb fix: 修复keyword指令的一些问题 2023-05-08 20:30:36 +08:00
Soulter
30a48fea6e feat: QQ群的免@唤醒支持多个前缀(nick指令) #92 2023-05-08 20:17:51 +08:00
Soulter
cfd5fb1452 perf: keyword指令支持删除关键词 2023-05-08 19:43:26 +08:00
Soulter
a78984376f perf: 优化与go-cqhttp的通信 2023-04-26 14:52:09 +08:00
Soulter
9887cae43c fix: replit web fix 2023-04-25 20:57:22 +08:00
Soulter
e63fe60f8d Merge branch 'master' of https://github.com/Soulter/QQChannelChatGPT 2023-04-25 20:48:08 +08:00
Soulter
b0ac2d676c feat: Replit平台支持 2023-04-25 20:48:04 +08:00
Soulter
5ef515165c Merge pull request #94 from RockChinQ/patch-1
chore: 更换使用nakuru-project-idk
2023-04-25 19:43:02 +08:00
Rock Chin
e21d43f920 chore: switch to nakuru-project-idk 2023-04-25 12:46:41 +08:00
Soulter
3a80ffad88 perf: improve console output 2023-04-25 10:42:03 +08:00
Soulter
47506d60cd perf: improve pip detection 2023-04-25 10:29:16 +08:00
Soulter
b999b712b7 perf: improve error handling for the reverse-engineered libraries 2023-04-25 10:21:12 +08:00
Soulter
6860ba3f05 Merge branch 'master' of https://github.com/Soulter/QQChannelChatGPT 2023-04-25 09:38:21 +08:00
Soulter
02594867c0 fix: fix leading whitespace after the nickname in QQ messages 2023-04-25 09:38:17 +08:00
Soulter
250435f3e7 Update requirements.txt 2023-04-25 09:29:35 +08:00
Soulter
3c593fb6f7 Update README.md 2023-04-24 19:34:11 +08:00
Soulter
807cad5c48 fix: remove an unwarranted startup check on the QQ Channel appid 2023-04-24 08:00:45 +00:00
Soulter
e92ecdd3f8 Update README.md 2023-04-23 17:16:31 +08:00
Soulter
1c91079d8f Merge branch 'master' of https://github.com/Soulter/QQChannelChatGPT 2023-04-23 09:05:32 +00:00
Soulter
376b2fef40 fix: fix the pre-launch dependency check 2023-04-23 09:05:30 +00:00
Soulter
300f3b6df8 Update README.md 2023-04-23 16:52:07 +08:00
Soulter
6e6f6d5cd4 Update README.md 2023-04-23 16:51:46 +08:00
Soulter
077e54d0f1 Merge branch 'master' of https://github.com/Soulter/QQChannelChatGPT 2023-04-23 08:35:02 +00:00
Soulter
18ffaa2b91 perf: improve error handling across the board;
feat: check dependency libraries at startup
2023-04-23 08:31:22 +00:00
Soulter
a6555681a0 Update requirements.txt 2023-04-23 15:31:33 +08:00
Soulter
43ac0ef87c fix: remove judge_res 2023-04-22 08:09:22 +00:00
Soulter
754842be7c Update README.md 2023-04-22 14:35:22 +08:00
Soulter
5b3ee2dbe8 fix: fix blocked-word filtering not applying to reply content 2023-04-22 06:07:33 +00:00
Soulter
ca5a1ddc0b perf: auto-reset the Bing model after it hits the per-conversation limit 2023-04-22 11:27:43 +08:00
Soulter
c9821132ad fix: stop showing the message source on QQ Channel 2023-04-21 11:11:16 +08:00
Soulter
0641dca2a6 fix: fix several issues with QQ @-mentions 2023-04-21 01:04:57 +08:00
Soulter
fd983b9f5d fix: fix several issues 2023-04-21 01:01:54 +08:00
Soulter
7e1e51c450 feat: QQ supports @-ing the sender; drawing command support 2023-04-21 01:00:31 +08:00
Soulter
d912b990e4 fix: fix the drawing command not working 2023-04-21 00:45:59 +08:00
Soulter
8224aa87a5 fix: fix issues with auto-reset when the Bing model declines to continue a session 2023-04-20 09:07:55 +08:00
Soulter
4cb5abc7b6 fix: fix Bing sessions timing out and expiring 2023-04-20 08:59:58 +08:00
133 changed files with 7838 additions and 2273 deletions

23
.github/workflows/docker-image.yml vendored Normal file

@@ -0,0 +1,23 @@
name: Docker Image CI/CD
on:
release:
types: [published]
workflow_dispatch:
jobs:
publish-latest-docker-image:
runs-on: ubuntu-latest
name: Build and publish docker image
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Build image
run: |
git clone https://github.com/Soulter/AstrBot
cd AstrBot
docker build -t ${{ secrets.DOCKER_HUB_USERNAME }}/astrbot:latest .
- name: Publish image
run: |
docker login -u ${{ secrets.DOCKER_HUB_USERNAME }} -p ${{ secrets.DOCKER_HUB_PASSWORD }}
docker push ${{ secrets.DOCKER_HUB_USERNAME }}/astrbot:latest

27
.github/workflows/stale.yml vendored Normal file

@@ -0,0 +1,27 @@
# This workflow warns and then closes issues and PRs that have had no activity for a specified amount of time.
#
# You can adjust the behavior by modifying this file.
# For more information, see:
# https://github.com/actions/stale
name: Mark stale issues and pull requests
on:
schedule:
- cron: '21 23 * * *'
jobs:
stale:
runs-on: ubuntu-latest
permissions:
issues: write
pull-requests: write
steps:
- uses: actions/stale@v5
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
stale-issue-message: 'Stale issue message'
stale-pr-message: 'Stale pull request message'
stale-issue-label: 'no-issue-activity'
stale-pr-label: 'no-pr-activity'

9
.gitignore vendored

@@ -1,3 +1,12 @@
__pycache__
botpy.log
.vscode
data.db
configs/session
configs/config.yaml
**/.DS_Store
temp
cmd_config.json
data/*
cookies.json
logs/

8
Dockerfile Normal file

@@ -0,0 +1,8 @@
FROM python:3.10-slim
WORKDIR /AstrBot
COPY . /AstrBot/
RUN python -m pip install -r requirements.txt
CMD [ "python", "main.py" ]

298
README.md

@@ -1,279 +1,51 @@
<p align="center">
<img width="806" alt="image" src="https://github.com/Soulter/AstrBot/assets/37870767/c6f057d9-46d7-4144-8116-00a962941746">
</p>
<div align="center">
# QQChannelChatGPT
Use ChatGPT, NewBing and other language models on QQ and QQ Channel; stable, deploy once and use both at the same time.
Tutorial: https://soulter.top/posts/qpdg.html
Welcome to try it 😊 (Channel name: GPT机器人 | Channel ID: x42d56aki2) | QQ group: 322154837
<img src="https://user-images.githubusercontent.com/37870767/230417115-9dd3c9d5-6b6b-4928-8fe3-82f559208aab.JPG" width="300"></img>
[![GitHub release (latest by date)](https://img.shields.io/github/v/release/Soulter/AstrBot)](https://github.com/Soulter/AstrBot/releases/latest)
<img src="https://img.shields.io/badge/python-3.9+-blue.svg" alt="python">
<a href="https://hub.docker.com/r/soulter/astrbot"><img alt="Docker pull" src="https://img.shields.io/docker/pulls/soulter/astrbot.svg"/></a>
<a href="https://qm.qq.com/cgi-bin/qm/qr?k=EYGsuUTfe00_iOu9JTXS7_TEpMkXOvwv&jump_from=webapi&authKey=uUEMKCROfsseS+8IzqPjzV3y1tzy4AkykwTib2jNkOFdzezF9s9XknqnIaf3CDft">
<img alt="Static Badge" src="https://img.shields.io/badge/QQ群-322154837-purple">
</a>
<a href="https://astrbot.soulter.top/center">Deployment</a>
<a href="https://github.com/Soulter/AstrBot/issues">Report an issue</a>
<a href="https://astrbot.soulter.top/center/docs/%E5%BC%80%E5%8F%91/%E6%8F%92%E4%BB%B6%E5%BC%80%E5%8F%91">Plugin development</a>
</div>
## Features
## 🛠️ Features
Note: after deployment, if you use Bing or the reverse-engineered ChatGPT library, switch models first with `/bing` or `/revgpt`
🌍 Supported messaging platforms
- QQ groups and QQ Channel (OneBot, official QQ API)
- Telegram (supported by the [astrbot_plugin_telegram](https://github.com/Soulter/astrbot_plugin_telegram) plugin)
- WeChat (supported by the [astrbot_plugin_vchat](https://github.com/z2z63/astrbot_plugin_vchat) plugin)
Recent features
- One-command model switching (`/bing`, `/revgpt`, `/gpt` switch to NewBing, the reverse-engineered ChatGPT, and the official ChatGPT respectively)
- Hot updates
- QQ integration: chat on QQ and QQ Channel at the same time (https://github.com/Soulter/QQChannelChatGPT/issues/82)
🌍 Supported large language models
Supported AI language models (configure them under `configs/config.yaml`):
- Reverse-engineered ChatGPT library
- Official ChatGPT API
- ERNIE Bot (coming soon: https://github.com/Soulter/ERNIEBot, stars welcome)
- NewBing
- Bard (coming soon)
- OpenAI GPT and DALL·E series
- Claude (supported by the [LLMs plugin](https://github.com/Soulter/llms))
- HuggingChat (supported by the [LLMs plugin](https://github.com/Soulter/llms))
- Gemini (supported by the [LLMs plugin](https://github.com/Soulter/llms))
QQ Channel bot deployment tutorial: https://soulter.top/posts/qpdg.html
🌍 Bot capabilities:
- LLM chat, personas, web search
- Visual admin dashboard
- Handles messages from multiple platforms at once
- Per-user session isolation
- Plugin support
- Text-to-image replies (Markdown)
### Basic features
<details>
<summary>✅ Context-aware replies</summary>
## 🧩 Plugin support
- Recent conversation turns are sent to the API so the model can reply in context
For plugin usage and the plugin list, see: [AstrBot docs - Plugins](https://astrbot.soulter.top/center/docs/%E4%BD%BF%E7%94%A8/%E6%8F%92%E4%BB%B6)
- You can tune `total_token_limit` in `configs/config.yaml` to roughly control the cache size.
</details>
## ✨ Demo
<details>
<summary>✅ Automatic key switching on quota exhaustion</summary>
- When a key runs over quota, the program switches to another OpenAI key automatically
</details>
<details>
<summary>✅ Statistics on channels, message counts, etc.</summary>
- Implements simple statistics
</details>
<details>
<summary>✅ Concurrent processing, fast replies</summary>
- Uses coroutines; in theory up to 5 replies per second per sub-channel
</details>
<details>
<summary>✅ Persistent history dumps; nothing is lost on restart</summary>
- History is stored locally in the built-in SQLite database
- Dumps run on a timer; change `dump_history_interval` in `config.yaml` to adjust the interval, in minutes.
</details>
<details>
<summary>✅ Rich command controls</summary>
- See `Commands` below
</details>
<details>
<summary>✅ Stable official API</summary>
- Uses the official API instead of the reverse-engineered ChatGPT interface; stable and convenient.
- The QQ Channel bot framework is QQ's own open-source framework; stable.
</details>
> About tokens: a token is roughly a word unit for the AI, though not exactly one word; the `text-davinci-003` model supports up to `4097` tokens. When sending a message, the bot packs the user's chat history and sends it to ChatGPT, so the token count accumulates; to keep the conversation context coherent, tokens are cached.
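The token-cache behaviour described above can be sketched as follows. This is a minimal illustration with hypothetical names (`estimate_tokens`, `trim_history`, and the 4-characters-per-token heuristic are assumptions), not the project's actual implementation:

```python
# Rough illustration of the token cache: history is kept turn by turn, and
# the oldest turns are dropped once the estimated token total exceeds the
# configured budget (cf. total_tokens_limit in configs/config.yaml).

def estimate_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token for English text.
    return max(1, len(text) // 4)

def trim_history(history: list[str], token_limit: int) -> list[str]:
    """Drop the oldest turns until the estimated total fits the budget."""
    total = sum(estimate_tokens(t) for t in history)
    trimmed = list(history)
    while trimmed and total > token_limit:
        total -= estimate_tokens(trimmed.pop(0))
    return trimmed

kept = trim_history(["a" * 40, "b" * 40, "c" * 40], token_limit=25)
# the oldest turn is dropped; the two newest turns fit the 25-token budget
```

The real provider code would use the model's own tokenizer instead of a character heuristic; the trimming loop is the same idea.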
### Commands
#### Official OpenAI API
In a channel, `@` the bot first and then type the command; on QQ, for now prefix the message with `ai ` instead (no @ needed)
- `/reset` reset the prompt
- `/his` view history (each user has an independent session)
- `/his [page]` view a given page of history, e.g. `/his 2` for page 2
- `/token` view the current total of cached tokens
- `/count` view statistics
- `/status` view the ChatGPT configuration
- `/help` show help
- `/key` add a key at runtime
- `/set` persona settings panel
- `/keyword nihao 你好` set a keyword reply: nihao -> 你好
- `/bing` switch to Bing
- `/revgpt` switch to the reverse-engineered ChatGPT library
- `/画` draw a picture
#### Bing language model
- `/reset` reset the prompt
- `/gpt` switch to the official OpenAI API
- `/revgpt` switch to the reverse-engineered ChatGPT library
#### Reverse-engineered ChatGPT language model
- `/gpt` switch to the official OpenAI API
- `/bing` switch to Bing
* Model-switch commands also work as one-off replies, e.g. `/bing 你好` uses the Bing model just once
## 📰 Usage
**Detailed deployment tutorial**: https://soulter.top/posts/qpdg.html
**Windows users: the one-click Windows installer is recommended; download the latest version from Releases (Beta)**
If you hit errors, check the issues first; report in the channel only if that does not help.
### Install dependencies
```shell
pip install -r requirements.txt
```
> ⚠ Python version must be >= 3.9
### Configuration
**Detailed deployment tutorial**: https://soulter.top/posts/qpdg.html
### Launch
- Run main.py
## ⚙ Configuration file reference
```yaml
# If you don't know how to deploy, see: https://soulter.top/posts/qpdg.html
# A key is no longer strictly required; if you have an OpenAI or Bing account but no key, consider the reverse-engineered libraries below
############### Platform settings #################
# QQ Channel bot
# appid and token from the QQ Open Platform
# q.qq.com
# enable: true to enable, false to disable
qqbot:
enable: true
appid:
token:
# QQ bot
# enable: true to enable, false to disable
# Requires go-cqhttp.
# Docs: https://docs.go-cqhttp.org/
# Paste the following into the servers section of the go-cqhttp config, otherwise it will not work
# Start go-cqhttp before starting this program
#
# servers:
# - http:
# host: 127.0.0.1
# version: 0
# port: 5700
# timeout: 5
# - ws:
# address: 127.0.0.1:6700
# middlewares:
# <<: *default
gocqbot:
enable: false
# Whether each user gets an independent session
uniqueSessionMode: false
# QChannelBot version; do not modify this field, or bugs may occur
version: 3.0
# [Beta] History dump interval (minutes)
dump_history_interval: 10
# A user may send at most `count` messages within `time` seconds
limit:
time: 60
count: 5
# Announcement
notice: "此机器人由Github项目QQChannelChatGPT驱动。"
# Whether to enable direct messages
# true: channel members can DM the bot.
# false: channel members cannot DM the bot.
direct_message_mode: true
# System proxy
# http_proxy: http://localhost:7890
# https_proxy: http://localhost:7890
# Custom reply prefix, e.g. [Rev]; be sure to quote it to avoid unnecessary bugs.
reply_prefix:
openai_official: "[GPT]"
rev_chatgpt: "[Rev]"
rev_edgegpt: "[RevBing]"
# Baidu content moderation service
# New users get 50,000 free calls. https://cloud.baidu.com/doc/ANTIPORN/index.html
baidu_aip:
enable: false
app_id:
api_key:
secret_key:
<img width="900" alt="image" src="https://github.com/Soulter/AstrBot/assets/37870767/824d1ff3-7b85-481c-b795-8e62dedb9fd7">
############### Language model settings #################
# Official OpenAI API
# Note: automatic switching across multiple keys is supported; format:
# key:
# - sk-xxxxxx
# - sk-xxxxxx
# Use the format above in the non-commented section below
# api_base: you can use cloud functions (e.g. Tencent, Alibaba) to avoid the endpoint being blocked in mainland China.
# See:
# https://github.com/Ice-Hazymoon/openai-scf-proxy
# https://github.com/Soulter/QQChannelChatGPT/issues/42
# Set to none to use the official default API endpoint
openai:
key:
-
api_base: none
# GPT configuration; the default language model is gpt-3.5-turbo
chatGPTConfigs:
model: gpt-3.5-turbo
max_tokens: 3000
temperature: 0.9
top_p: 1
frequency_penalty: 0
presence_penalty: 0
total_tokens_limit: 5000
# Reverse-engineered ERNIE Bot [currently unavailable; do not use]
rev_ernie:
enable: false
# Reverse-engineered New Bing
# Create cookies.json in the project root and paste your cookies into it.
# See: https://soulter.top/posts/qpdg.html
rev_edgegpt:
enable: false
# Reverse-engineered ChatGPT library
# https://github.com/acheong08/ChatGPT
# Pros: free (no free-quota limit);
# Cons: relatively slow. OpenAI rate limit: 50 requests per hour for free accounts; you can work around it by rotating multiple accounts
# With enable: true, the normal official API calls above stop being used and this reverse-engineered project is used instead
#
# Multiple accounts help ensure every request gets a timely reply.
# account format:
# account:
# - email: first account
# password: first account's password
# - email: second account
# password: second account's password
# - ....
# Logging in with an access_token is supported
# Example:
# - session_token: xxxxx
# - access_token: xxxx
# Follow this format exactly.
# email/password login for the reverse-engineered ChatGPT library does not work; log in with an access_token instead
# How to get an access_token: https://soulter.top/posts/qpdg.html
rev_ChatGPT:
enable: false
account:
- access_token:
```


@@ -0,0 +1,10 @@
# helloworld
AstrBot plugin template
A template plugin for AstrBot's plugin feature
# Support
[Documentation](https://astrbot.soulter.top/center/docs/%E5%BC%80%E5%8F%91/%E6%8F%92%E4%BB%B6%E5%BC%80%E5%8F%91/)


@@ -0,0 +1 @@
https://github.com/Soulter/helloworld


@@ -0,0 +1,32 @@
flag_not_support = False
try:
from util.plugin_dev.api.v1.bot import Context, AstrMessageEvent, CommandResult
from util.plugin_dev.api.v1.config import *
except ImportError:
flag_not_support = True
print("导入接口失败。请升级到 AstrBot 最新版本。")
'''
Note: rename the plugin class in the form XXXPlugin, or Main.
Tip: fork this template repo, clone it into the addons/plugins/ directory under the bot folder, then open it in PyCharm/VS Code etc. for a better experience (autocompletion and so on)
'''
class HelloWorldPlugin:
"""
AstrBot passes `context` to the plugin.
- context.register_commands: register commands
- context.register_task: register tasks
- context.message_handler: message handler (for platform plugins)
"""
def __init__(self, context: Context) -> None:
self.context = context
self.context.register_commands("helloworld", "helloworld", "内置测试指令。", 1, self.helloworld)
"""
Command handler.
- Takes two parameters: message: AstrMessageEvent, context: Context
- Returns a CommandResult object
"""
def helloworld(self, message: AstrMessageEvent, context: Context):
return CommandResult().message("Hello, World!")


@@ -0,0 +1,6 @@
name: helloworld # Unique identifier of the plugin.
desc: 这是 AstrBot 的默认插件。
help:
version: v1.3 # Plugin version. Format: v1.1.1 or v1.1
author: Soulter # Author
repo: https://github.com/Soulter/helloworld # Plugin repository URL

110
astrbot/bootstrap.py Normal file

@@ -0,0 +1,110 @@
import asyncio
import traceback
from astrbot.message.handler import MessageHandler
from astrbot.persist.helper import dbConn
from dashboard.server import AstrBotDashBoard
from model.provider.provider import Provider
from model.command.manager import CommandManager
from model.command.internal_handler import InternalCommandHandler
from model.plugin.manager import PluginManager
from model.platform.manager import PlatformManager
from typing import Dict, List, Union
from type.types import Context
from SparkleLogging.utils.core import LogManager
from logging import Logger
from util.cmd_config import CmdConfig
from util.metrics import MetricUploader
from util.config_utils import *
from util.updator.astrbot_updator import AstrBotUpdator
logger: Logger = LogManager.GetLogger(log_name='astrbot')
class AstrBotBootstrap():
def __init__(self) -> None:
self.context = Context()
self.config_helper: CmdConfig = CmdConfig()
# load configs and ensure the backward compatibility
init_configs()
try_migrate_config()
self.configs = inject_to_context(self.context)
logger.info("AstrBot v" + self.context.version)
self.context.config_helper = self.config_helper
# apply proxy settings
http_proxy = self.context.base_config.get("http_proxy")
https_proxy = self.context.base_config.get("https_proxy")
if http_proxy:
os.environ['HTTP_PROXY'] = http_proxy
if https_proxy:
os.environ['HTTPS_PROXY'] = https_proxy
os.environ['NO_PROXY'] = 'https://api.sgroup.qq.com'
if http_proxy and https_proxy:
logger.info(f"使用代理: {http_proxy}, {https_proxy}")
else:
logger.info("未使用代理。")
async def run(self):
self.command_manager = CommandManager()
self.plugin_manager = PluginManager(self.context)
self.updator = AstrBotUpdator()
self.cmd_handler = InternalCommandHandler(self.command_manager, self.plugin_manager)
self.db_conn_helper = dbConn()
# load llm provider
self.llm_instance: Provider = None
self.load_llm()
self.message_handler = MessageHandler(self.context, self.command_manager, self.db_conn_helper, self.llm_instance)
self.platfrom_manager = PlatformManager(self.context, self.message_handler)
self.dashboard = AstrBotDashBoard(self.context, plugin_manager=self.plugin_manager, astrbot_updator=self.updator)
self.metrics_uploader = MetricUploader(self.context)
self.context.metrics_uploader = self.metrics_uploader
self.context.updator = self.updator
self.context.plugin_updator = self.plugin_manager.updator
self.context.message_handler = self.message_handler
# load plugins, plugins' commands.
self.load_plugins()
self.command_manager.register_from_pcb(self.context.plugin_command_bridge)
# load platforms
platform_tasks = self.load_platform()
# load metrics uploader
metrics_upload_task = asyncio.create_task(self.metrics_uploader.upload_metrics(), name="metrics-uploader")
# load dashboard
self.dashboard.run_http_server()
dashboard_task = asyncio.create_task(self.dashboard.ws_server(), name="dashboard")
tasks = [metrics_upload_task, dashboard_task, *platform_tasks, *self.context.ext_tasks]
tasks = [self.handle_task(task) for task in tasks]
await asyncio.gather(*tasks)
async def handle_task(self, task: Union[asyncio.Task, asyncio.Future]):
while True:
try:
result = await task
return result
except Exception as e:
logger.error(traceback.format_exc())
logger.error(f"{task.get_name()} 任务发生错误,将在 5 秒后重试。")
await asyncio.sleep(5)
def load_llm(self):
if 'openai' in self.configs and \
len(self.configs['openai']['key']) and \
self.configs['openai']['key'][0] is not None:
from model.provider.openai_official import ProviderOpenAIOfficial
from model.command.openai_official_handler import OpenAIOfficialCommandHandler
self.openai_command_handler = OpenAIOfficialCommandHandler(self.command_manager)
self.llm_instance = ProviderOpenAIOfficial(self.context)
self.openai_command_handler.set_provider(self.llm_instance)
logger.info("已启用 OpenAI API 支持。")
def load_plugins(self):
self.plugin_manager.plugin_reload()
def load_platform(self):
return self.platfrom_manager.load_platforms()
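The `handle_task` wrapper in bootstrap.py above retries a task after a failure. One subtlety: an `asyncio.Task` that has already raised cannot be awaited back to life, so a restartable supervisor has to re-create the coroutine for each attempt. A hedged sketch of that pattern (illustrative only, with assumed names; not the project's code):

```python
import asyncio

async def supervise(coro_factory, name: str, retry_delay: float = 5.0, max_retries: int = 3):
    """Run coro_factory(); on failure, wait and re-create the coroutine.

    A finished (failed) Task re-raises immediately when awaited again, so
    the factory is called anew to build a fresh coroutine per attempt.
    """
    for attempt in range(max_retries + 1):
        try:
            return await coro_factory()
        except Exception as e:
            if attempt == max_retries:
                raise
            print(f"{name} failed ({e!r}); retrying in {retry_delay}s")
            await asyncio.sleep(retry_delay)

async def main():
    calls = {"n": 0}

    async def flaky():
        # Fails twice, then succeeds on the third attempt.
        calls["n"] += 1
        if calls["n"] < 3:
            raise RuntimeError("boom")
        return "ok"

    got = await supervise(flaky, "flaky-task", retry_delay=0.01)
    return got, calls["n"]

result = asyncio.run(main())
# result == ("ok", 3): two failed attempts, then success on the third
```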


@@ -1,14 +1,17 @@
from aip import AipContentCensor
class BaiduJudge:
def __init__(self, baidu_configs) -> None:
if 'app_id' in baidu_configs and 'api_key' in baidu_configs and 'secret_key' in baidu_configs:
self.app_id = str(baidu_configs['app_id'])
self.api_key = baidu_configs['api_key']
self.secret_key = baidu_configs['secret_key']
self.client = AipContentCensor(self.app_id, self.api_key, self.secret_key)
self.client = AipContentCensor(
self.app_id, self.api_key, self.secret_key)
else:
raise ValueError("Baidu configs error! 请填写百度内容审核服务相关配置!")
def judge(self, text):
res = self.client.textCensorUserDefined(text)
if 'conclusionType' not in res:
@@ -23,4 +26,4 @@ class BaiduJudge:
for i in res['data']:
info += f"{i['msg']}\n"
info += "\n判断结果:"+res['conclusion']
return False, info
return False, info

198
astrbot/message/handler.py Normal file

@@ -0,0 +1,198 @@
import time
import re
import asyncio
import traceback
import astrbot.message.unfit_words as uw
from typing import Dict
from astrbot.persist.helper import dbConn
from model.provider.provider import Provider
from model.command.manager import CommandManager
from type.message_event import AstrMessageEvent, MessageResult
from type.types import Context
from type.command import CommandResult
from SparkleLogging.utils.core import LogManager
from logging import Logger
from nakuru.entities.components import Image
import util.agent.web_searcher as web_searcher
logger: Logger = LogManager.GetLogger(log_name='astrbot')
class RateLimitHelper():
def __init__(self, context: Context) -> None:
self.user_rate_limit: Dict[int, int] = {}
self.rate_limit_time: int = 60
self.rate_limit_count: int = 10
self.user_frequency = {}
if 'limit' in context.base_config:
if 'count' in context.base_config['limit']:
self.rate_limit_count = context.base_config['limit']['count']
if 'time' in context.base_config['limit']:
self.rate_limit_time = context.base_config['limit']['time']
def check_frequency(self, session_id: str) -> bool:
'''
Check message frequency.
'''
ts = int(time.time())
if session_id in self.user_frequency:
if ts-self.user_frequency[session_id]['time'] > self.rate_limit_time:
self.user_frequency[session_id]['time'] = ts
self.user_frequency[session_id]['count'] = 1
return True
else:
if self.user_frequency[session_id]['count'] >= self.rate_limit_count:
return False
else:
self.user_frequency[session_id]['count'] += 1
return True
else:
t = {'time': ts, 'count': 1}
self.user_frequency[session_id] = t
return True
class ContentSafetyHelper():
def __init__(self, context: Context) -> None:
self.baidu_judge = None
if 'baidu_aip' in context.base_config and \
'enable' in context.base_config['baidu_aip'] and \
context.base_config['baidu_aip']['enable']:
try:
from astrbot.message.baidu_aip_judge import BaiduJudge
self.baidu_judge = BaiduJudge(context.base_config['baidu_aip'])
logger.info("已启用百度 AI 内容审核。")
except BaseException as e:
logger.error("百度 AI 内容审核初始化失败。")
logger.error(e)
async def check_content(self, content: str) -> bool:
'''
Check whether the text content is acceptable.
'''
for i in uw.unfit_words_q:
matches = re.match(i, content.strip(), re.I | re.M)
if matches:
return False
if self.baidu_judge != None:
check, msg = await asyncio.to_thread(self.baidu_judge.judge, content)
if not check:
logger.info(f"百度 AI 内容审核发现以下违规:{msg}")
return False
return True
def filter_content(self, content: str) -> str:
'''
Filter the text content.
'''
for i in uw.unfit_words_q:
content = re.sub(i, "*", content, flags=re.I)
return content
def baidu_check(self, content: str) -> bool:
'''
Check the text content with Baidu AI content moderation.
'''
if self.baidu_judge != None:
check, msg = self.baidu_judge.judge(content)
if not check:
logger.info(f"百度 AI 内容审核发现以下违规:{msg}")
return False
return True
class MessageHandler():
def __init__(self, context: Context,
command_manager: CommandManager,
persist_manager: dbConn,
provider: Provider) -> None:
self.context = context
self.command_manager = command_manager
self.persist_manager = persist_manager
self.rate_limit_helper = RateLimitHelper(context)
self.content_safety_helper = ContentSafetyHelper(context)
self.llm_wake_prefix = self.context.base_config['llm_wake_prefix']
self.nicks = self.context.nick
self.provider = provider
self.reply_prefix = self.context.reply_prefix
async def handle(self, message: AstrMessageEvent, llm_provider: Provider = None) -> MessageResult:
'''
Handle the message event, including commands, plugins, etc.
`llm_provider`: the provider to use for LLM. If None, use the default provider
'''
msg_plain = message.message_str.strip()
provider = llm_provider if llm_provider else self.provider
inner_provider = False if llm_provider else True
self.persist_manager.record_message(message.platform.platform_name, message.session_id)
# TODO: this should be configurable
if not message.message_str:
return MessageResult("Hi~")
# check the rate limit
if not self.rate_limit_helper.check_frequency(message.message_obj.sender.user_id):
return MessageResult(f'你的发言超过频率限制(╯▔皿▔)╯。\n管理员设置 {self.rate_limit_helper.rate_limit_time} 秒内只能提问{self.rate_limit_helper.rate_limit_count} 次。')
# remove the nick prefix
for nick in self.nicks:
if msg_plain.startswith(nick):
msg_plain = msg_plain.removeprefix(nick)
break
# scan candidate commands
cmd_res = await self.command_manager.scan_command(message, self.context)
if cmd_res:
assert(isinstance(cmd_res, CommandResult))
return MessageResult(
cmd_res.message_chain,
is_command_call=True
)
# check if the message is a llm-wake-up command
if not msg_plain.startswith(self.llm_wake_prefix):
return
if not provider:
return
# check the content safety
if not await self.content_safety_helper.check_content(msg_plain):
return MessageResult("信息包含违规内容,由于机器人管理者开启内容安全审核,你的此条消息已被停止继续处理。")
image_url = None
for comp in message.message_obj.message:
if isinstance(comp, Image):
image_url = comp.url if comp.url else comp.file
break
web_search = self.context.web_search
if not web_search and msg_plain.startswith("ws"):
# leverage web search feature
web_search = True
msg_plain = msg_plain.removeprefix("ws").strip()
try:
if web_search:
llm_result = await web_searcher.web_search(msg_plain, provider, message.session_id, inner_provider)
else:
llm_result = await provider.text_chat(
msg_plain, message.session_id, image_url
)
except BaseException as e:
logger.error(traceback.format_exc())
logger.error(f"LLM 调用失败。")
return MessageResult("AstrBot 请求 LLM 资源失败:" + str(e))
# concatenate the reply prefix
if self.reply_prefix:
llm_result = self.reply_prefix + llm_result
# mask the unsafe content
llm_result = self.content_safety_helper.filter_content(llm_result)
check = self.content_safety_helper.baidu_check(llm_result)
if not check:
return MessageResult("LLM 输出的信息包含违规内容,由于机器人管理者开启了内容安全审核,该条消息已拦截。")
return MessageResult(llm_result)
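`RateLimitHelper.check_frequency` above implements a fixed-window limiter: the first message in a window starts the window, a counter is incremented per message, and messages are rejected once the counter reaches the limit until the window has elapsed. The same logic in isolation, as a hedged standalone sketch (hypothetical class; the injectable clock is an assumption added for testability):

```python
import time

class FixedWindowLimiter:
    """Allow at most `limit` events per `window` seconds, per key.

    Mirrors the check_frequency logic: the first event in a window starts
    the window; once `limit` events are counted, further events are
    rejected until the window has elapsed.
    """
    def __init__(self, limit: int, window: float, clock=time.time):
        self.limit = limit
        self.window = window
        self.clock = clock
        self.state = {}  # key -> {"start": ts, "count": n}

    def allow(self, key: str) -> bool:
        now = self.clock()
        s = self.state.get(key)
        if s is None or now - s["start"] > self.window:
            # Start a fresh window for this key.
            self.state[key] = {"start": now, "count": 1}
            return True
        if s["count"] >= self.limit:
            return False
        s["count"] += 1
        return True

# With a fake clock: two events allowed per 60 s window, the third is
# rejected, and events are allowed again after the window passes.
t = [0.0]
limiter = FixedWindowLimiter(limit=2, window=60, clock=lambda: t[0])
results = [limiter.allow("u1"), limiter.allow("u1"), limiter.allow("u1")]
t[0] = 61.0
results.append(limiter.allow("u1"))
# results == [True, True, False, True]
```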


@@ -0,0 +1,114 @@
# Collected from the internet.
# Blocked source words
# Regex supported
unfit_words_q = ["习近平",
"胡锦涛",
"江泽民",
"温家宝",
"李克强",
"李长春",
"毛泽东",
"邓小平",
"周恩来",
"社会主义",
"共产党",
"共产主义",
"大陆官方",
"北京政权",
"中华帝国",
"中国政府",
"共狗",
"六四事件",
"天安门",
"六四",
"政治局常委",
"学潮",
"八九",
"二十大",
"民进党",
"台独",
"台湾独立",
"台湾国",
"国民党",
"台湾民国",
"中华民国",
"pornhub",
"Pornhub",
"作爱",
"做爱",
"性交",
"自慰",
"阴茎",
"淫妇",
"肛交",
"交配",
"性关系",
"性活动",
"色情",
"色图",
"裸体",
"小穴",
"淫荡",
"性爱",
"港独",
"法轮功",
"六四"]
# Filtered reply words
unfit_words = ["习近平",
"胡锦涛",
"江泽民",
"温家宝",
"李克强",
"李长春",
"毛泽东",
"邓小平",
"周恩来",
"社会主义",
"共产党",
"共产主义",
"大陆官方",
"北京政权",
"中华帝国",
"中国政府",
"共狗",
"六四事件",
"天安门",
"六四",
"政治局常委",
"学潮",
"八九",
"二十大",
"民进党",
"台独",
"台湾独立",
"台湾国",
"国民党",
"台湾民国",
"中华民国",
"pornhub",
"Pornhub",
"作爱",
"做爱",
"性交",
"自慰",
"阴茎",
"淫妇",
"肛交",
"交配",
"性关系",
"性活动",
"色情",
"色图",
"涩图",
"裸体",
"小穴",
"淫荡",
"性爱",
"中华人民共和国",
"党中央",
"中央军委主席",
"台湾",
"港独",
"法轮功",
"PRC"]

269
astrbot/persist/helper.py Normal file

@@ -0,0 +1,269 @@
import sqlite3
import os
import shutil
import time
from typing import Tuple
class dbConn():
def __init__(self):
db_path = "data/data.db"
if os.path.exists("data.db"):
shutil.copy("data.db", db_path)
with open(os.path.dirname(__file__) + "/initialization.sql", "r") as f:
sql = f.read()
self.conn = sqlite3.connect(db_path)
self.conn.text_factory = str
c = self.conn.cursor()
c.executescript(sql)
self.conn.commit()
def record_message(self, platform, session_id):
curr_ts = int(time.time())
self.increment_stat_session(platform, session_id, 1)
self.increment_stat_message(curr_ts, 1)
self.increment_stat_platform(curr_ts, platform, 1)
def insert_session(self, qq_id, history):
conn = self.conn
c = conn.cursor()
c.execute(
'''
INSERT INTO tb_session(qq_id, history) VALUES (?, ?)
''', (qq_id, history)
)
conn.commit()
def update_session(self, qq_id, history):
conn = self.conn
c = conn.cursor()
c.execute(
'''
UPDATE tb_session SET history = ? WHERE qq_id = ?
''', (history, qq_id)
)
conn.commit()
def get_session(self, qq_id):
conn = self.conn
c = conn.cursor()
c.execute(
'''
SELECT * FROM tb_session WHERE qq_id = ?
''', (qq_id, )
)
return c.fetchone()
def get_all_session(self):
conn = self.conn
c = conn.cursor()
c.execute(
'''
SELECT * FROM tb_session
'''
)
return c.fetchall()
def check_session(self, qq_id):
conn = self.conn
c = conn.cursor()
c.execute(
'''
SELECT * FROM tb_session WHERE qq_id = ?
''', (qq_id, )
)
return c.fetchone() is not None
def delete_session(self, qq_id):
conn = self.conn
c = conn.cursor()
c.execute(
'''
DELETE FROM tb_session WHERE qq_id = ?
''', (qq_id, )
)
conn.commit()
def increment_stat_session(self, platform, session_id, cnt):
# if not exist, insert
conn = self.conn
c = conn.cursor()
if self.check_stat_session(platform, session_id):
c.execute(
'''
UPDATE tb_stat_session SET cnt = cnt + ? WHERE platform = ? AND session_id = ?
''', (cnt, platform, session_id)
)
conn.commit()
else:
c.execute(
'''
INSERT INTO tb_stat_session(platform, session_id, cnt) VALUES (?, ?, ?)
''', (platform, session_id, cnt)
)
conn.commit()
def check_stat_session(self, platform, session_id):
conn = self.conn
c = conn.cursor()
c.execute(
'''
SELECT * FROM tb_stat_session WHERE platform = ? AND session_id = ?
''', (platform, session_id)
)
return c.fetchone() is not None
def get_all_stat_session(self):
conn = self.conn
c = conn.cursor()
c.execute(
'''
SELECT * FROM tb_stat_session
'''
)
return c.fetchall()
def get_session_cnt_total(self):
conn = self.conn
c = conn.cursor()
c.execute(
'''
SELECT COUNT(*) FROM tb_stat_session
'''
)
return c.fetchone()[0]
def increment_stat_message(self, ts, cnt):
# Bucketed by the hour; ts is in seconds.
# Find the nearest hour bucket; insert a row if it does not exist
conn = self.conn
c = conn.cursor()
ok, new_ts = self.check_stat_message(ts)
if ok:
c.execute(
'''
UPDATE tb_stat_message SET cnt = cnt + ? WHERE ts = ?
''', (cnt, new_ts)
)
conn.commit()
else:
c.execute(
'''
INSERT INTO tb_stat_message(ts, cnt) VALUES (?, ?)
''', (new_ts, cnt)
)
conn.commit()
def check_stat_message(self, ts) -> Tuple[bool, int]:
# Round down to the local top-of-hour timestamp
ts = ts - ts % 3600
conn = self.conn
c = conn.cursor()
c.execute(
'''
SELECT * FROM tb_stat_message WHERE ts = ?
''', (ts, )
)
if c.fetchone() is not None:
return True, ts
else:
return False, ts
def get_last_24h_stat_message(self):
# Get message stats for the last 24 hours
conn = self.conn
c = conn.cursor()
c.execute(
'''
SELECT * FROM tb_stat_message WHERE ts > ?
''', (time.time() - 86400, )
)
return c.fetchall()
def get_message_cnt_total(self) -> int:
conn = self.conn
c = conn.cursor()
c.execute(
'''
SELECT SUM(cnt) FROM tb_stat_message
'''
)
return c.fetchone()[0]
def increment_stat_platform(self, ts, platform, cnt):
# Bucketed by the hour; ts is in seconds.
# Find the nearest hour bucket; insert a row if it does not exist
conn = self.conn
c = conn.cursor()
ok, new_ts = self.check_stat_platform(ts, platform)
if ok:
c.execute(
'''
UPDATE tb_stat_platform SET cnt = cnt + ? WHERE ts = ? AND platform = ?
''', (cnt, new_ts, platform)
)
conn.commit()
else:
c.execute(
'''
INSERT INTO tb_stat_platform(ts, platform, cnt) VALUES (?, ?, ?)
''', (new_ts, platform, cnt)
)
conn.commit()
def check_stat_platform(self, ts, platform):
# Round down to the local top-of-hour timestamp
ts = ts - ts % 3600
conn = self.conn
c = conn.cursor()
c.execute(
'''
SELECT * FROM tb_stat_platform WHERE ts = ? AND platform = ?
''', (ts, platform)
)
if c.fetchone() is not None:
return True, ts
else:
return False, ts
def get_last_24h_stat_platform(self):
# Get per-platform stats for the last 24 hours
conn = self.conn
c = conn.cursor()
c.execute(
'''
SELECT * FROM tb_stat_platform WHERE ts > ?
''', (time.time() - 86400, )
)
return c.fetchall()
def get_platform_cnt_total(self) -> int:
conn = self.conn
c = conn.cursor()
c.execute(
'''
SELECT platform, SUM(cnt) FROM tb_stat_platform GROUP BY platform
'''
)
# return c.fetchall()
platforms = []
ret = c.fetchall()
for i in ret:
# platforms[i[0]] = i[1]
platforms.append({
"name": i[0],
"count": i[1]
})
return platforms
def close(self):
self.conn.close()
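The stat tables above bucket counters by the hour: `check_stat_message` rounds a timestamp down with `ts - ts % 3600`, and `increment_stat_message` upserts the row for that bucket. A self-contained sketch of the same upsert against an in-memory SQLite database (illustrative; the table name is borrowed from the schema, the helper names are assumptions):

```python
import sqlite3

def hour_bucket(ts: int) -> int:
    # Round down to the top of the hour, as check_stat_message does.
    return ts - ts % 3600

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tb_stat_message (ts INTEGER, cnt INTEGER)")

def increment(conn, ts: int, cnt: int) -> None:
    # Manual check-then-insert/update upsert, mirroring the helper code.
    b = hour_bucket(ts)
    cur = conn.execute("SELECT cnt FROM tb_stat_message WHERE ts = ?", (b,))
    if cur.fetchone() is None:
        conn.execute("INSERT INTO tb_stat_message(ts, cnt) VALUES (?, ?)", (b, cnt))
    else:
        conn.execute("UPDATE tb_stat_message SET cnt = cnt + ? WHERE ts = ?", (cnt, b))
    conn.commit()

# Two messages in the same hour share one row; the next hour gets its own.
increment(conn, 7200 + 10, 1)
increment(conn, 7200 + 1800, 1)
increment(conn, 10800 + 5, 1)
rows = conn.execute("SELECT ts, cnt FROM tb_stat_message ORDER BY ts").fetchall()
# rows == [(7200, 2), (10800, 1)]
```

On SQLite 3.24+ the same effect could be had with a single `INSERT ... ON CONFLICT` statement, at the cost of declaring `ts` unique.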


@@ -0,0 +1,18 @@
CREATE TABLE IF NOT EXISTS tb_session(
qq_id VARCHAR(32) PRIMARY KEY,
history TEXT
);
CREATE TABLE IF NOT EXISTS tb_stat_session(
platform VARCHAR(32),
session_id VARCHAR(32),
cnt INTEGER
);
CREATE TABLE IF NOT EXISTS tb_stat_message(
ts INTEGER,
cnt INTEGER
);
CREATE TABLE IF NOT EXISTS tb_stat_platform(
ts INTEGER,
platform VARCHAR(32),
cnt INTEGER
);


@@ -1,86 +0,0 @@
import sqlite3
import yaml
# TODO: cache prompts in the database
class dbConn():
def __init__(self):
# Read parameters; support Chinese text
conn = sqlite3.connect("data.db")
conn.text_factory=str
self.conn = conn
c = conn.cursor()
c.execute(
'''
CREATE TABLE IF NOT EXISTS tb_session(
qq_id VARCHAR(32) PRIMARY KEY,
history TEXT
)
'''
)
conn.commit()
def insert_session(self, qq_id, history):
conn = self.conn
c = conn.cursor()
c.execute(
'''
INSERT INTO tb_session(qq_id, history) VALUES (?, ?)
''', (qq_id, history)
)
conn.commit()
def update_session(self, qq_id, history):
conn = self.conn
c = conn.cursor()
c.execute(
'''
UPDATE tb_session SET history = ? WHERE qq_id = ?
''', (history, qq_id)
)
conn.commit()
def get_session(self, qq_id):
conn = self.conn
c = conn.cursor()
c.execute(
'''
SELECT * FROM tb_session WHERE qq_id = ?
''', (qq_id, )
)
return c.fetchone()
def get_all_session(self):
conn = self.conn
c = conn.cursor()
c.execute(
'''
SELECT * FROM tb_session
'''
)
return c.fetchall()
def check_session(self, qq_id):
conn = self.conn
c = conn.cursor()
c.execute(
'''
SELECT * FROM tb_session WHERE qq_id = ?
''', (qq_id, )
)
return c.fetchone() is not None
def delete_session(self, qq_id):
conn = self.conn
c = conn.cursor()
c.execute(
'''
DELETE FROM tb_session WHERE qq_id = ?
''', (qq_id, )
)
conn.commit()
def close(self):
self.conn.close()


@@ -1,678 +0,0 @@
import botpy
from botpy.message import Message
from botpy.types.message import Reference
import re
from botpy.message import DirectMessage
import json
import threading
import asyncio
import time
import requests
import util.unfit_words as uw
import os
import sys
from cores.qqbot.personality import personalities
from addons.baidu_aip_judge import BaiduJudge
from model.platform.qqchan import QQChan
from model.platform.qq import QQ
from nakuru import (
CQHTTP,
GroupMessage,
GroupMemberIncrease,
FriendMessage
)
from nakuru.entities.components import Plain,At
# QQBotClient instance
client = ''
# ChatGPT instance
global chatgpt
# Cached sessions
session_dict = {}
# Max cached tokens; change in configs/config.yaml
max_tokens = 2000
# Configuration
config = {}
# Statistics
count = {}
# Statistics file
stat_file = ''
# Default for per-user session mode
uniqueSession = False
# Logging
logf = open('log.log', 'a+', encoding='utf-8')
# Whether to upload logs; only counts such as the number of guilds are uploaded
is_upload_log = True
# User message frequency
user_frequency = {}
# Default time window
frequency_time = 60
# Default count
frequency_count = 2
# Announcement (customizable):
announcement = ""
# Direct-message mode
direct_message_mode = True
# PyInstaller compatibility
abs_path = os.path.dirname(os.path.realpath(sys.argv[0])) + '/'
# Version
version = '3.0'
# Language models
REV_CHATGPT = 'rev_chatgpt'
OPENAI_OFFICIAL = 'openai_official'
REV_ERNIE = 'rev_ernie'
REV_EDGEGPT = 'rev_edgegpt'
provider = None
chosen_provider = None
# Reverse-engineered library object
rev_chatgpt = None
# GPT configuration
gpt_config = {}
# Baidu content moderation instance
baidu_judge = None
# Reply prefix
reply_prefix = {}
# Keyword replies
keywords = {}
# QQ Channel bot
qqchannel_bot = None
PLATFORM_QQCHAN = 'qqchan'
qqchan_loop = None
# QQ bot
gocq_bot = None
PLATFORM_GOCQ = 'gocq'
gocq_app = CQHTTP(
host="127.0.0.1",
port=6700,
http_port=5700,
)
gocq_loop = None
nick_qq = "ai "
bing_cache_loop = None
def new_sub_thread(func, args=()):
thread = threading.Thread(target=func, args=args, daemon=True)
thread.start()
# Record statistics to disk
def toggle_count(at: bool, message):
global stat_file
try:
if str(message.guild_id) not in count:
count[str(message.guild_id)] = {
'count': 1,
'direct_count': 1,
}
else:
count[str(message.guild_id)]['count'] += 1
if not at:
count[str(message.guild_id)]['direct_count'] += 1
stat_file = open(abs_path+"configs/stat", 'w', encoding='utf-8')
stat_file.write(json.dumps(count))
stat_file.flush()
stat_file.close()
except BaseException:
pass
# Upload statistics and check for updates
def upload():
global object_id
global version
while True:
addr = ''
try:
# Unique user identifier (public IP)
addr = requests.get('http://myip.ipip.net', timeout=5).text
except BaseException:
pass
try:
ts = str(time.time())
guild_count, guild_msg_count, guild_direct_msg_count, session_count = get_stat()
headers = {
'X-LC-Id': 'UqfXTWW15nB7iMT0OHvYrDFb-gzGzoHsz',
'X-LC-Key': 'QAZ1rQLY1ZufHrZlpuUiNff7',
'Content-Type': 'application/json'
}
key_stat = chatgpt.get_key_stat()
d = {"data": {'version': version, "guild_count": guild_count, "guild_msg_count": guild_msg_count, "guild_direct_msg_count": guild_direct_msg_count, "session_count": session_count, 'addr': addr, 'key_stat':key_stat}}
d = json.dumps(d).encode("utf-8")
res = requests.put(f'https://uqfxtww1.lc-cn-n1-shared.com/1.1/classes/bot_record/{object_id}', headers = headers, data = d)
if json.loads(res.text)['code'] == 1:
print("[System] New User.")
res = requests.post(f'https://uqfxtww1.lc-cn-n1-shared.com/1.1/classes/bot_record', headers = headers, data = d)
object_id = json.loads(res.text)['objectId']
object_id_file = open(abs_path+"configs/object_id", 'w+', encoding='utf-8')
object_id_file.write(str(object_id))
object_id_file.flush()
object_id_file.close()
except BaseException as e:
pass
# Upload once every 2 hours
time.sleep(60*60*2)
'''
Initialize the bot
'''
def initBot(cfg, prov):
global chatgpt, provider, rev_chatgpt, baidu_judge, rev_edgegpt, chosen_provider
global reply_prefix, gpt_config, config, uniqueSession, frequency_count, frequency_time,announcement, direct_message_mode, version
global command_openai_official, command_rev_chatgpt, command_rev_edgegpt,reply_prefix, keywords
provider = prov
config = cfg
if 'reply_prefix' in cfg:
reply_prefix = cfg['reply_prefix']
# Language model providers
if REV_CHATGPT in prov:
if cfg['rev_ChatGPT']['enable']:
if 'account' in cfg['rev_ChatGPT']:
from model.provider.provider_rev_chatgpt import ProviderRevChatGPT
from model.command.command_rev_chatgpt import CommandRevChatGPT
rev_chatgpt = ProviderRevChatGPT(cfg['rev_ChatGPT'])
command_rev_chatgpt = CommandRevChatGPT(cfg['rev_ChatGPT'])
chosen_provider = REV_CHATGPT
else:
input("[System-err] 请退出本程序, 然后在配置文件中填写rev_ChatGPT相关配置")
if REV_EDGEGPT in prov:
if cfg['rev_edgegpt']['enable']:
from model.provider.provider_rev_edgegpt import ProviderRevEdgeGPT
from model.command.command_rev_edgegpt import CommandRevEdgeGPT
rev_edgegpt = ProviderRevEdgeGPT()
command_rev_edgegpt = CommandRevEdgeGPT(rev_edgegpt)
chosen_provider = REV_EDGEGPT
if OPENAI_OFFICIAL in prov:
if cfg['openai']['key'] is not None:
from model.provider.provider_openai_official import ProviderOpenAIOfficial
from model.command.command_openai_official import CommandOpenAIOfficial
chatgpt = ProviderOpenAIOfficial(cfg['openai'])
command_openai_official = CommandOpenAIOfficial(chatgpt)
chosen_provider = OPENAI_OFFICIAL
# Load keyword replies
if os.path.exists("keyword.json"):
with open("keyword.json", 'r', encoding='utf-8') as f:
keywords = json.load(f)
# Check the saved provider preference
if os.path.exists("provider_preference.txt"):
with open("provider_preference.txt", 'r', encoding='utf-8') as f:
res = f.read()
if res in prov:
chosen_provider = res
# Baidu content moderation
if 'baidu_aip' in cfg and 'enable' in cfg['baidu_aip'] and cfg['baidu_aip']['enable']:
try:
baidu_judge = BaiduJudge(cfg['baidu_aip'])
print("[System] 百度内容审核初始化成功")
except BaseException as e:
input("[System] 百度内容审核初始化失败: " + str(e))
exit()
# Statistics upload
if is_upload_log:
# Read object_id
global object_id
if not os.path.exists(abs_path+"configs/object_id"):
with open(abs_path+"configs/object_id", 'w', encoding='utf-8') as f:
f.write("")
object_id_file = open(abs_path+"configs/object_id", 'r', encoding='utf-8')
object_id = object_id_file.read()
object_id_file.close()
# Start the upload timer thread
threading.Thread(target=upload, daemon=True).start()
# Load the direct-message mode setting
if 'direct_message_mode' in cfg:
direct_message_mode = cfg['direct_message_mode']
print("[System] 私聊功能: "+str(direct_message_mode))
# Load the rate-limit settings
if 'limit' in cfg:
print('[System] 发言频率配置: '+str(cfg['limit']))
if 'count' in cfg['limit']:
frequency_count = cfg['limit']['count']
if 'time' in cfg['limit']:
frequency_time = cfg['limit']['time']
# Load the announcement setting
if 'notice' in cfg:
print('[System] 公告配置: '+cfg['notice'])
announcement += cfg['notice']
try:
if 'uniqueSessionMode' in cfg and cfg['uniqueSessionMode']:
uniqueSession = True
else:
uniqueSession = False
print("[System] 独立会话: " + str(uniqueSession))
if 'dump_history_interval' in cfg:
print("[System] 历史记录转储时间周期: " + cfg['dump_history_interval'] + "分钟")
except BaseException:
print("[System-Error] 读取uniqueSessionMode/version/dump_history_interval配置文件失败, 使用默认值。")
print(f"[System] QQ开放平台AppID: {cfg['qqbot']['appid']} 令牌: {cfg['qqbot']['token']}")
print("\n[System] 如果有任何问题, 请在 https://github.com/Soulter/QQChannelChatGPT 上提交issue说明问题或者添加QQ905617992")
print("[System] 请给 https://github.com/Soulter/QQChannelChatGPT 点个star!")
if chosen_provider is None:
print("[System-Warning] 检测到没有启动任何一个语言模型。请至少在配置文件中启用一个语言模型。")
# Load command settings (cmd_config.json)
if os.path.exists("cmd_config.json"):
with open("cmd_config.json", 'r', encoding='utf-8') as f:
cmd_config = json.load(f)
# QQ bot nickname
if 'nick_qq' in cmd_config:
global nick_qq
nick_qq = cmd_config['nick_qq']
thread_inst = None
# QQ guild (channel) bot
if 'qqbot' in cfg and cfg['qqbot']['enable']:
print("[System] 启用QQ频道机器人")
global qqchannel_bot, qqchan_loop
qqchannel_bot = QQChan()
qqchan_loop = asyncio.new_event_loop()
thread_inst = threading.Thread(target=run_qqchan_bot, args=(cfg, qqchan_loop, qqchannel_bot), daemon=False)
thread_inst.start()
# thread.join()
# GOCQ
if 'gocqbot' in cfg and cfg['gocqbot']['enable']:
print("[System] 启用QQ机器人")
global gocq_app, gocq_bot, gocq_loop
gocq_bot = QQ()
gocq_loop = asyncio.new_event_loop()
thread_inst = threading.Thread(target=run_gocq_bot, args=(gocq_loop, gocq_bot, gocq_app), daemon=False)
thread_inst.start()
if thread_inst is None:
input("[System-Error] 没有启用任何机器人,程序退出")
exit()
thread_inst.join()
def run_qqchan_bot(cfg, loop, qqchannel_bot):
asyncio.set_event_loop(loop)
intents = botpy.Intents(public_guild_messages=True, direct_message=True)
global client
client = botClient(intents=intents)
try:
qqchannel_bot.run_bot(client, cfg['qqbot']['appid'], cfg['qqbot']['token'])
except BaseException as e:
input(f"\n[System-Error] 启动QQ频道机器人时出现错误原因如下{e}\n可能是没有填写QQBOT appid和token请在config中完善你的appid和token\n配置教程https://soulter.top/posts/qpdg.html\n")
def run_gocq_bot(loop, gocq_bot, gocq_app):
asyncio.set_event_loop(loop)
global gocq_client
gocq_client = gocqClient()
try:
gocq_bot.run_bot(gocq_app)
except BaseException as e:
input("启动QQ机器人出现错误"+str(e))
'''
Check a user's message frequency (rate limit)
'''
def check_frequency(id) -> bool:
ts = int(time.time())
if id in user_frequency:
if ts-user_frequency[id]['time'] > frequency_time:
user_frequency[id]['time'] = ts
user_frequency[id]['count'] = 1
return True
else:
if user_frequency[id]['count'] >= frequency_count:
return False
else:
user_frequency[id]['count']+=1
return True
else:
t = {'time':ts,'count':1}
user_frequency[id] = t
return True
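The rate limiter above allows at most `frequency_count` messages per user within each `frequency_time`-second window. A minimal, standalone reproduction of that logic (simplified but behaviorally equivalent):

```python
import time

user_frequency = {}
frequency_time = 60
frequency_count = 2

def check_frequency(uid) -> bool:
    ts = int(time.time())
    entry = user_frequency.get(uid)
    # New user, or the previous window expired: start a fresh window.
    if entry is None or ts - entry['time'] > frequency_time:
        user_frequency[uid] = {'time': ts, 'count': 1}
        return True
    # Window still open: reject once the quota is exhausted.
    if entry['count'] >= frequency_count:
        return False
    entry['count'] += 1
    return True
```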
def save_provider_preference(chosen_provider):
with open('provider_preference.txt', 'w') as f:
f.write(chosen_provider)
'''
Generic reply helper
'''
def send_message(platform, message, res, msg_ref = None, image = None, gocq_loop = None, qqchannel_bot = None, gocq_bot = None):
if platform == PLATFORM_QQCHAN:
if image is not None:
qqchannel_bot.send_qq_msg(message, res, image_mode=True, msg_ref=msg_ref)
else:
qqchannel_bot.send_qq_msg(message, res, msg_ref=msg_ref)
if platform == PLATFORM_GOCQ: asyncio.run_coroutine_threadsafe(gocq_bot.send_qq_msg(message, res), gocq_loop).result()
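For the gocq platform, `send_message` submits the bot's coroutine to an event loop running in another thread via `asyncio.run_coroutine_threadsafe` and blocks on `.result()`. A standalone sketch of that pattern (the `send` coroutine is an illustrative stand-in for `gocq_bot.send_qq_msg`):

```python
import asyncio
import threading

# Run an event loop in a background thread, as the bot does for gocq_loop.
loop = asyncio.new_event_loop()
threading.Thread(target=loop.run_forever, daemon=True).start()

async def send(msg):
    # Stand-in for the real send coroutine.
    return f"sent: {msg}"

# Submit the coroutine from this (non-loop) thread and block until done.
result = asyncio.run_coroutine_threadsafe(send("hello"), loop).result()
loop.call_soon_threadsafe(loop.stop)
```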
'''
Handle a message
group: group-chat mode
'''
def oper_msg(message, group=False, msg_ref = None, platform = None):
global session_dict, provider
qq_msg = ''
session_id = ''
user_id = ''
user_name = ''
global chosen_provider, reply_prefix, keywords, qqchannel_bot, gocq_bot, gocq_loop, bing_cache_loop
role = "member" # user role
hit = False # whether a command was matched
command_result = () # result returned by the command
if platform == PLATFORM_QQCHAN:
print("[QQCHAN-BOT] 接收到消息:"+ str(message.content))
user_id = message.author.id
user_name = message.author.username
global qqchan_loop
if platform == PLATFORM_GOCQ:
if isinstance(message.message[0], Plain):
print("[GOCQ-BOT] 接收到消息:"+ str(message.message[0].text))
elif isinstance(message.message[0], At):
print("[GOCQ-BOT] 接收到消息:"+ str(message.message[1].text))
user_id = message.user_id
user_name = message.user_id
global gocq_loop
if chosen_provider is None:
send_message(platform, message, f"没有启动任何一个语言模型。请至少在配置文件中启用一个语言模型。", msg_ref=msg_ref, gocq_loop=gocq_loop, qqchannel_bot=qqchannel_bot, gocq_bot=gocq_bot)
# Check message frequency
if not check_frequency(user_id):
qqchannel_bot.send_qq_msg(message, f'{user_name}的发言超过频率限制(╯▔皿▔)╯。\n{frequency_time}秒内只能提问{frequency_count}次。')
return
if platform == PLATFORM_QQCHAN:
if group:
# In-guild message
# Strip @ mentions
qq_msg = message.content
lines = qq_msg.splitlines()
for i in range(len(lines)):
lines[i] = re.sub(r"<@!\d+>", "", lines[i])
qq_msg = "\n".join(lines).lstrip().strip()
if uniqueSession:
session_id = user_id
else:
session_id = message.channel_id
# Determine the user's role
if "2" in message.member.roles or "4" in message.member.roles or "5" in message.member.roles:
print("[QQCHAN-BOT] 检测到管理员身份")
role = "admin"
else:
role = "member"
else:
# Direct message
qq_msg = message.content
session_id = user_id
if platform == PLATFORM_GOCQ:
if group:
if isinstance(message.message[0], Plain):
qq_msg = str(message.message[0].text)
elif isinstance(message.message[0], At):
qq_msg = str(message.message[1].text).strip()
else:
return
session_id = message.group_id
else:
qq_msg = message.message[0].text
session_id = message.user_id
# TODO: temporarily treat everyone as admin
role = "admin"
logf.write("[QQBOT] "+ qq_msg+'\n')
logf.flush()
# Keyword replies
for k in keywords:
if qq_msg == k:
send_message(platform, message, keywords[k], msg_ref=msg_ref, gocq_loop=gocq_loop, qqchannel_bot=qqchannel_bot, gocq_bot=gocq_bot)
return
# Keyword interceptor
for i in uw.unfit_words_q:
matches = re.match(i, qq_msg.strip(), re.I | re.M)
if matches:
send_message(platform, message, f"你的提问得到的回复未通过【自有关键词拦截】服务, 不予回复。", msg_ref=msg_ref, gocq_loop=gocq_loop, qqchannel_bot=qqchannel_bot, gocq_bot=gocq_bot)
return
if baidu_judge is not None:
check, msg = baidu_judge.judge(qq_msg)
if not check:
send_message(platform, message, f"你的提问得到的回复未通过【百度AI内容审核】服务, 不予回复。\n\n{msg}", msg_ref=msg_ref, gocq_loop=gocq_loop, qqchannel_bot=qqchannel_bot, gocq_bot=gocq_bot)
return
# Check for a language-model switch request
temp_switch = ""
if qq_msg.startswith('/bing') or qq_msg.startswith('/gpt') or qq_msg.startswith('/revgpt'):
target = chosen_provider
if qq_msg.startswith('/bing'):
target = REV_EDGEGPT
elif qq_msg.startswith('/gpt'):
target = OPENAI_OFFICIAL
elif qq_msg.startswith('/revgpt'):
target = REV_CHATGPT
l = qq_msg.split(' ')
if len(l) > 1 and l[1] != "":
# Temporary chat mode: remember the current model and switch back after replying
temp_switch = chosen_provider
chosen_provider = target
qq_msg = l[1]
else:
# if role != "admin":
# send_message(platform, message, "你没有权限更换语言模型。", msg_ref=msg_ref, gocq_loop=gocq_loop, qqchannel_bot=qqchannel_bot, gocq_bot=gocq_bot)
# return
chosen_provider = target
save_provider_preference(chosen_provider)
send_message(platform, message, f"已切换至【{chosen_provider}", msg_ref=msg_ref, gocq_loop=gocq_loop, qqchannel_bot=qqchannel_bot, gocq_bot=gocq_bot)
return
chatgpt_res = ""
if chosen_provider == OPENAI_OFFICIAL:
hit, command_result = command_openai_official.check_command(qq_msg, session_id, user_name, role, platform=platform)
# hit: whether a command was triggered.
if not hit:
# Query ChatGPT for a result
try:
chatgpt_res = chatgpt.text_chat(qq_msg, session_id)
if OPENAI_OFFICIAL in reply_prefix:
chatgpt_res = reply_prefix[OPENAI_OFFICIAL] + chatgpt_res
except (BaseException) as e:
print("[System-Err] OpenAI API错误。原因如下:\n"+str(e))
send_message(platform, message, f"OpenAI API错误。原因如下\n{str(e)} \n前往官方频道反馈~", msg_ref=msg_ref, gocq_loop=gocq_loop, qqchannel_bot=qqchannel_bot, gocq_bot=gocq_bot)
elif chosen_provider == REV_CHATGPT:
hit, command_result = command_rev_chatgpt.check_command(qq_msg, role, platform=platform)
if not hit:
try:
chatgpt_res = str(rev_chatgpt.text_chat(qq_msg))
if REV_CHATGPT in reply_prefix:
chatgpt_res = reply_prefix[REV_CHATGPT] + chatgpt_res
except BaseException as e:
print("[System-Err] Rev ChatGPT API错误。原因如下:\n"+str(e))
send_message(platform, message, f"Rev ChatGPT API错误。原因如下\n{str(e)} \n前往官方频道反馈~", msg_ref=msg_ref, gocq_loop=gocq_loop, qqchannel_bot=qqchannel_bot, gocq_bot=gocq_bot)
elif chosen_provider == REV_EDGEGPT:
if bing_cache_loop is None:
if platform == PLATFORM_GOCQ:
bing_cache_loop = gocq_loop
elif platform == PLATFORM_QQCHAN:
bing_cache_loop = qqchan_loop
hit, command_result = command_rev_edgegpt.check_command(qq_msg, bing_cache_loop, role, platform=platform)
if not hit:
try:
while rev_edgegpt.is_busy():
time.sleep(1)
res, res_code = asyncio.run_coroutine_threadsafe(rev_edgegpt.text_chat(qq_msg), bing_cache_loop).result()
if res_code == 0: # Bing refused to continue the topic; reset the session and retry.
send_message(platform, message, "Bing不想继续话题了, 正在自动重置会话并重试。", msg_ref=msg_ref, gocq_loop=gocq_loop, qqchannel_bot=qqchannel_bot, gocq_bot=gocq_bot)
asyncio.run_coroutine_threadsafe(rev_edgegpt.forget(), bing_cache_loop).result()
res, res_code = asyncio.run_coroutine_threadsafe(rev_edgegpt.text_chat(qq_msg), bing_cache_loop).result()
if res_code == 0: # Bing still refuses; the prompt itself is likely the problem.
asyncio.run_coroutine_threadsafe(rev_edgegpt.forget(), bing_cache_loop).result()
send_message(platform, message, "Bing仍然不想继续话题, 会话已重置, 请检查您的提问后重试。", msg_ref=msg_ref, gocq_loop=gocq_loop, qqchannel_bot=qqchannel_bot, gocq_bot=gocq_bot)
res = ""
chatgpt_res = str(res)
if REV_EDGEGPT in reply_prefix:
chatgpt_res = reply_prefix[REV_EDGEGPT] + chatgpt_res
except BaseException as e:
print("[System-Err] Rev NewBing API错误。原因如下:\n"+str(e))
send_message(platform, message, f"Rev NewBing API错误。原因如下\n{str(e)} \n前往官方频道反馈~", msg_ref=msg_ref, gocq_loop=gocq_loop, qqchannel_bot=qqchannel_bot, gocq_bot=gocq_bot)
# Switch back to the original language model
if temp_switch != "":
chosen_provider = temp_switch
# Command reply
if hit:
# command_result is a tuple: (success, text result, command type)
if command_result is not None:
command = command_result[2]
if command == "keyword":
with open("keyword.json", "r", encoding="utf-8") as f:
keywords = json.load(f)
# QQ nickname
if command == "nick":
with open("cmd_config.json", "r", encoding="utf-8") as f:
global nick_qq
nick_qq = json.load(f)["nick_qq"]
if command_result[0]:
# Is it a drawing command?
if len(command_result) == 3 and command_result[2] == 'draw':
for i in command_result[1]:
send_message(platform, message, i, msg_ref=msg_ref, gocq_loop=gocq_loop, qqchannel_bot=qqchannel_bot, gocq_bot=gocq_bot)
else:
try:
send_message(platform, message, command_result[1], msg_ref=msg_ref, gocq_loop=gocq_loop, qqchannel_bot=qqchannel_bot, gocq_bot=gocq_bot)
except BaseException as e:
t = command_result[1].replace(".", " . ")
send_message(platform, message, t, msg_ref=msg_ref, gocq_loop=gocq_loop, qqchannel_bot=qqchannel_bot, gocq_bot=gocq_bot)
else:
send_message(platform, message, f"指令调用错误: \n{command_result[1]}", msg_ref=msg_ref, gocq_loop=gocq_loop, qqchannel_bot=qqchannel_bot, gocq_bot=gocq_bot)
return
if chatgpt_res == "":
return
# Log the reply
logf.write(f"{reply_prefix} {str(chatgpt_res)}\n")
logf.flush()
# Sensitive-content filtering
# Mask unfit words
judged_res = chatgpt_res
for i in uw.unfit_words:
judged_res = re.sub(i, "***", judged_res)
# Second-pass review via the Baidu content-moderation service
if baidu_judge is not None:
check, msg = baidu_judge.judge(judged_res)
if not check:
send_message(platform, message, f"你的提问得到的回复【百度内容审核】未通过,不予回复。\n\n{msg}", msg_ref=msg_ref, gocq_loop=gocq_loop, qqchannel_bot=qqchannel_bot, gocq_bot=gocq_bot)
return
# Send the QQ message
try:
send_message(platform, message, chatgpt_res, msg_ref=msg_ref, gocq_loop=gocq_loop, qqchannel_bot=qqchannel_bot, gocq_bot=gocq_bot)
except BaseException as e:
print("回复消息错误: \n"+str(e))
'''
Get statistics
'''
def get_stat():
try:
f = open(abs_path+"configs/stat", "r", encoding="utf-8")
fjson = json.loads(f.read())
f.close()
guild_count = 0
guild_msg_count = 0
guild_direct_msg_count = 0
for k,v in fjson.items():
guild_count += 1
guild_msg_count += v['count']
guild_direct_msg_count += v['direct_count']
session_count = 0
f = open(abs_path+"configs/session", "r", encoding="utf-8")
fjson = json.loads(f.read())
f.close()
for k,v in fjson.items():
session_count += 1
return guild_count, guild_msg_count, guild_direct_msg_count, session_count
except:
return -1, -1, -1, -1
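`get_stat` aggregates the per-guild counters that `toggle_count` writes to `configs/stat`. The aggregation can be sketched against a hypothetical sample of that JSON (the guild names here are illustrative):

```python
import json

# Hypothetical contents of configs/stat: one entry per guild with a total
# message counter and a direct-message counter, as written by toggle_count.
stat_json = '{"guild_a": {"count": 10, "direct_count": 3}, "guild_b": {"count": 5, "direct_count": 1}}'
stats = json.loads(stat_json)

guild_count = len(stats)
guild_msg_count = sum(v["count"] for v in stats.values())
guild_direct_msg_count = sum(v["direct_count"] for v in stats.values())
print(guild_count, guild_msg_count, guild_direct_msg_count)  # 2 15 4
```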
# QQ guild (channel) bot client
class botClient(botpy.Client):
# On @-message in a guild
async def on_at_message_create(self, message: Message):
toggle_count(at=True, message=message)
message_reference = Reference(message_id=message.id, ignore_get_message_error=False)
new_sub_thread(oper_msg, (message, True, message_reference, PLATFORM_QQCHAN))
# On direct message
async def on_direct_message_create(self, message: DirectMessage):
if direct_message_mode:
toggle_count(at=False, message=message)
new_sub_thread(oper_msg, (message, False, None, PLATFORM_QQCHAN))
# QQ bot (go-cqhttp)
class gocqClient():
# On group message
@gocq_app.receiver("GroupMessage")
async def _(app: CQHTTP, source: GroupMessage):
global nick_qq
if isinstance(source.message[0], Plain):
if source.message[0].text.startswith(nick_qq):
source.message[0].text = source.message[0].text[len(nick_qq):]
new_sub_thread(oper_msg, (source, True, None, PLATFORM_GOCQ))
if isinstance(source.message[0], At):
if source.message[0].qq == source.self_id:
if source.message[1].text.startswith(nick_qq):
source.message[1].text = source.message[1].text[len(nick_qq):]
new_sub_thread(oper_msg, (source, True, None, PLATFORM_GOCQ))
else:
return
@gocq_app.receiver("FriendMessage")
async def _(app: CQHTTP, source: FriendMessage):
if isinstance(source.message[0], Plain):
new_sub_thread(oper_msg, (source, False, None, PLATFORM_GOCQ))
else:
return
@gocq_app.receiver("GroupMemberIncrease")
async def _(app: CQHTTP, source: GroupMemberIncrease):
global nick_qq
await app.sendGroupMessage(source.group_id, [
Plain(text=f"欢迎加入本群!\n欢迎给https://github.com/Soulter/QQChannelChatGPT项目一个Star😊~\n@我输入help查看帮助~\n我叫{nick_qq}, 你也可以以【{nick_qq}+问题】的格式来提醒我并问我问题哦~\n")
])
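The `gocqClient` receivers above strip the configured nickname prefix (`nick_qq`) before handing a group message to `oper_msg`. A standalone sketch of that prefix handling (the helper name `strip_nick` is illustrative, not from the source):

```python
nick_qq = "ai "

def strip_nick(text: str) -> str:
    # Remove the bot nickname if the message starts with it; otherwise
    # return the text unchanged, mirroring the receiver logic above.
    return text[len(nick_qq):] if text.startswith(nick_qq) else text
```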

dashboard/__init__.py Normal file

@@ -0,0 +1,11 @@
from dataclasses import dataclass, field
@dataclass
class DashBoardData():
stats: dict = field(default_factory=dict)
configs: dict = field(default_factory=dict)
@dataclass
class Response():
status: str
message: str
data: dict
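The `Response` dataclass above is the payload shape for the dashboard API. A minimal sketch of serializing it to JSON (the surrounding endpoint behavior is an assumption):

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class Response:
    status: str
    message: str
    data: dict

# Build a response and serialize it, as an API handler would.
r = Response(status="success", message="ok", data={"stats": {}})
payload = json.dumps(asdict(r))
```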

1
dashboard/dist/_redirects vendored Normal file

@@ -0,0 +1 @@
/* /index.html 200
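The single rule above is a Netlify-style `_redirects` catch-all for a single-page app: every request path is rewritten to `index.html` with HTTP 200, so client-side routing handles the URL. The general format is:

```
# <from> <to> <status>
/*    /index.html    200
```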


@@ -0,0 +1 @@
.page-breadcrumb .v-toolbar{background:transparent}


@@ -0,0 +1 @@
import{x as i,o as l,c as _,w as s,a as e,f as a,J as m,V as c,b as t,t as u,ae as p,B as n,af as o,j as f}from"./index-5ac7c267.js";const b={class:"text-h3"},h={class:"d-flex align-center"},g={class:"d-flex align-center"},V=i({__name:"BaseBreadcrumb",props:{title:String,breadcrumbs:Array,icon:String},setup(d){const r=d;return(x,B)=>(l(),_(c,{class:"page-breadcrumb mb-1 mt-1"},{default:s(()=>[e(a,{cols:"12",md:"12"},{default:s(()=>[e(m,{variant:"outlined",elevation:"0",class:"px-4 py-3 withbg"},{default:s(()=>[e(c,{"no-gutters":"",class:"align-center"},{default:s(()=>[e(a,{md:"5"},{default:s(()=>[t("h3",b,u(r.title),1)]),_:1}),e(a,{md:"7",sm:"12",cols:"12"},{default:s(()=>[e(p,{items:r.breadcrumbs,class:"text-h5 justify-md-end pa-1"},{divider:s(()=>[t("div",h,[e(n(o),{size:"17"})])]),prepend:s(()=>[e(f,{size:"small",icon:"mdi-home",class:"text-secondary mr-2"}),t("div",g,[e(n(o),{size:"17"})])]),_:1},8,["items"])]),_:1})]),_:1})]),_:1})]),_:1})]),_:1}))}});export{V as _};


@@ -0,0 +1 @@
import{x as e,o as a,c as t,w as o,a as s,B as n,Z as r,W as c}from"./index-5ac7c267.js";const f=e({__name:"BlankLayout",setup(p){return(u,_)=>(a(),t(c,null,{default:o(()=>[s(n(r))]),_:1}))}});export{f as default};


@@ -0,0 +1 @@
import{_ as m}from"./BaseBreadcrumb.vue_vue_type_style_index_0_lang-1875d383.js";import{_}from"./UiParentCard.vue_vue_type_script_setup_true_lang-b40a2daa.js";import{x as p,D as a,o as r,s,a as e,w as t,f as o,V as i,F as n,u as g,c as h,a0 as b,e as x,t as y}from"./index-5ac7c267.js";const P=p({__name:"ColorPage",setup(C){const c=a({title:"Colors Page"}),d=a([{title:"Utilities",disabled:!1,href:"#"},{title:"Colors",disabled:!0,href:"#"}]),u=a(["primary","lightprimary","secondary","lightsecondary","info","success","accent","warning","error","darkText","lightText","borderLight","inputBorder","containerBg"]);return(V,k)=>(r(),s(n,null,[e(m,{title:c.value.title,breadcrumbs:d.value},null,8,["title","breadcrumbs"]),e(i,null,{default:t(()=>[e(o,{cols:"12",md:"12"},{default:t(()=>[e(_,{title:"Color Palette"},{default:t(()=>[e(i,null,{default:t(()=>[(r(!0),s(n,null,g(u.value,(l,f)=>(r(),h(o,{md:"3",cols:"12",key:f},{default:t(()=>[e(b,{rounded:"md",class:"align-center justify-center d-flex",height:"100",width:"100%",color:l},{default:t(()=>[x("class: "+y(l),1)]),_:2},1032,["color"])]),_:2},1024))),128))]),_:1})]),_:1})]),_:1})]),_:1})],64))}});export{P as default};


@@ -0,0 +1 @@
import{o as l,s as o,u as c,c as n,w as u,Q as g,b as d,R as k,F as t,ac as h,O as p,t as m,a as V,ad as f,i as C,q as x,k as v,A as U}from"./index-5ac7c267.js";import{_ as w}from"./UiParentCard.vue_vue_type_script_setup_true_lang-b40a2daa.js";const S={__name:"ConfigDetailCard",props:{config:Array},setup(s){return(y,B)=>(l(!0),o(t,null,c(s.config,r=>(l(),n(w,{key:r.name,title:r.name,style:{"margin-bottom":"16px"}},{default:u(()=>[g(d("a",null,"No data",512),[[k,s.config.length===0]]),(l(!0),o(t,null,c(r.body,e=>(l(),o(t,null,[e.config_type==="item"?(l(),o(t,{key:0},[e.val_type==="bool"?(l(),n(h,{key:0,modelValue:e.value,"onUpdate:modelValue":a=>e.value=a,label:e.name,hint:e.description,color:"primary",inset:""},null,8,["modelValue","onUpdate:modelValue","label","hint"])):e.val_type==="str"?(l(),n(p,{key:1,modelValue:e.value,"onUpdate:modelValue":a=>e.value=a,label:e.name,hint:e.description,style:{"margin-bottom":"8px"},variant:"outlined"},null,8,["modelValue","onUpdate:modelValue","label","hint"])):e.val_type==="int"?(l(),n(p,{key:2,modelValue:e.value,"onUpdate:modelValue":a=>e.value=a,label:e.name,hint:e.description,style:{"margin-bottom":"8px"},variant:"outlined"},null,8,["modelValue","onUpdate:modelValue","label","hint"])):e.val_type==="list"?(l(),o(t,{key:3},[d("span",null,m(e.name),1),V(f,{modelValue:e.value,"onUpdate:modelValue":a=>e.value=a,chips:"",clearable:"",label:"请添加",multiple:"","prepend-icon":"mdi-tag-multiple-outline"},{selection:u(({attrs:a,item:i,select:b,selected:_})=>[V(C,x(a,{"model-value":_,closable:"",onClick:b,"onClick:close":D=>y.remove(i)}),{default:u(()=>[d("strong",null,m(i),1)]),_:2},1040,["model-value","onClick","onClick:close"])]),_:2},1032,["modelValue","onUpdate:modelValue"])],64)):v("",!0)],64)):e.config_type==="divider"?(l(),n(U,{key:1,style:{"margin-top":"8px","margin-bottom":"8px"}})):v("",!0)],64))),256))]),_:2},1032,["title"]))),128))}};export{S as _};


@@ -0,0 +1 @@
import{_ as b}from"./UiParentCard.vue_vue_type_script_setup_true_lang-b40a2daa.js";import{x as h,o,c as u,w as t,a,a8 as y,b as c,K as x,e as f,t as g,G as V,A as w,L as S,a9 as $,J as B,s as _,d as v,F as d,u as p,f as G,V as T,ab as j,T as l}from"./index-5ac7c267.js";import{_ as m}from"./ConfigDetailCard-756c045d.js";const D={class:"d-sm-flex align-center justify-space-between"},C=h({__name:"ConfigGroupCard",props:{title:String},setup(e){const s=e;return(i,n)=>(o(),u(B,{variant:"outlined",elevation:"0",class:"withbg",style:{width:"50%"}},{default:t(()=>[a(y,{style:{padding:"10px 20px"}},{default:t(()=>[c("div",D,[a(x,null,{default:t(()=>[f(g(s.title),1)]),_:1}),a(V)])]),_:1}),a(w),a(S,null,{default:t(()=>[$(i.$slots,"default")]),_:3})]),_:3}))}}),I={style:{display:"flex","flex-direction":"row","justify-content":"space-between","align-items":"center","margin-bottom":"12px"}},N={style:{display:"flex","flex-direction":"row"}},R={style:{"margin-right":"10px",color:"black"}},F={style:{color:"#222"}},k=h({__name:"ConfigGroupItem",props:{title:String,desc:String,btnRoute:String,namespace:String},setup(e){const 
s=e;return(i,n)=>(o(),_("div",I,[c("div",N,[c("h3",R,g(s.title),1),c("p",F,g(s.desc),1)]),a(v,{to:s.btnRoute,color:"primary",class:"ml-2",style:{"border-radius":"10px"}},{default:t(()=>[f("配置")]),_:1},8,["to"])]))}}),L={style:{display:"flex","flex-direction":"row",padding:"16px",gap:"16px",width:"100%"}},P={name:"ConfigPage",components:{UiParentCard:b,ConfigGroupCard:C,ConfigGroupItem:k,ConfigDetailCard:m},data(){return{config_data:[],config_base:[],save_message_snack:!1,save_message:"",save_message_success:"",config_outline:[],namespace:""}},mounted(){this.getConfig()},methods:{switchConfig(e){l.get("/api/configs?namespace="+e).then(s=>{this.namespace=e,this.config_data=s.data.data,console.log(this.config_data)}).catch(s=>{save_message=s,save_message_snack=!0,save_message_success="error"})},getConfig(){l.get("/api/config_outline").then(e=>{this.config_outline=e.data.data,console.log(this.config_outline)}).catch(e=>{save_message=e,save_message_snack=!0,save_message_success="error"}),l.get("/api/configs").then(e=>{this.config_base=e.data.data,console.log(this.config_data)}).catch(e=>{save_message=e,save_message_snack=!0,save_message_success="error"})},updateConfig(){l.post("/api/configs",{base_config:this.config_base,config:this.config_data,namespace:this.namespace}).then(e=>{e.data.status==="success"?(this.save_message=e.data.message,this.save_message_snack=!0,this.save_message_success="success"):(this.save_message=e.data.message,this.save_message_snack=!0,this.save_message_success="error")}).catch(e=>{this.save_message=e,this.save_message_snack=!0,this.save_message_success="error"})}}},J=Object.assign(P,{setup(e){return(s,i)=>(o(),_(d,null,[a(T,null,{default:t(()=>[c("div",L,[(o(!0),_(d,null,p(s.config_outline,n=>(o(),u(C,{key:n.name,title:n.name},{default:t(()=>[(o(!0),_(d,null,p(n.body,r=>(o(),u(k,{title:r.title,desc:r.desc,namespace:r.namespace,onClick:U=>s.switchConfig(r.namespace)},null,8,["title","desc","namespace","onClick"]))),256))]),_:2},1032,["title"])))
,128))]),a(G,{cols:"12",md:"12"},{default:t(()=>[a(m,{config:s.config_data},null,8,["config"]),a(m,{config:s.config_base},null,8,["config"])]),_:1})]),_:1}),a(v,{icon:"mdi-content-save",size:"x-large",style:{position:"fixed",right:"52px",bottom:"52px"},color:"darkprimary",onClick:s.updateConfig},null,8,["onClick"]),a(j,{timeout:2e3,elevation:"24",color:s.save_message_success,modelValue:s.save_message_snack,"onUpdate:modelValue":i[0]||(i[0]=n=>s.save_message_snack=n)},{default:t(()=>[f(g(s.save_message),1)]),_:1},8,["color","modelValue"])],64))}});export{J as default};

File diff suppressed because one or more lines are too long


@@ -0,0 +1,32 @@
/**
* Copyright (c) 2014 The xterm.js authors. All rights reserved.
* Copyright (c) 2012-2013, Christopher Jeffrey (MIT License)
* https://github.com/chjj/term.js
* @license MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*
* Originally forked from (with the author's permission):
* Fabrice Bellard's javascript vt100 for jslinux:
* http://bellard.org/jslinux/
* Copyright (c) 2011 Fabrice Bellard
* The original design remains. The terminal itself
* has been extended to include xterm CSI codes, among
* other features.
*/.xterm{cursor:text;position:relative;user-select:none;-ms-user-select:none;-webkit-user-select:none}.xterm.focus,.xterm:focus{outline:none}.xterm .xterm-helpers{position:absolute;top:0;z-index:5}.xterm .xterm-helper-textarea{padding:0;border:0;margin:0;position:absolute;opacity:0;left:-9999em;top:0;width:0;height:0;z-index:-5;white-space:nowrap;overflow:hidden;resize:none}.xterm .composition-view{background:#000;color:#fff;display:none;position:absolute;white-space:nowrap;z-index:1}.xterm .composition-view.active{display:block}.xterm .xterm-viewport{background-color:#000;overflow-y:scroll;cursor:default;position:absolute;right:0;left:0;top:0;bottom:0}.xterm .xterm-screen{position:relative}.xterm .xterm-screen canvas{position:absolute;left:0;top:0}.xterm .xterm-scroll-area{visibility:hidden}.xterm-char-measure-element{display:inline-block;visibility:hidden;position:absolute;top:0;left:-9999em;line-height:normal}.xterm.enable-mouse-events{cursor:default}.xterm.xterm-cursor-pointer,.xterm .xterm-cursor-pointer{cursor:pointer}.xterm.column-select.focus{cursor:crosshair}.xterm .xterm-accessibility,.xterm .xterm-message{position:absolute;left:0;top:0;bottom:0;right:0;z-index:10;color:transparent;pointer-events:none}.xterm .live-region{position:absolute;left:-9999px;width:1px;height:1px;overflow:hidden}.xterm-dim{opacity:1!important}.xterm-underline-1{text-decoration:underline}.xterm-underline-2{text-decoration:double underline}.xterm-underline-3{text-decoration:wavy underline}.xterm-underline-4{text-decoration:dotted underline}.xterm-underline-5{text-decoration:dashed underline}.xterm-overline{text-decoration:overline}.xterm-overline.xterm-underline-1{text-decoration:overline underline}.xterm-overline.xterm-underline-2{text-decoration:overline double underline}.xterm-overline.xterm-underline-3{text-decoration:overline wavy underline}.xterm-overline.xterm-underline-4{text-decoration:overline dotted underline}.xterm-overline.xterm-underline-5{text-decoration:overline 
dashed underline}.xterm-strikethrough{text-decoration:line-through}.xterm-screen .xterm-decoration-container .xterm-decoration{z-index:6;position:absolute}.xterm-screen .xterm-decoration-container .xterm-decoration.xterm-decoration-top-layer{z-index:7}.xterm-decoration-overview-ruler{z-index:8;position:absolute;top:0;right:0;pointer-events:none}.xterm-decoration-top{z-index:2;position:relative}

File diff suppressed because one or more lines are too long


@@ -0,0 +1 @@
.CardMediaWrapper{max-width:720px;margin:0 auto;position:relative}.CardMediaBuild{position:absolute;top:0;left:0;width:100%;animation:5s bounce ease-in-out infinite}.CardMediaParts{position:absolute;top:0;left:0;width:100%;animation:10s blink ease-in-out infinite}


@@ -0,0 +1 @@
import{_ as a}from"./_plugin-vue_export-helper-c27b6911.js";import{o,c,w as s,V as i,a as t,b as e,d as l,e as r,f as d}from"./index-5ac7c267.js";const n="/assets/img-error-bg-41f65efa.svg",_="/assets/img-error-blue-f50c8e77.svg",m="/assets/img-error-text-630dc36d.svg",g="/assets/img-error-purple-b97a483b.svg";const p={},u={class:"text-center"},f=e("div",{class:"CardMediaWrapper"},[e("img",{src:n,alt:"grid",class:"w-100"}),e("img",{src:_,alt:"grid",class:"CardMediaParts"}),e("img",{src:m,alt:"build",class:"CardMediaBuild"}),e("img",{src:g,alt:"build",class:"CardMediaBuild"})],-1),h=e("h1",{class:"text-h1"},"Something is wrong",-1),v=e("p",null,[e("small",null,[r("The page you are looking was moved, removed, "),e("br"),r("renamed, or might never exist! ")])],-1);function x(b,V){return o(),c(i,{"no-gutters":"",class:"h-100vh"},{default:s(()=>[t(d,{class:"d-flex align-center justify-center"},{default:s(()=>[e("div",u,[f,h,v,t(l,{variant:"flat",color:"primary",class:"mt-4",to:"/","prepend-icon":"mdi-home"},{default:s(()=>[r(" Home")]),_:1})])]),_:1})]),_:1})}const C=a(p,[["render",x]]);export{C as default};

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@@ -0,0 +1 @@
.custom-devider{border-color:#00000014!important}.googleBtn{border-color:#00000014;margin:30px 0 20px}.outlinedInput .v-field{border:1px solid rgba(0,0,0,.08);box-shadow:none}.orbtn{padding:2px 40px;border-color:#00000014;margin:20px 15px}.pwdInput{position:relative}.pwdInput .v-input__append{position:absolute;right:10px;top:50%;transform:translateY(-50%)}.loginForm .v-text-field .v-field--active input{font-weight:500}.loginBox{max-width:475px;margin:0 auto}


@@ -0,0 +1 @@
import{aw as _,x as d,D as n,o as c,s as m,a as f,w as p,Q as r,b as a,R as o,B as t,ax as h}from"./index-5ac7c267.js";const s={Sidebar_drawer:!0,Customizer_drawer:!1,mini_sidebar:!1,fontTheme:"Roboto",inputBg:!1},l=_({id:"customizer",state:()=>({Sidebar_drawer:s.Sidebar_drawer,Customizer_drawer:s.Customizer_drawer,mini_sidebar:s.mini_sidebar,fontTheme:"Poppins",inputBg:s.inputBg}),getters:{},actions:{SET_SIDEBAR_DRAWER(){this.Sidebar_drawer=!this.Sidebar_drawer},SET_MINI_SIDEBAR(e){this.mini_sidebar=e},SET_FONT(e){this.fontTheme=e}}}),u={class:"logo",style:{display:"flex","align-items":"center"}},b={style:{"font-size":"24px","font-weight":"1000"}},w={style:{"font-size":"20px","font-weight":"1000"}},S={style:{"font-size":"20px"}},z=d({__name:"LogoDark",setup(e){n("rgb(var(--v-theme-primary))"),n("rgb(var(--v-theme-secondary))");const i=l();return(g,B)=>(c(),m("div",u,[f(t(h),{to:"/",style:{"text-decoration":"none",color:"black"}},{default:p(()=>[r(a("span",b,"AstrBot 仪表盘",512),[[o,!t(i).mini_sidebar]]),r(a("span",w,"Astr",512),[[o,t(i).mini_sidebar]]),r(a("span",S,"Bot",512),[[o,t(i).mini_sidebar]])]),_:1})]))}});export{z as _,l as u};


@@ -0,0 +1 @@
import{_ as o}from"./BaseBreadcrumb.vue_vue_type_style_index_0_lang-1875d383.js";import{_ as i}from"./UiParentCard.vue_vue_type_script_setup_true_lang-b40a2daa.js";import{x as n,D as a,o as c,s as m,a as e,w as t,f as d,b as f,V as _,F as u}from"./index-5ac7c267.js";const p=["innerHTML"],v=n({__name:"MaterialIcons",setup(b){const s=a({title:"Material Icons"}),r=a('<iframe src="https://materialdesignicons.com/" frameborder="0" width="100%" height="1000"></iframe>'),l=a([{title:"Icons",disabled:!1,href:"#"},{title:"Material Icons",disabled:!0,href:"#"}]);return(h,M)=>(c(),m(u,null,[e(o,{title:s.value.title,breadcrumbs:l.value},null,8,["title","breadcrumbs"]),e(_,null,{default:t(()=>[e(d,{cols:"12",md:"12"},{default:t(()=>[e(i,{title:"Material Icons"},{default:t(()=>[f("div",{innerHTML:r.value},null,8,p)]),_:1})]),_:1})]),_:1})],64))}});export{v as default};


@@ -0,0 +1 @@
.custom-devider{border-color:#00000014!important}.googleBtn{border-color:#00000014;margin:30px 0 20px}.outlinedInput .v-field{border:1px solid rgba(0,0,0,.08);box-shadow:none}.orbtn{padding:2px 40px;border-color:#00000014;margin:20px 15px}.pwdInput{position:relative}.pwdInput .v-input__append{position:absolute;right:10px;top:50%;transform:translateY(-50%)}.loginBox{max-width:475px;margin:0 auto}


@@ -0,0 +1 @@
import{_ as B}from"./LogoDark.vue_vue_type_script_setup_true_lang-d555e5be.js";import{x as y,D as o,o as b,s as U,a as e,w as a,b as n,B as $,d as u,f as d,A as _,e as f,V as r,O as m,aq as q,av as A,F as E,c as F,N as T,J as V,L as P}from"./index-5ac7c267.js";const z="/assets/social-google-9b2fa67a.svg",N=["src"],S=n("span",{class:"ml-2"},"Sign up with Google",-1),D=n("h5",{class:"text-h5 text-center my-4 mb-8"},"Sign up with Email address",-1),G={class:"d-sm-inline-flex align-center mt-2 mb-7 mb-sm-0 font-weight-bold"},L=n("a",{href:"#",class:"ml-1 text-lightText"},"Terms and Condition",-1),O={class:"mt-5 text-right"},j=y({__name:"AuthRegister",setup(w){const c=o(!1),i=o(!1),p=o(""),v=o(""),g=o(),h=o(""),x=o(""),k=o([s=>!!s||"Password is required",s=>s&&s.length<=10||"Password must be less than 10 characters"]),C=o([s=>!!s||"E-mail is required",s=>/.+@.+\..+/.test(s)||"E-mail must be valid"]);function R(){g.value.validate()}return(s,l)=>(b(),U(E,null,[e(u,{block:"",color:"primary",variant:"outlined",class:"text-lightText googleBtn"},{default:a(()=>[n("img",{src:$(z),alt:"google"},null,8,N),S]),_:1}),e(r,null,{default:a(()=>[e(d,{class:"d-flex align-center"},{default:a(()=>[e(_,{class:"custom-devider"}),e(u,{variant:"outlined",class:"orbtn",rounded:"md",size:"small"},{default:a(()=>[f("OR")]),_:1}),e(_,{class:"custom-devider"})]),_:1})]),_:1}),D,e(A,{ref_key:"Regform",ref:g,"lazy-validation":"",action:"/dashboards/analytical",class:"mt-7 
loginForm"},{default:a(()=>[e(r,null,{default:a(()=>[e(d,{cols:"12",sm:"6"},{default:a(()=>[e(m,{modelValue:h.value,"onUpdate:modelValue":l[0]||(l[0]=t=>h.value=t),density:"comfortable","hide-details":"auto",variant:"outlined",color:"primary",label:"Firstname"},null,8,["modelValue"])]),_:1}),e(d,{cols:"12",sm:"6"},{default:a(()=>[e(m,{modelValue:x.value,"onUpdate:modelValue":l[1]||(l[1]=t=>x.value=t),density:"comfortable","hide-details":"auto",variant:"outlined",color:"primary",label:"Lastname"},null,8,["modelValue"])]),_:1})]),_:1}),e(m,{modelValue:v.value,"onUpdate:modelValue":l[2]||(l[2]=t=>v.value=t),rules:C.value,label:"Email Address / Username",class:"mt-4 mb-4",required:"",density:"comfortable","hide-details":"auto",variant:"outlined",color:"primary"},null,8,["modelValue","rules"]),e(m,{modelValue:p.value,"onUpdate:modelValue":l[3]||(l[3]=t=>p.value=t),rules:k.value,label:"Password",required:"",density:"comfortable",variant:"outlined",color:"primary","hide-details":"auto","append-icon":i.value?"mdi-eye":"mdi-eye-off",type:i.value?"text":"password","onClick:append":l[4]||(l[4]=t=>i.value=!i.value),class:"pwdInput"},null,8,["modelValue","rules","append-icon","type"]),n("div",G,[e(q,{modelValue:c.value,"onUpdate:modelValue":l[5]||(l[5]=t=>c.value=t),rules:[t=>!!t||"You must agree to continue!"],label:"Agree with?",required:"",color:"primary",class:"ms-n2","hide-details":""},null,8,["modelValue","rules"]),L]),e(u,{color:"secondary",block:"",class:"mt-2",variant:"flat",size:"large",onClick:l[6]||(l[6]=t=>R())},{default:a(()=>[f("Sign Up")]),_:1})]),_:1},512),n("div",O,[e(_),e(u,{variant:"plain",to:"/auth/login",class:"mt-2 text-capitalize mr-n2"},{default:a(()=>[f("Already have an account?")]),_:1})])],64))}});const I={class:"pa-7 pa-sm-12"},J=n("h2",{class:"text-secondary text-h2 mt-8"},"Sign up",-1),Y=n("h4",{class:"text-disabled text-h4 mt-3"},"Enter credentials to 
continue",-1),M=y({__name:"RegisterPage",setup(w){return(c,i)=>(b(),F(r,{class:"h-100vh","no-gutters":""},{default:a(()=>[e(d,{cols:"12",class:"d-flex align-center bg-lightprimary"},{default:a(()=>[e(T,null,{default:a(()=>[n("div",I,[e(r,{justify:"center"},{default:a(()=>[e(d,{cols:"12",lg:"10",xl:"6",md:"7"},{default:a(()=>[e(V,{elevation:"0",class:"loginBox"},{default:a(()=>[e(V,{variant:"outlined"},{default:a(()=>[e(P,{class:"pa-9"},{default:a(()=>[e(r,null,{default:a(()=>[e(d,{cols:"12",class:"text-center"},{default:a(()=>[e(B),J,Y]),_:1})]),_:1}),e(j)]),_:1})]),_:1})]),_:1})]),_:1})]),_:1})])]),_:1})]),_:1})]),_:1}))}});export{M as default};


@@ -0,0 +1 @@
import{_ as c}from"./BaseBreadcrumb.vue_vue_type_style_index_0_lang-1875d383.js";import{_ as f}from"./UiParentCard.vue_vue_type_script_setup_true_lang-b40a2daa.js";import{x as m,D as s,o as l,s as r,a as e,w as a,f as i,V as o,F as d,u as _,J as p,X as b,b as h,t as g}from"./index-5ac7c267.js";const v=m({__name:"ShadowPage",setup(w){const n=s({title:"Shadow Page"}),u=s([{title:"Utilities",disabled:!1,href:"#"},{title:"Shadow",disabled:!0,href:"#"}]);return(V,x)=>(l(),r(d,null,[e(c,{title:n.value.title,breadcrumbs:u.value},null,8,["title","breadcrumbs"]),e(o,null,{default:a(()=>[e(i,{cols:"12",md:"12"},{default:a(()=>[e(f,{title:"Basic Shadow"},{default:a(()=>[e(o,{justify:"center"},{default:a(()=>[(l(),r(d,null,_(25,t=>e(i,{key:t,cols:"auto"},{default:a(()=>[e(p,{height:"100",width:"100",class:b(["mb-5",["d-flex justify-center align-center bg-primary",`elevation-${t}`]])},{default:a(()=>[h("div",null,g(t-1),1)]),_:2},1032,["class"])]),_:2},1024)),64))]),_:1})]),_:1})]),_:1})]),_:1})],64))}});export{v as default};


@@ -0,0 +1 @@
import{_ as o}from"./BaseBreadcrumb.vue_vue_type_style_index_0_lang-1875d383.js";import{_ as n}from"./UiParentCard.vue_vue_type_script_setup_true_lang-b40a2daa.js";import{x as c,D as a,o as i,s as m,a as e,w as t,f as d,b as f,V as _,F as u}from"./index-5ac7c267.js";const b=["innerHTML"],w=c({__name:"TablerIcons",setup(p){const s=a({title:"Tabler Icons"}),r=a('<iframe src="https://tablericons.com/" frameborder="0" width="100%" height="600"></iframe>'),l=a([{title:"Icons",disabled:!1,href:"#"},{title:"Tabler Icons",disabled:!0,href:"#"}]);return(h,T)=>(i(),m(u,null,[e(o,{title:s.value.title,breadcrumbs:l.value},null,8,["title","breadcrumbs"]),e(_,null,{default:t(()=>[e(d,{cols:"12",md:"12"},{default:t(()=>[e(n,{title:"Tabler Icons"},{default:t(()=>[f("div",{innerHTML:r.value},null,8,b)]),_:1})]),_:1})]),_:1})],64))}});export{w as default};


@@ -0,0 +1 @@
import{_ as m}from"./BaseBreadcrumb.vue_vue_type_style_index_0_lang-1875d383.js";import{_ as v}from"./UiParentCard.vue_vue_type_script_setup_true_lang-b40a2daa.js";import{x as f,o as i,c as g,w as e,a,a8 as y,K as b,e as w,t as d,A as C,L as V,a9 as L,J as _,D as o,s as h,f as k,b as t,F as x,u as B,X as H,V as T}from"./index-5ac7c267.js";const s=f({__name:"UiChildCard",props:{title:String},setup(r){const l=r;return(n,c)=>(i(),g(_,{variant:"outlined"},{default:e(()=>[a(y,{class:"py-3"},{default:e(()=>[a(b,{class:"text-h5"},{default:e(()=>[w(d(l.title),1)]),_:1})]),_:1}),a(C),a(V,null,{default:e(()=>[L(n.$slots,"default")]),_:3})]),_:3}))}}),D={class:"d-flex flex-column gap-1"},S={class:"text-caption pa-2 bg-lightprimary"},z=t("div",{class:"text-grey"},"Class",-1),N={class:"font-weight-medium"},$=t("div",null,[t("p",{class:"text-left"},"Left aligned on all viewport sizes."),t("p",{class:"text-center"},"Center aligned on all viewport sizes."),t("p",{class:"text-right"},"Right aligned on all viewport sizes."),t("p",{class:"text-sm-left"},"Left aligned on viewports SM (small) or wider."),t("p",{class:"text-right text-md-left"},"Left aligned on viewports MD (medium) or wider."),t("p",{class:"text-right text-lg-left"},"Left aligned on viewports LG (large) or wider."),t("p",{class:"text-right text-xl-left"},"Left aligned on viewports XL (extra-large) or wider.")],-1),M=t("div",{class:"d-flex justify-space-between flex-row"},[t("a",{href:"#",class:"text-decoration-none"},"Non-underlined link"),t("div",{class:"text-decoration-line-through"},"Line-through text"),t("div",{class:"text-decoration-overline"},"Overline text"),t("div",{class:"text-decoration-underline"},"Underline text")],-1),O=t("div",null,[t("p",{class:"text-high-emphasis"},"High-emphasis has an opacity of 87% in light theme and 100% in dark."),t("p",{class:"text-medium-emphasis"},"Medium-emphasis text and hint text have opacities of 60% in light theme and 70% in dark."),t("p",{class:"text-disabled"},"Disabled 
text has an opacity of 38% in light theme and 50% in dark.")],-1),j=f({__name:"TypographyPage",setup(r){const l=o({title:"Typography Page"}),n=o([["Heading 1","text-h1"],["Heading 2","text-h2"],["Heading 3","text-h3"],["Heading 4","text-h4"],["Heading 5","text-h5"],["Heading 6","text-h6"],["Subtitle 1","text-subtitle-1"],["Subtitle 2","text-subtitle-2"],["Body 1","text-body-1"],["Body 2","text-body-2"],["Button","text-button"],["Caption","text-caption"],["Overline","text-overline"]]),c=o([{title:"Utilities",disabled:!1,href:"#"},{title:"Typography",disabled:!0,href:"#"}]);return(U,F)=>(i(),h(x,null,[a(m,{title:l.value.title,breadcrumbs:c.value},null,8,["title","breadcrumbs"]),a(T,null,{default:e(()=>[a(k,{cols:"12",md:"12"},{default:e(()=>[a(v,{title:"Basic Typography"},{default:e(()=>[a(s,{title:"Heading"},{default:e(()=>[t("div",D,[(i(!0),h(x,null,B(n.value,([p,u])=>(i(),g(_,{variant:"outlined",key:p,class:"my-4"},{default:e(()=>[t("div",{class:H([u,"pa-2"])},d(p),3),t("div",S,[z,t("div",N,d(u),1)])]),_:2},1024))),128))])]),_:1}),a(s,{title:"Text-alignment",class:"mt-8"},{default:e(()=>[$]),_:1}),a(s,{title:"Decoration",class:"mt-8"},{default:e(()=>[M]),_:1}),a(s,{title:"Opacity",class:"mt-8"},{default:e(()=>[O]),_:1})]),_:1})]),_:1})]),_:1})],64))}});export{j as default};


@@ -0,0 +1 @@
import{x as n,o,c as i,w as e,a,a8 as d,b as c,K as u,e as p,t as _,a9 as s,A as f,L as V,J as m}from"./index-5ac7c267.js";const C={class:"d-sm-flex align-center justify-space-between"},h=n({__name:"UiParentCard",props:{title:String},setup(l){const r=l;return(t,x)=>(o(),i(m,{variant:"outlined",elevation:"0",class:"withbg"},{default:e(()=>[a(d,null,{default:e(()=>[c("div",C,[a(u,null,{default:e(()=>[p(_(r.title),1)]),_:1}),s(t.$slots,"action")])]),_:3}),a(f),a(V,null,{default:e(()=>[s(t.$slots,"default")]),_:3})]),_:3}))}});export{h as _};


@@ -0,0 +1 @@
const s=(t,r)=>{const o=t.__vccOpts||t;for(const[c,e]of r)o[c]=e;return o};export{s as _};


@@ -0,0 +1,34 @@
<svg width="676" height="391" viewBox="0 0 676 391" fill="none" xmlns="http://www.w3.org/2000/svg">
<g opacity="0.09">
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(0.866041 -0.499972 -0.866041 -0.499972 4.49127 197.53)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(-0.866041 -0.499972 -0.866041 0.499972 342.315 387.578)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(0.866041 -0.499972 -0.866041 -0.499972 28.0057 211.105)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(-0.866041 -0.499972 -0.866041 0.499972 365.829 374.002)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(0.866041 -0.499972 -0.866041 -0.499972 51.52 224.68)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(-0.866041 -0.499972 -0.866041 0.499972 389.344 360.428)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(0.866041 -0.499972 -0.866041 -0.499972 75.0345 238.255)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(-0.866041 -0.499972 -0.866041 0.499972 412.858 346.852)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(0.866041 -0.499972 -0.866041 -0.499972 98.5488 251.83)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(-0.866041 -0.499972 -0.866041 0.499972 436.372 333.277)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(0.866041 -0.499972 -0.866041 -0.499972 122.063 265.405)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(-0.866041 -0.499972 -0.866041 0.499972 459.887 319.703)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(0.866041 -0.499972 -0.866041 -0.499972 145.578 278.979)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(-0.866041 -0.499972 -0.866041 0.499972 483.401 306.127)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(0.866041 -0.499972 -0.866041 -0.499972 169.092 292.556)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(-0.866041 -0.499972 -0.866041 0.499972 506.916 292.551)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(0.866041 -0.499972 -0.866041 -0.499972 192.597 306.127)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(-0.866041 -0.499972 -0.866041 0.499972 530.43 278.977)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(0.866041 -0.499972 -0.866041 -0.499972 216.111 319.703)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(-0.866041 -0.499972 -0.866041 0.499972 553.944 265.402)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(0.866041 -0.499972 -0.866041 -0.499972 239.626 333.277)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(-0.866041 -0.499972 -0.866041 0.499972 577.459 251.827)" stroke="black"/>
<path d="M263.231 346.905L601.064 151.871" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(-0.866041 -0.499972 -0.866041 0.499972 600.973 238.252)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(0.866041 -0.499972 -0.866041 -0.499972 286.654 360.428)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(-0.866041 -0.499972 -0.866041 0.499972 624.487 224.677)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(0.866041 -0.499972 -0.866041 -0.499972 310.169 374.002)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(-0.866041 -0.499972 -0.866041 0.499972 648.002 211.102)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(0.866041 -0.499972 -0.866041 -0.499972 333.683 387.578)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(-0.866041 -0.499972 -0.866041 0.499972 671.516 197.527)" stroke="black"/>
</g>
</svg>


Width:  |  Height:  |  Size: 3.9 KiB


@@ -0,0 +1,43 @@
<svg width="676" height="395" viewBox="0 0 676 395" fill="none" xmlns="http://www.w3.org/2000/svg">
<rect width="26.998" height="26.8293" transform="matrix(0.866041 -0.499972 0.866041 0.499972 361.873 290.126)" fill="#E3F2FD"/>
<rect width="24.2748" height="24.1231" transform="matrix(0.866041 -0.499972 0.866041 0.499972 364.249 291.115)" fill="#90CAF9"/>
<rect width="26.998" height="26.8293" transform="matrix(0.866041 -0.499972 0.866041 0.499972 291.67 86.4912)" fill="#E3F2FD"/>
<rect width="24.2748" height="24.1231" transform="matrix(0.866041 -0.499972 0.866041 0.499972 294.046 87.48)" fill="#90CAF9"/>
<g filter="url(#filter0_d)">
<path d="M370.694 211.828L365.394 208.768V215.835L365.404 215.829C365.459 216.281 365.785 216.724 366.383 217.069L417.03 246.308C418.347 247.068 420.481 247.068 421.798 246.308L468.671 219.248C469.374 218.842 469.702 218.301 469.654 217.77V210.861L464.282 213.962L418.024 187.257C416.708 186.497 414.573 186.497 413.257 187.257L370.694 211.828Z" fill="url(#paint0_linear)"/>
</g>
<rect width="59.6284" height="63.9858" rx="5" transform="matrix(0.866041 -0.499972 0.866041 0.499972 364 208.812)" fill="#90CAF9"/>
<rect width="59.6284" height="63.9858" rx="5" transform="matrix(0.866041 -0.499972 0.866041 0.499972 364 208.812)" fill="url(#paint1_linear)"/>
<rect width="56.6816" height="60.8238" rx="5" transform="matrix(0.866041 -0.499972 0.866041 0.499972 366.645 208.761)" fill="url(#paint2_linear)"/>
<path d="M421.238 206.161C421.238 206.434 421.62 206.655 422.092 206.655L432.159 206.656C435.164 206.656 437.6 208.063 437.601 209.798C437.602 211.533 435.166 212.939 432.162 212.938L422.09 212.937C421.62 212.937 421.24 213.157 421.24 213.428L421.241 215.814C421.241 216.087 421.624 216.308 422.096 216.308L432.689 216.309C438.917 216.31 443.967 213.395 443.965 209.799C443.964 206.202 438.914 203.286 432.684 203.286L422.086 203.284C421.617 203.284 421.236 203.504 421.237 203.775L421.238 206.161Z" fill="#1E88E5"/>
<path d="M413.422 213.43C413.422 213.157 413.039 212.936 412.567 212.936L402.896 212.935C399.891 212.935 397.455 211.528 397.454 209.793C397.453 208.059 399.889 206.652 402.894 206.653L412.57 206.654C413.039 206.654 413.419 206.435 413.419 206.164L413.418 203.777C413.418 203.504 413.035 203.283 412.563 203.283L402.366 203.282C396.138 203.281 391.089 206.197 391.09 209.793C391.091 213.389 396.141 216.305 402.371 216.306L412.573 216.307C413.042 216.307 413.423 216.088 413.423 215.817L413.422 213.43Z" fill="#1E88E5"/>
<path d="M407.999 198.145L411.211 201.235C411.266 201.288 411.332 201.336 411.405 201.379C411.813 201.614 412.461 201.669 412.979 201.49C413.59 201.278 413.787 200.821 413.421 200.469L410.209 197.379C409.843 197.027 409.051 196.913 408.441 197.124C407.831 197.335 407.633 197.793 407.999 198.145Z" fill="#1E88E5"/>
<path d="M416.235 200.853C416.235 201.058 416.38 201.244 416.613 201.379C416.846 201.513 417.168 201.597 417.524 201.597C418.236 201.596 418.813 201.263 418.813 200.852L418.812 197.021C418.811 196.61 418.234 196.277 417.522 196.277C416.811 196.278 416.234 196.611 416.234 197.022L416.235 200.853Z" fill="#1E88E5"/>
<path d="M421.627 200.47C421.317 200.769 421.412 201.143 421.82 201.379C421.893 201.421 421.977 201.459 422.069 201.491C422.68 201.703 423.472 201.588 423.838 201.236L427.047 198.147C427.413 197.794 427.215 197.337 426.605 197.126C425.994 196.915 425.203 197.029 424.836 197.381L421.627 200.47Z" fill="#1E88E5"/>
<path d="M427.056 221.447L423.844 218.357C423.478 218.005 422.686 217.891 422.076 218.102C421.466 218.314 421.268 218.771 421.634 219.123L424.846 222.213C424.901 222.266 424.967 222.314 425.04 222.357C425.448 222.592 426.097 222.647 426.614 222.468C427.225 222.257 427.423 221.799 427.056 221.447Z" fill="#1E88E5"/>
<path d="M418.82 218.739C418.82 218.328 418.243 217.995 417.531 217.995C416.819 217.995 416.242 218.329 416.242 218.74L416.243 222.57C416.244 222.776 416.388 222.962 416.621 223.096C416.854 223.231 417.177 223.314 417.533 223.314C418.245 223.314 418.822 222.981 418.821 222.57L418.82 218.739Z" fill="#1E88E5"/>
<path d="M413.428 219.122C413.794 218.77 413.596 218.312 412.986 218.101C412.375 217.89 411.584 218.004 411.217 218.356L408.008 221.445C407.698 221.744 407.793 222.118 408.201 222.354C408.274 222.396 408.358 222.434 408.45 222.466C409.061 222.678 409.853 222.563 410.219 222.211L413.428 219.122Z" fill="#1E88E5"/>
<defs>
<filter id="filter0_d" x="301.394" y="186.687" width="232.264" height="208.191" filterUnits="userSpaceOnUse" color-interpolation-filters="sRGB">
<feFlood flood-opacity="0" result="BackgroundImageFix"/>
<feColorMatrix in="SourceAlpha" type="matrix" values="0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 127 0"/>
<feOffset dy="84"/>
<feGaussianBlur stdDeviation="32"/>
<feColorMatrix type="matrix" values="0 0 0 0 0.129412 0 0 0 0 0.588235 0 0 0 0 0.952941 0 0 0 0.2 0"/>
<feBlend mode="normal" in2="BackgroundImageFix" result="effect1_dropShadow"/>
<feBlend mode="normal" in="SourceGraphic" in2="effect1_dropShadow" result="shape"/>
</filter>
<linearGradient id="paint0_linear" x1="417.526" y1="205.789" x2="365.394" y2="216.782" gradientUnits="userSpaceOnUse">
<stop stop-color="#2196F3"/>
<stop offset="1" stop-color="#B1DCFF"/>
</linearGradient>
<linearGradient id="paint1_linear" x1="0.503035" y1="2.68177" x2="20.3032" y2="42.2842" gradientUnits="userSpaceOnUse">
<stop stop-color="#FAFAFA" stop-opacity="0.74"/>
<stop offset="1" stop-color="#91CBFA"/>
</linearGradient>
<linearGradient id="paint2_linear" x1="-18.5494" y1="-44.8799" x2="14.7845" y2="40.5766" gradientUnits="userSpaceOnUse">
<stop stop-color="#FAFAFA" stop-opacity="0.74"/>
<stop offset="1" stop-color="#91CBFA"/>
</linearGradient>
</defs>
</svg>


Width:  |  Height:  |  Size: 5.5 KiB


@@ -0,0 +1,42 @@
<svg width="710" height="391" viewBox="0 0 710 391" fill="none" xmlns="http://www.w3.org/2000/svg">
<rect width="26.9258" height="26.7576" transform="matrix(0.866041 -0.499972 0.866041 0.499972 161.088 154.333)" fill="#EDE7F6"/>
<rect width="24.9267" height="24.7709" transform="matrix(0.866041 -0.499972 0.866041 0.499972 162.809 155.327)" fill="#B39DDB"/>
<rect width="26.9258" height="26.7576" transform="matrix(0.866041 -0.499972 0.866041 0.499972 536.744 181.299)" fill="#EDE7F6"/>
<rect width="24.9267" height="24.7709" transform="matrix(0.866041 -0.499972 0.866041 0.499972 538.465 182.292)" fill="#B39DDB"/>
<g filter="url(#filter0_d)">
<path d="M67.7237 137.573V134.673H64.009V140.824L64.0177 140.829C64.0367 141.477 64.4743 142.121 65.3305 142.615L103.641 164.733C105.393 165.744 108.232 165.744 109.983 164.733L204.044 110.431C204.879 109.949 205.316 109.324 205.355 108.693L205.355 108.692V108.68C205.358 108.628 205.358 108.576 205.355 108.523L205.362 102.335L200.065 104.472L165.733 84.6523C163.982 83.6413 161.142 83.6413 159.391 84.6523L67.7237 137.573Z" fill="url(#paint0_linear)"/>
</g>
<rect width="115.933" height="51.5596" rx="5" transform="matrix(0.866041 -0.499972 0.866041 0.499972 62.1588 134.683)" fill="#673AB7"/>
<rect width="115.933" height="51.5596" rx="5" transform="matrix(0.866041 -0.499972 0.866041 0.499972 62.1588 134.683)" fill="url(#paint1_linear)" fill-opacity="0.3"/>
<mask id="mask0" mask-type="alpha" maskUnits="userSpaceOnUse" x="64" y="78" width="141" height="81">
<rect width="115.933" height="51.5596" rx="5" transform="matrix(0.866041 -0.499972 0.866041 0.499972 62.1588 134.683)" fill="#673AB7"/>
</mask>
<g mask="url(#mask0)">
</g>
<mask id="mask1" mask-type="alpha" maskUnits="userSpaceOnUse" x="64" y="78" width="141" height="81">
<rect width="115.933" height="51.5596" rx="5" transform="matrix(0.866041 -0.499972 0.866041 0.499972 62.1588 134.683)" fill="#673AB7"/>
</mask>
<g mask="url(#mask1)">
<rect width="64.3732" height="64.3732" rx="5" transform="matrix(0.866041 -0.499972 0.866041 0.499972 111.303 81.6006)" fill="#5E35B1"/>
<rect opacity="0.7" x="0.866041" width="63.3732" height="63.3732" rx="4.5" transform="matrix(0.866041 -0.499972 0.866041 0.499972 79.1848 87.8305)" stroke="#5E35B1"/>
</g>
<defs>
<filter id="filter0_d" x="0.0090332" y="83.894" width="269.353" height="229.597" filterUnits="userSpaceOnUse" color-interpolation-filters="sRGB">
<feFlood flood-opacity="0" result="BackgroundImageFix"/>
<feColorMatrix in="SourceAlpha" type="matrix" values="0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 127 0"/>
<feOffset dy="84"/>
<feGaussianBlur stdDeviation="32"/>
<feColorMatrix type="matrix" values="0 0 0 0 0.403922 0 0 0 0 0.227451 0 0 0 0 0.717647 0 0 0 0.2 0"/>
<feBlend mode="normal" in2="BackgroundImageFix" result="effect1_dropShadow"/>
<feBlend mode="normal" in="SourceGraphic" in2="effect1_dropShadow" result="shape"/>
</filter>
<linearGradient id="paint0_linear" x1="200.346" y1="102.359" x2="71.0293" y2="158.071" gradientUnits="userSpaceOnUse">
<stop stop-color="#A491C8"/>
<stop offset="1" stop-color="#D7C5F8"/>
</linearGradient>
<linearGradient id="paint1_linear" x1="8.1531" y1="-0.145767" x2="57.1962" y2="72.3003" gradientUnits="userSpaceOnUse">
<stop stop-color="white"/>
<stop offset="1" stop-color="white" stop-opacity="0"/>
</linearGradient>
</defs>
</svg>


Width:  |  Height:  |  Size: 3.3 KiB


@@ -0,0 +1,27 @@
<svg width="676" height="391" viewBox="0 0 676 391" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M267.744 237.142L279.699 230.24L300.636 242.329L288.682 249.231L313.566 263.598L286.344 279.314L261.46 264.947L215.984 291.203L197.779 282.558L169.334 211.758L169.092 211.618L196.313 195.902L267.744 237.142ZM219.359 265.077L240.523 252.859L204.445 232.029L205.487 234.589L219.359 265.077Z" fill="#FFAB91"/>
<path d="M469.959 120.206L481.913 113.304L502.851 125.392L490.897 132.294L515.78 146.661L488.559 162.377L463.675 148.011L418.199 174.266L399.994 165.621L371.548 94.8211L371.307 94.6816L398.528 78.9654L469.959 120.206ZM421.574 148.141L442.737 135.922L406.66 115.093L407.701 117.653L421.574 148.141Z" fill="#FFAB91"/>
<path d="M204.523 235.027V232.237L219.401 265.014L240.555 252.926V255.018L218.936 267.339L204.523 235.027Z" fill="#D84315"/>
<path d="M406.738 118.09V115.301L421.616 148.078L442.77 135.99V138.082L421.151 150.402L406.738 118.09Z" fill="#D84315"/>
<rect width="109.114" height="136.405" transform="matrix(0.866025 -0.5 0.866025 0.5 220.507 181.925)" fill="url(#paint0_linear)"/>
<rect width="40.2357" height="70.0545" transform="matrix(0.866025 -0.5 0.866025 0.5 280.437 201.886)" fill="url(#paint1_linear)"/>
<rect x="25.1147" width="80.1144" height="107.405" transform="matrix(0.866025 -0.5 0.866025 0.5 223.872 194.482)" stroke="#1565C0" stroke-width="29"/>
<rect x="25.1147" width="80.1144" height="107.405" transform="matrix(0.866025 -0.5 0.866025 0.5 223.872 194.482)" stroke="url(#paint2_linear)" stroke-width="29"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M279.517 230.177L267.662 237.15L196.064 195.772L168.866 211.58L169.331 212.097L170.096 214.002L196.436 198.795L267.866 240.035L279.821 233.133L298.211 243.751L300.787 242.265L279.517 230.177ZM291.278 250.695L288.804 252.124L311.1 264.996L313.805 263.418L291.278 250.695Z" fill="#D84315"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M481.732 113.24L469.877 120.214L398.279 78.8359L371.081 94.6433L371.546 95.1603L372.311 97.0652L398.651 81.8581L470.081 123.099L482.036 116.196L500.426 126.814L503.002 125.328L481.732 113.24ZM493.493 133.759L491.019 135.187L513.315 148.06L516.02 146.482L493.493 133.759Z" fill="#D84315"/>
<path d="M288.674 252.229V249.207L291.929 251.067L288.674 252.229Z" fill="#D84315"/>
<defs>
<linearGradient id="paint0_linear" x1="77.7511" y1="139.902" x2="-10.8629" y2="8.75671" gradientUnits="userSpaceOnUse">
<stop stop-color="#3076C8"/>
<stop offset="0.992076" stop-color="#91CBFA"/>
</linearGradient>
<linearGradient id="paint1_linear" x1="25.8162" y1="51.0447" x2="68.7073" y2="-5.41524" gradientUnits="userSpaceOnUse">
<stop stop-color="#2E75C7"/>
<stop offset="1" stop-color="#4283CC"/>
</linearGradient>
<linearGradient id="paint2_linear" x1="-16.1224" y1="-47.972" x2="123.494" y2="290.853" gradientUnits="userSpaceOnUse">
<stop stop-color="white"/>
<stop offset="1" stop-color="white" stop-opacity="0"/>
</linearGradient>
</defs>
</svg>


Width:  |  Height:  |  Size: 2.9 KiB

File diff suppressed because one or more lines are too long

720
dashboard/dist/assets/index-5ac7c267.js vendored Normal file

File diff suppressed because one or more lines are too long

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

9
dashboard/dist/assets/md5-086248bf.js vendored Normal file

File diff suppressed because one or more lines are too long


@@ -0,0 +1,6 @@
<svg width="22" height="22" viewBox="0 0 22 22" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M5.06129 13.2253L4.31871 15.9975L1.60458 16.0549C0.793457 14.5504 0.333374 12.8292 0.333374 11C0.333374 9.23119 0.763541 7.56319 1.52604 6.09448H1.52662L3.94296 6.53748L5.00146 8.93932C4.77992 9.58519 4.65917 10.2785 4.65917 11C4.65925 11.783 4.80108 12.5332 5.06129 13.2253Z" fill="#FBBB00"/>
<path d="M21.4804 9.00732C21.6029 9.65257 21.6668 10.3189 21.6668 11C21.6668 11.7637 21.5865 12.5086 21.4335 13.2271C20.9143 15.6722 19.5575 17.8073 17.678 19.3182L17.6774 19.3177L14.6339 19.1624L14.2031 16.4734C15.4503 15.742 16.425 14.5974 16.9384 13.2271H11.2346V9.00732H17.0216H21.4804Z" fill="#518EF8"/>
<path d="M17.6772 19.3176L17.6777 19.3182C15.8498 20.7875 13.5277 21.6666 11 21.6666C6.93783 21.6666 3.40612 19.3962 1.60449 16.0549L5.0612 13.2253C5.96199 15.6294 8.28112 17.3408 11 17.3408C12.1686 17.3408 13.2634 17.0249 14.2029 16.4734L17.6772 19.3176Z" fill="#28B446"/>
<path d="M17.8085 2.78892L14.353 5.61792C13.3807 5.01017 12.2313 4.65908 11 4.65908C8.21963 4.65908 5.85713 6.44896 5.00146 8.93925L1.52658 6.09442H1.526C3.30125 2.67171 6.8775 0.333252 11 0.333252C13.5881 0.333252 15.9612 1.25517 17.8085 2.78892Z" fill="#F14336"/>
</svg>


Width:  |  Height:  |  Size: 1.2 KiB

1
dashboard/dist/favicon.svg vendored Normal file

@@ -0,0 +1 @@
<svg t="1702013028016" class="icon" viewBox="0 0 1024 1024" version="1.1" xmlns="http://www.w3.org/2000/svg" p-id="1541" width="200" height="200"><path d="M0 0m204.8 0l614.4 0q204.8 0 204.8 204.8l0 614.4q0 204.8-204.8 204.8l-614.4 0q-204.8 0-204.8-204.8l0-614.4q0-204.8 204.8-204.8Z" fill="#FFEC9C" p-id="1542"></path><path d="M819.2 0H534.272A756.48 756.48 0 0 0 0 483.584V819.2a204.8 204.8 0 0 0 204.8 204.8h614.4a204.8 204.8 0 0 0 204.8-204.8V204.8a204.8 204.8 0 0 0-204.8-204.8z" fill="#FFE98A" p-id="1543"></path><path d="M819.2 0h-3.84a755.2 755.2 0 0 0-539.392 1024H819.2a204.8 204.8 0 0 0 204.8-204.8V204.8a204.8 204.8 0 0 0-204.8-204.8z" fill="#FFE471" p-id="1544"></path><path d="M497.152 721.152A752.384 752.384 0 0 0 560.384 1024H819.2a204.8 204.8 0 0 0 204.8-204.8V204.8a204.8 204.8 0 0 0-89.088-168.96 755.2 755.2 0 0 0-437.76 685.312z" fill="#FFE161" p-id="1545"></path><path d="M526.08 140.032l98.304 199.168L844.8 371.2a15.616 15.616 0 0 1 8.704 25.6l-159.744 156.16 37.632 219.136a15.616 15.616 0 0 1-22.528 16.384l-196.608-102.4-196.608 102.4a15.616 15.616 0 0 1-22.528-16.384l37.12-219.136-159.232-155.136a15.616 15.616 0 0 1 8.704-25.6l219.904-32 98.304-199.168a15.616 15.616 0 0 1 28.16-1.024z" fill="#FFF5CC" p-id="1546"></path><path d="M665.6 409.6a444.16 444.16 0 0 0 25.6-61.44l-65.536-9.472-99.584-198.656a15.616 15.616 0 0 0-27.904 0l-98.304 199.168L179.2 371.2a15.616 15.616 0 0 0-8.704 25.6l159.744 156.16-15.104 87.04A407.808 407.808 0 0 0 665.6 409.6z" fill="#FFFFFF" p-id="1547"></path></svg>


21
dashboard/dist/index.html vendored Normal file

@@ -0,0 +1,21 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<link rel="icon" href="/favicon.svg" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<meta name="keywords" content="AstrBot Soulter" />
<meta name="description" content="AstrBot Dashboard" />
<link
rel="stylesheet"
href="https://fonts.googleapis.com/css2?family=Inter:wght@400;500;600;700&family=Poppins:wght@400;500;600;700&family=Roboto:wght@400;500;700&display=swap"
/>
<title>AstrBot - 仪表盘</title>
<script type="module" crossorigin src="/assets/index-5ac7c267.js"></script>
<link rel="stylesheet" href="/assets/index-0f1523f3.css">
</head>
<body>
<div id="app"></div>
</body>
</html>

537
dashboard/helper.py Normal file

@@ -0,0 +1,537 @@
import threading
import asyncio
from . import DashBoardData
from typing import Union, Optional
from util.cmd_config import CmdConfig
from dataclasses import dataclass
from util.plugin_dev.api.v1.config import update_config
from SparkleLogging.utils.core import LogManager
from logging import Logger
from type.types import Context
logger: Logger = LogManager.GetLogger(log_name='astrbot')
@dataclass
class DashBoardConfig():
config_type: str
name: Optional[str] = None
description: Optional[str] = None
path: Optional[str] = None # 仅 item 才需要
body: Optional[list['DashBoardConfig']] = None # 仅 group 才需要
value: Optional[Union[list, dict, str, int, bool]] = None # 仅 item 才需要
val_type: Optional[str] = None # 仅 item 才需要
class DashBoardHelper():
def __init__(self, context: Context, dashboard_data: DashBoardData):
dashboard_data.configs = {
"data": []
}
self.context = context
self.parse_default_config(dashboard_data, context.base_config)
# Parse the configuration in config.yaml into dashboard_data.configs
def parse_default_config(self, dashboard_data: DashBoardData, config: dict):
try:
qq_official_platform_group = DashBoardConfig(
config_type="group",
name="QQ(官方)",
description="",
body=[
DashBoardConfig(
config_type="item",
val_type="bool",
name="启用 QQ_OFFICIAL 平台",
description="官方的接口,仅支持 QQ 频道。详见 q.qq.com",
value=config['qqbot']['enable'],
path="qqbot.enable",
),
DashBoardConfig(
config_type="item",
val_type="str",
name="QQ机器人APPID",
description="详见 q.qq.com",
value=config['qqbot']['appid'],
path="qqbot.appid",
),
DashBoardConfig(
config_type="item",
val_type="str",
name="QQ机器人令牌",
description="详见 q.qq.com",
value=config['qqbot']['token'],
path="qqbot.token",
),
DashBoardConfig(
config_type="item",
val_type="str",
name="QQ机器人 Secret",
description="详见 q.qq.com",
value=config['qqbot_secret'],
path="qqbot_secret",
),
DashBoardConfig(
config_type="item",
val_type="bool",
name="是否允许 QQ 频道私聊",
description="如果启用,机器人会响应私聊消息。",
value=config['direct_message_mode'],
path="direct_message_mode",
),
DashBoardConfig(
config_type="item",
val_type="bool",
name="是否接收QQ群消息",
description="需要机器人有相应的群消息接收权限。在 q.qq.com 上查看。",
value=config['qqofficial_enable_group_message'],
path="qqofficial_enable_group_message",
),
]
)
qq_gocq_platform_group = DashBoardConfig(
config_type="group",
name="QQ(nakuru)",
description="",
body=[
DashBoardConfig(
config_type="item",
val_type="bool",
name="启用",
description="",
value=config['gocqbot']['enable'],
path="gocqbot.enable",
),
DashBoardConfig(
config_type="item",
val_type="str",
name="HTTP 服务器地址",
description="",
value=config['gocq_host'],
path="gocq_host",
),
DashBoardConfig(
config_type="item",
val_type="int",
name="HTTP 服务器端口",
description="",
value=config['gocq_http_port'],
path="gocq_http_port",
),
DashBoardConfig(
config_type="item",
val_type="int",
name="WebSocket 服务器端口",
description="目前仅支持正向 WebSocket",
value=config['gocq_websocket_port'],
path="gocq_websocket_port",
),
DashBoardConfig(
config_type="item",
val_type="bool",
name="是否响应群消息",
description="",
value=config['gocq_react_group'],
path="gocq_react_group",
),
DashBoardConfig(
config_type="item",
val_type="bool",
name="是否响应私聊消息",
description="",
value=config['gocq_react_friend'],
path="gocq_react_friend",
),
DashBoardConfig(
config_type="item",
val_type="bool",
name="是否响应群成员增加消息",
description="",
value=config['gocq_react_group_increase'],
path="gocq_react_group_increase",
),
DashBoardConfig(
config_type="item",
val_type="bool",
name="是否响应频道消息",
description="",
value=config['gocq_react_guild'],
path="gocq_react_guild",
),
DashBoardConfig(
config_type="item",
val_type="int",
name="转发阈值(字符数)",
description="机器人回复的消息长度超出这个值后,会被折叠成转发卡片发出以减少刷屏。",
value=config['qq_forward_threshold'],
path="qq_forward_threshold",
),
]
)
qq_aiocqhttp_platform_group = DashBoardConfig(
config_type="group",
name="QQ(aiocqhttp)",
description="",
body=[
DashBoardConfig(
config_type="item",
val_type="bool",
name="启用",
description="",
value=config['aiocqhttp']['enable'],
path="aiocqhttp.enable",
),
DashBoardConfig(
config_type="item",
val_type="str",
name="WebSocket 反向连接 host",
description="",
value=config['aiocqhttp']['ws_reverse_host'],
path="aiocqhttp.ws_reverse_host",
),
DashBoardConfig(
config_type="item",
val_type="int",
name="WebSocket 反向连接 port",
description="",
value=config['aiocqhttp']['ws_reverse_port'],
path="aiocqhttp.ws_reverse_port",
),
]
)
general_platform_detail_group = DashBoardConfig(
config_type="group",
name="通用平台配置",
description="",
body=[
DashBoardConfig(
config_type="item",
val_type="bool",
name="启动消息文字转图片",
description="启动后,机器人会将消息转换为图片发送,以降低风控风险。",
value=config['qq_pic_mode'],
path="qq_pic_mode",
),
DashBoardConfig(
config_type="item",
val_type="int",
name="消息限制时间",
description="在此时间内,机器人不会回复同一个用户的消息。单位:秒",
value=config['limit']['time'],
path="limit.time",
),
DashBoardConfig(
config_type="item",
val_type="int",
name="消息限制次数",
description="在上面的时间内,如果用户发送消息超过此次数,则机器人不会回复。单位:次",
value=config['limit']['count'],
path="limit.count",
),
DashBoardConfig(
config_type="item",
val_type="str",
name="回复前缀",
description="[xxxx] 你好! 其中xxxx是你可以填写的前缀。如果为空则不显示。",
value=config['reply_prefix'],
path="reply_prefix",
),
DashBoardConfig(
config_type="item",
val_type="list",
name="通用管理员用户 ID",
description="支持多个管理员。通过 !myid 指令获取。",
value=config['other_admins'],
path="other_admins",
),
DashBoardConfig(
config_type="item",
val_type="bool",
name="独立会话",
description="是否启用独立会话模式,即 1 个用户自然账号 1 个会话。",
value=config['uniqueSessionMode'],
path="uniqueSessionMode",
),
DashBoardConfig(
config_type="item",
val_type="str",
name="LLM 唤醒词",
description="如果不为空, 那么只有当消息以此词开头时,才会调用大语言模型进行回复。如设置为 /chat,那么只有当消息以 /chat 开头时,才会调用大语言模型进行回复。",
value=config['llm_wake_prefix'],
path="llm_wake_prefix",
)
]
)
openai_official_llm_group = DashBoardConfig(
config_type="group",
name="OpenAI 官方接口类设置",
description="",
body=[
DashBoardConfig(
config_type="item",
val_type="list",
name="OpenAI API Key",
description="OpenAI API 的 Key。支持使用非官方但兼容的 API(第三方中转 key)",
value=config['openai']['key'],
path="openai.key",
),
DashBoardConfig(
config_type="item",
val_type="str",
name="OpenAI API 节点地址(api base)",
description="OpenAI API 的节点地址,配合非官方 API 使用。如果不想填写,那么请填写 none",
value=config['openai']['api_base'],
path="openai.api_base",
),
DashBoardConfig(
config_type="item",
val_type="str",
name="OpenAI model",
description="OpenAI LLM 模型。详见 https://platform.openai.com/docs/api-reference/chat",
value=config['openai']['chatGPTConfigs']['model'],
path="openai.chatGPTConfigs.model",
),
DashBoardConfig(
config_type="item",
val_type="int",
name="OpenAI max_tokens",
description="OpenAI 最大生成长度。详见 https://platform.openai.com/docs/api-reference/chat",
value=config['openai']['chatGPTConfigs']['max_tokens'],
path="openai.chatGPTConfigs.max_tokens",
),
DashBoardConfig(
config_type="item",
val_type="float",
name="OpenAI temperature",
description="OpenAI 温度。详见 https://platform.openai.com/docs/api-reference/chat",
value=config['openai']['chatGPTConfigs']['temperature'],
path="openai.chatGPTConfigs.temperature",
),
DashBoardConfig(
config_type="item",
val_type="float",
name="OpenAI top_p",
description="OpenAI top_p。详见 https://platform.openai.com/docs/api-reference/chat",
value=config['openai']['chatGPTConfigs']['top_p'],
path="openai.chatGPTConfigs.top_p",
),
DashBoardConfig(
config_type="item",
val_type="float",
name="OpenAI frequency_penalty",
description="OpenAI frequency_penalty。详见 https://platform.openai.com/docs/api-reference/chat",
value=config['openai']['chatGPTConfigs']['frequency_penalty'],
path="openai.chatGPTConfigs.frequency_penalty",
),
DashBoardConfig(
config_type="item",
val_type="float",
name="OpenAI presence_penalty",
description="OpenAI presence_penalty。详见 https://platform.openai.com/docs/api-reference/chat",
value=config['openai']['chatGPTConfigs']['presence_penalty'],
path="openai.chatGPTConfigs.presence_penalty",
),
DashBoardConfig(
config_type="item",
val_type="int",
name="OpenAI 总生成长度限制",
description="OpenAI 总生成长度限制。详见 https://platform.openai.com/docs/api-reference/chat",
value=config['openai']['total_tokens_limit'],
path="openai.total_tokens_limit",
),
DashBoardConfig(
config_type="item",
val_type="str",
name="OpenAI 图像生成模型",
description="OpenAI 图像生成模型。",
value=config['openai_image_generate']['model'],
path="openai_image_generate.model",
),
DashBoardConfig(
config_type="item",
val_type="str",
name="OpenAI 图像生成大小",
description="OpenAI 图像生成大小。",
value=config['openai_image_generate']['size'],
path="openai_image_generate.size",
),
DashBoardConfig(
config_type="item",
val_type="str",
name="OpenAI 图像生成风格",
description="OpenAI 图像生成风格。修改前请参考 OpenAI 官方文档",
value=config['openai_image_generate']['style'],
path="openai_image_generate.style",
),
DashBoardConfig(
config_type="item",
val_type="str",
name="OpenAI 图像生成质量",
description="OpenAI 图像生成质量。修改前请参考 OpenAI 官方文档",
value=config['openai_image_generate']['quality'],
path="openai_image_generate.quality",
),
DashBoardConfig(
config_type="item",
val_type="str",
name="问题题首提示词",
description="如果填写了此项,在每个对大语言模型的请求中,都会在问题前加上此提示词。",
value=config['llm_env_prompt'],
path="llm_env_prompt",
),
DashBoardConfig(
config_type="item",
val_type="str",
name="默认人格文本",
description="默认人格文本",
value=config['default_personality_str'],
path="default_personality_str",
),
]
)
baidu_aip_group = DashBoardConfig(
config_type="group",
name="百度内容审核",
description="需要去申请",
body=[
DashBoardConfig(
config_type="item",
val_type="bool",
name="启动百度内容审核服务",
description="",
value=config['baidu_aip']['enable'],
path="baidu_aip.enable"
),
DashBoardConfig(
config_type="item",
val_type="str",
name="APP ID",
description="",
value=config['baidu_aip']['app_id'],
path="baidu_aip.app_id"
),
DashBoardConfig(
config_type="item",
val_type="str",
name="API KEY",
description="",
value=config['baidu_aip']['api_key'],
path="baidu_aip.api_key"
),
DashBoardConfig(
config_type="item",
val_type="str",
name="SECRET KEY",
description="",
value=config['baidu_aip']['secret_key'],
path="baidu_aip.secret_key"
)
]
)
other_group = DashBoardConfig(
config_type="group",
name="其他配置",
description="其他配置描述",
body=[
DashBoardConfig(
config_type="item",
val_type="str",
name="HTTP 代理地址",
description="建议上下一致",
value=config['http_proxy'],
path="http_proxy",
),
DashBoardConfig(
config_type="item",
val_type="str",
name="HTTPS 代理地址",
description="建议上下一致",
value=config['https_proxy'],
path="https_proxy",
),
DashBoardConfig(
config_type="item",
val_type="str",
name="面板用户名",
description="是的,就是你理解的这个面板的用户名",
value=config['dashboard_username'],
path="dashboard_username",
),
]
)
dashboard_data.configs['data'] = [
qq_official_platform_group,
qq_gocq_platform_group,
general_platform_detail_group,
openai_official_llm_group,
other_group,
baidu_aip_group,
qq_aiocqhttp_platform_group
]
except Exception as e:
logger.error(f"配置文件解析错误:{e}")
raise e
def save_config(self, post_config: list, namespace: str):
'''
Parse and save configuration entries according to their dot-separated path.
'''
queue = post_config
while len(queue) > 0:
config = queue.pop(0)
if config['config_type'] == "group":
for item in config['body']:
queue.append(item)
elif config['config_type'] == "item":
if config['path'] is None or config['path'] == "":
continue
path = config['path'].split('.')
if len(path) == 0:
continue
if config['val_type'] in ("bool", "str"):
self._write_config(
namespace, config['path'], config['value'])
elif config['val_type'] == "int":
try:
self._write_config(
namespace, config['path'], int(config['value']))
except (TypeError, ValueError):
raise ValueError(f"配置项 {config['name']} 的值必须是整数")
elif config['val_type'] == "float":
try:
self._write_config(
namespace, config['path'], float(config['value']))
except (TypeError, ValueError):
raise ValueError(f"配置项 {config['name']} 的值必须是浮点数")
elif config['val_type'] == "list":
if config['value'] is None:
self._write_config(namespace, config['path'], [])
elif not isinstance(config['value'], list):
raise ValueError(f"配置项 {config['name']} 的值必须是列表")
self._write_config(
namespace, config['path'], config['value'])
else:
raise NotImplementedError(
f"未知或者未实现的配置项类型:{config['val_type']}")
def _write_config(self, namespace: str, key: str, value):
if namespace == "" or namespace.startswith("internal_"):
# Built-in bot configuration: saved to config.yaml
self.context.config_helper.put_by_dot_str(key, value)
else:
update_config(namespace, key, value)

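The save_config method in helper.py above walks the posted configuration tree breadth-first: group nodes are expanded into the queue, and each item node is written out by its dot-separated path. A minimal standalone sketch of that traversal (flatten_items is a hypothetical helper for illustration, not part of the codebase):

```python
def flatten_items(post_config: list) -> list:
    """Breadth-first walk: expand group nodes, collect items that carry a path."""
    queue = list(post_config)
    items = []
    while queue:
        node = queue.pop(0)
        if node["config_type"] == "group":
            # Groups only contribute their children
            queue.extend(node["body"])
        elif node["config_type"] == "item" and node.get("path"):
            items.append((node["path"], node["value"]))
    return items

tree = [{
    "config_type": "group",
    "body": [
        {"config_type": "item", "path": "qqbot.enable", "value": True},
        {"config_type": "item", "path": "limit.time", "value": 60},
    ],
}]
print(flatten_items(tree))  # [('qqbot.enable', True), ('limit.time', 60)]
```

The real method additionally coerces each value according to val_type before writing it through _write_config.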
505
dashboard/server.py Normal file

@@ -0,0 +1,505 @@
import websockets
import json
import threading
import asyncio
import os
import uuid
import logging
import traceback
from . import DashBoardData, Response
from flask import Flask, request
from werkzeug.serving import make_server
from astrbot.persist.helper import dbConn
from type.types import Context
from typing import List
from SparkleLogging.utils.core import LogManager
from logging import Logger
from dashboard.helper import DashBoardHelper
from util.io import get_local_ip_addresses
from model.plugin.manager import PluginManager
from util.updator.astrbot_updator import AstrBotUpdator
logger: Logger = LogManager.GetLogger(log_name='astrbot')
class AstrBotDashBoard():
def __init__(self, context: Context, plugin_manager: PluginManager, astrbot_updator: AstrBotUpdator):
self.context = context
self.plugin_manager = plugin_manager
self.astrbot_updator = astrbot_updator
self.dashboard_data = DashBoardData()
self.dashboard_helper = DashBoardHelper(self.context, self.dashboard_data)
self.dashboard_be = Flask(__name__, static_folder="dist", static_url_path="/")
logging.getLogger('werkzeug').setLevel(logging.ERROR)
self.dashboard_be.logger.setLevel(logging.ERROR)
self.ws_clients = {} # remote_ip: ws
self.loop = asyncio.get_event_loop()
self.http_server_thread: threading.Thread = None
@self.dashboard_be.get("/")
def index():
# Serve the dashboard page
return self.dashboard_be.send_static_file("index.html")
@self.dashboard_be.get("/config")
def rt_config():
return self.dashboard_be.send_static_file("index.html")
@self.dashboard_be.get("/logs")
def rt_logs():
return self.dashboard_be.send_static_file("index.html")
@self.dashboard_be.get("/extension")
def rt_extension():
return self.dashboard_be.send_static_file("index.html")
@self.dashboard_be.get("/dashboard/default")
def rt_dashboard():
return self.dashboard_be.send_static_file("index.html")
@self.dashboard_be.post("/api/authenticate")
def authenticate():
username = self.context.base_config.get("dashboard_username", "")
password = self.context.base_config.get("dashboard_password", "")
# Get the request body
post_data = request.json
if post_data["username"] == username and post_data["password"] == password:
return Response(
status="success",
message="登录成功。",
data={
"token": "astrbot-test-token",
"username": username
}
).__dict__
else:
return Response(
status="error",
message="用户名或密码错误。",
data=None
).__dict__
@self.dashboard_be.post("/api/change_password")
def change_password():
password = self.context.base_config.get("dashboard_password", "")
# Get the request body
post_data = request.json
if post_data["password"] == password:
self.context.config_helper.put("dashboard_password", post_data["new_password"])
self.context.base_config['dashboard_password'] = post_data["new_password"]
return Response(
status="success",
message="修改成功。",
data=None
).__dict__
else:
return Response(
status="error",
message="原密码错误。",
data=None
).__dict__
@self.dashboard_be.get("/api/stats")
def get_stats():
db_inst = dbConn()
all_session = db_inst.get_all_stat_session()
last_24_message = db_inst.get_last_24h_stat_message()
# last_24_platform = db_inst.get_last_24h_stat_platform()
platforms = db_inst.get_platform_cnt_total()
self.dashboard_data.stats["session"] = []
self.dashboard_data.stats["session_total"] = db_inst.get_session_cnt_total(
)
self.dashboard_data.stats["message"] = last_24_message
self.dashboard_data.stats["message_total"] = db_inst.get_message_cnt_total(
)
self.dashboard_data.stats["platform"] = platforms
return Response(
status="success",
message="",
data=self.dashboard_data.stats
).__dict__
@self.dashboard_be.get("/api/configs")
def get_configs():
# If the params contain a namespace, return the configs under that namespace;
# otherwise return all configs.
namespace = "" if "namespace" not in request.args else request.args["namespace"]
conf = self._get_configs(namespace)
return Response(
status="success",
message="",
data=conf
).__dict__
@self.dashboard_be.get("/api/config_outline")
def get_config_outline():
outline = self._generate_outline()
return Response(
status="success",
message="",
data=outline
).__dict__
@self.dashboard_be.post("/api/configs")
def post_configs():
post_configs = request.json
try:
self.on_post_configs(post_configs)
return Response(
status="success",
message="保存成功~ 机器人将在 2 秒内重启以应用新的配置。",
data=None
).__dict__
except Exception as e:
return Response(
status="error",
message=e.__str__(),
data=self.dashboard_data.configs
).__dict__
@self.dashboard_be.get("/api/extensions")
def get_plugins():
_plugin_resp = []
for plugin in self.context.cached_plugins:
_p = plugin.metadata
_t = {
"name": _p.plugin_name,
"repo": '' if _p.repo is None else _p.repo,
"author": _p.author,
"desc": _p.desc,
"version": _p.version
}
_plugin_resp.append(_t)
return Response(
status="success",
message="",
data=_plugin_resp
).__dict__
@self.dashboard_be.post("/api/extensions/install")
def install_plugin():
post_data = request.json
repo_url = post_data["url"]
try:
logger.info(f"正在安装插件 {repo_url}")
self.plugin_manager.install_plugin(repo_url)
logger.info(f"安装插件 {repo_url} 成功")
return Response(
status="success",
message="安装成功~",
data=None
).__dict__
except Exception as e:
logger.error(f"/api/extensions/install: {traceback.format_exc()}")
return Response(
status="error",
message=e.__str__(),
data=None
).__dict__
@self.dashboard_be.post("/api/extensions/upload-install")
def upload_install_plugin():
try:
file = request.files['file']
logger.info(f"正在安装用户上传的插件 {file.filename}")
# save file to temp/
file_path = f"temp/{uuid.uuid4()}.zip"
file.save(file_path)
self.plugin_manager.install_plugin_from_file(file_path)
logger.info(f"安装插件 {file.filename} 成功")
return Response(
status="success",
message="安装成功~",
data=None
).__dict__
except Exception as e:
logger.error(f"/api/extensions/upload-install: {traceback.format_exc()}")
return Response(
status="error",
message=e.__str__(),
data=None
).__dict__
@self.dashboard_be.post("/api/extensions/uninstall")
def uninstall_plugin():
post_data = request.json
plugin_name = post_data["name"]
try:
logger.info(f"正在卸载插件 {plugin_name}")
self.plugin_manager.uninstall_plugin(plugin_name)
logger.info(f"卸载插件 {plugin_name} 成功")
return Response(
status="success",
message="卸载成功~",
data=None
).__dict__
except Exception as e:
logger.error(f"/api/extensions/uninstall: {traceback.format_exc()}")
return Response(
status="error",
message=e.__str__(),
data=None
).__dict__
@self.dashboard_be.post("/api/extensions/update")
def update_plugin():
post_data = request.json
plugin_name = post_data["name"]
try:
logger.info(f"正在更新插件 {plugin_name}")
self.plugin_manager.update_plugin(plugin_name)
logger.info(f"更新插件 {plugin_name} 成功")
return Response(
status="success",
message="更新成功~",
data=None
).__dict__
except Exception as e:
logger.error(f"/api/extensions/update: {traceback.format_exc()}")
return Response(
status="error",
message=e.__str__(),
data=None
).__dict__
@self.dashboard_be.post("/api/log")
def log():
for item in self.ws_clients:
try:
asyncio.run_coroutine_threadsafe(
self.ws_clients[item].send(request.data.decode()), self.loop).result()
except Exception:
pass  # Ignore clients whose connection has already gone away
return 'ok'
@self.dashboard_be.get("/api/check_update")
def get_update_info():
try:
ret = self.astrbot_updator.check_update(None, None)
return Response(
status="success",
message=str(ret),
data={
"has_new_version": ret is not None
}
).__dict__
except Exception as e:
logger.error(f"/api/check_update: {traceback.format_exc()}")
return Response(
status="error",
message=e.__str__(),
data=None
).__dict__
@self.dashboard_be.post("/api/update_project")
def update_project_api():
version = request.json['version']
if version == "" or version == "latest":
latest = True
version = ''
else:
latest = False
try:
self.astrbot_updator.update(latest=latest, version=version)
threading.Thread(target=self.astrbot_updator._reboot, args=(3, )).start()
return Response(
status="success",
message="更新成功,机器人将在 3 秒内重启。",
data=None
).__dict__
except Exception as e:
logger.error(f"/api/update_project: {traceback.format_exc()}")
return Response(
status="error",
message=e.__str__(),
data=None
).__dict__
@self.dashboard_be.get("/api/llm/list")
def llm_list():
ret = []
for llm in self.context.llms:
ret.append(llm.llm_name)
return Response(
status="success",
message="",
data=ret
).__dict__
@self.dashboard_be.get("/api/llm")
def llm():
text = request.args["text"]
llm = request.args["llm"]
for llm_ in self.context.llms:
if llm_.llm_name == llm:
try:
ret = asyncio.run_coroutine_threadsafe(
llm_.llm_instance.text_chat(text), self.loop).result()
return Response(
status="success",
message="",
data=ret
).__dict__
except Exception as e:
return Response(
status="error",
message=e.__str__(),
data=None
).__dict__
return Response(
status="error",
message="LLM not found.",
data=None
).__dict__
def on_post_configs(self, post_configs: dict):
try:
if 'base_config' in post_configs:
self.dashboard_helper.save_config(
post_configs['base_config'], namespace='') # 基础配置
self.dashboard_helper.save_config(
post_configs['config'], namespace=post_configs['namespace']) # 选定配置
self.dashboard_helper.parse_default_config(
self.dashboard_data, self.context.config_helper.get_all())
# Restart to apply the new configuration
threading.Thread(target=self.astrbot_updator._reboot,
args=(2, ), daemon=True).start()
except Exception as e:
raise e
def _get_configs(self, namespace: str):
if namespace == "":
ret = [self.dashboard_data.configs['data'][4],
self.dashboard_data.configs['data'][5],]
elif namespace == "internal_platform_qq_official":
ret = [self.dashboard_data.configs['data'][0],]
elif namespace == "internal_platform_qq_gocq":
ret = [self.dashboard_data.configs['data'][1],]
elif namespace == "internal_platform_general": # 全局平台配置
ret = [self.dashboard_data.configs['data'][2],]
elif namespace == "internal_llm_openai_official":
ret = [self.dashboard_data.configs['data'][3],]
elif namespace == "internal_platform_qq_aiocqhttp":
ret = [self.dashboard_data.configs['data'][6],]
else:
path = f"data/config/{namespace}.json"
if not os.path.exists(path):
return []
with open(path, "r", encoding="utf-8-sig") as f:
ret = [{
"config_type": "group",
"name": namespace + " 插件配置",
"description": "",
"body": list(json.load(f).values())
},]
return ret
def _generate_outline(self):
'''
Generate the configuration outline. There are currently two top-level categories:
platform (message platform settings) and llm (language model settings).
If a plugin's info function carries a plugin_type field, the plugin is grouped under the matching category; only the platform and llm types are supported for now.
'''
outline = [
{
"type": "platform",
"name": "配置通用消息平台",
"body": [
{
"title": "通用",
"desc": "通用平台配置",
"namespace": "internal_platform_general",
"tag": ""
},
{
"title": "QQ(官方)",
"desc": "QQ官方API。支持频道、群、私聊(需获得群权限)",
"namespace": "internal_platform_qq_official",
"tag": ""
},
{
"title": "QQ(nakuru)",
"desc": "适用于 go-cqhttp",
"namespace": "internal_platform_qq_gocq",
"tag": ""
},
{
"title": "QQ(aiocqhttp)",
"desc": "适用于 Lagrange, LLBot, Shamrock 等支持反向WS的协议实现。",
"namespace": "internal_platform_qq_aiocqhttp",
"tag": ""
}
]
},
{
"type": "llm",
"name": "配置 LLM",
"body": [
{
"title": "OpenAI Official",
"desc": "也支持使用官方接口的中转服务",
"namespace": "internal_llm_openai_official",
"tag": ""
}
]
}
]
for plugin in self.context.cached_plugins:
for item in outline:
if item['type'] == plugin.metadata.plugin_type:
item['body'].append({
"title": plugin.metadata.plugin_name,
"desc": plugin.metadata.desc,
"namespace": plugin.metadata.plugin_name,
"tag": plugin.metadata.plugin_name
})
return outline
async def get_log_history(self):
try:
with open("logs/astrbot/astrbot.log", "r", encoding="utf-8") as f:
return f.readlines()[-100:]
except Exception as e:
logger.warning(f"读取日志历史失败: {e.__str__()}")
return []
async def __handle_msg(self, websocket, path):
address = websocket.remote_address
self.ws_clients[address] = websocket
data = await self.get_log_history()
data = ''.join(data).replace('\n', '\r\n')
await websocket.send(data)
while True:
try:
msg = await websocket.recv()
except websockets.exceptions.ConnectionClosedError:
# logger.info(f"和 {address} 的 websocket 连接已断开")
del self.ws_clients[address]
break
except Exception as e:
# logger.info(f"和 {path} 的 websocket 连接发生了错误: {e.__str__()}")
del self.ws_clients[address]
break
async def ws_server(self):
ws_server = websockets.serve(self.__handle_msg, "0.0.0.0", 6186)
logger.info("WebSocket 服务器已启动。")
await ws_server
def http_server(self):
http_server = make_server(
'0.0.0.0', 6185, self.dashboard_be, threaded=True)
http_server.serve_forever()
def run_http_server(self):
self.http_server_thread = threading.Thread(target=self.http_server, daemon=True)
self.http_server_thread.start()
ip_address = get_local_ip_addresses()
ip_str = f"http://{ip_address}:6185"
logger.info(f"HTTP 服务器已启动,可访问: {ip_str} 等来登录可视化面板。")

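Several routes in server.py above (the /api/log broadcast and the /api/llm test endpoint) run coroutines on the bot's asyncio loop from Flask's worker thread via asyncio.run_coroutine_threadsafe. A minimal sketch of that cross-thread handoff, with deliver standing in for websocket.send or text_chat:

```python
import asyncio
import threading

# The bot's event loop runs in its own daemon thread
loop = asyncio.new_event_loop()
threading.Thread(target=loop.run_forever, daemon=True).start()

async def deliver(line: str) -> str:
    # Stand-in for websocket.send() or llm_instance.text_chat()
    return f"delivered: {line}"

# Called from a non-asyncio thread, e.g. inside a Flask request handler
future = asyncio.run_coroutine_threadsafe(deliver("log line"), loop)
print(future.result(timeout=5))  # delivered: log line

loop.call_soon_threadsafe(loop.stop)
```

Calling future.result() blocks the Flask thread until the coroutine finishes on the loop, which is why the /api/log handler wraps it in a try/except: a dead websocket client would otherwise raise into the HTTP request.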

@@ -1,56 +0,0 @@
# -*- coding: utf-8 -*-
from git.repo import Repo
import git
import os
# import zipfile
if __name__ == "__main__":
try:
# Check for the project folder
if not os.path.exists('QQChannelChatGPT'):
os.mkdir('QQChannelChatGPT')
project_path = os.path.join('QQChannelChatGPT')
try:
repo = Repo(project_path)
# Check the hash of the current commit
commit_hash = repo.head.object.hexsha
print("当前commit的hash值为: " + commit_hash)
# Fetch the commit list from the remote origin
origin = repo.remotes.origin
try:
origin.fetch()
except:
pass
# Get the commit hash of the remote repository
remote_commit_hash = origin.refs.master.commit.hexsha
print("https://github.com/Soulter/QQChannelChatGPT的commit的hash值为: " + remote_commit_hash)
# Compare the two commit hashes
if commit_hash != remote_commit_hash:
res = input("检测到项目有更新, 是否更新? (y/n): ")
if res == 'y':
repo.remotes.origin.pull()
print("项目更新完毕")
if res == 'n':
print("已取消更新")
except:
print("正在从https://github.com/Soulter/QQChannelChatGPT.git拉取项目...")
Repo.clone_from('https://github.com/Soulter/QQChannelChatGPT.git',to_path=project_path,branch='master')
print("项目拉取完毕")
print("【重要提醒】如果你没有Python版本>=3.8或者Git环境, 请先安装, 否则接下来的操作会造成闪退。")
print("【重要提醒】Python下载地址: https://npm.taobao.org/mirrors/python/3.9.7/python-3.9.7-amd64.exe ")
print("【重要提醒】Git下载地址: https://registry.npmmirror.com/-/binary/git-for-windows/v2.39.2.windows.1/Git-2.39.2-64-bit.exe")
print("【重要提醒】安装时, 请务必勾选“Add Python to PATH”选项。")
input("已确保安装了Python3.9+的版本,按下回车继续...")
print("正在安装依赖库")
os.system('python -m pip install -r QQChannelChatGPT\\requirements.txt')
print("依赖库安装完毕")
input("初次启动, 请先在QQChannelChatGPT/configs/config.yaml填写相关配置! 按任意键继续...")
finally:
print("正在启动项目...")
os.system('python QQChannelChatGPT\\main.py')
except BaseException as e:
print(e)
input("程序出错。可以截图发给QQ905617992.按下回车键退出...")

116
main.py

@@ -1,81 +1,55 @@
import threading
import os
import asyncio
import os, sys
import sys
import warnings
import traceback
from astrbot.bootstrap import AstrBotBootstrap
from SparkleLogging.utils.core import LogManager
from logging import Formatter
abs_path = os.path.dirname(os.path.realpath(sys.argv[0])) + '/'
warnings.filterwarnings("ignore")
logo_tmpl = """
___ _______.___________..______ .______ ______ .___________.
/ \ / | || _ \ | _ \ / __ \ | |
/ ^ \ | (----`---| |----`| |_) | | |_) | | | | | `---| |----`
/ /_\ \ \ \ | | | / | _ < | | | | | |
/ _____ \ .----) | | | | |\ \----.| |_) | | `--' | | |
/__/ \__\ |_______/ |__| | _| `._____||______/ \______/ |__|
"""
def main():
global logger
try:
import botpy, logging
# delete qqbotpy's logger
for handler in logging.root.handlers[:]:
logging.root.removeHandler(handler)
def main(loop, event):
import cores.qqbot.core as qqBot
import yaml
ymlfile = open(abs_path+"configs/config.yaml", 'r', encoding='utf-8')
cfg = yaml.safe_load(ymlfile)
bootstrap = AstrBotBootstrap()
asyncio.run(bootstrap.run())
except KeyboardInterrupt:
logger.info("AstrBot 已退出。")
if 'http_proxy' in cfg:
os.environ['HTTP_PROXY'] = cfg['http_proxy']
if 'https_proxy' in cfg:
os.environ['HTTPS_PROXY'] = cfg['https_proxy']
provider = provider_chooser(cfg)
print('[System] 当前语言模型提供商: ' + str(provider))
# Run the bot
qqBot.initBot(cfg, provider)
# Language model provider chooser
# Currently supports the official OpenAI API and reverse-engineered libraries
def provider_chooser(cfg):
l = []
if 'rev_ChatGPT' in cfg and cfg['rev_ChatGPT']['enable']:
l.append('rev_chatgpt')
if 'rev_ernie' in cfg and cfg['rev_ernie']['enable']:
l.append('rev_ernie')
if 'rev_edgegpt' in cfg and cfg['rev_edgegpt']['enable']:
l.append('rev_edgegpt')
if 'openai' in cfg and cfg['openai']['key'] != None and len(cfg['openai']['key'])>0:
l.append('openai_official')
return l
except BaseException as e:
logger.error(traceback.format_exc())
def check_env():
if not (sys.version_info.major == 3 and sys.version_info.minor >= 8):
print("请使用Python3.8运行本项目")
input("按任意键退出...")
if not (sys.version_info.major == 3 and sys.version_info.minor >= 9):
logger.error("请使用 Python3.9+ 运行本项目")
exit()
# try:
# print("检查依赖库中...")
# if os.path.exists('requirements.txt'):
# os.system("pip3 install -r requirements.txt")
# elif os.path.exists('QQChannelChatGPT'+ os.sep +'requirements.txt'):
# os.system('pip3 install -r QQChannelChatGPT'+ os.sep +'requirements.txt')
# os.system("clear")
# print("安装依赖库完毕...")
# except BaseException as e:
# print("安装依赖库失败,请手动安装依赖库。")
# print(e)
# input("按任意键退出...")
# exit()
os.makedirs("data/config", exist_ok=True)
os.makedirs("temp", exist_ok=True)
# Check the keys in the config
with open(abs_path+"configs/config.yaml", 'r', encoding='utf-8') as ymlfile:
import yaml
cfg = yaml.safe_load(ymlfile)
if cfg['openai']['key'] == '' or cfg['openai']['key'] == None:
print("请先在configs/config.yaml下添加一个可用的OpenAI Key。详情请前往https://beta.openai.com/account/api-keys")
if cfg['qqbot']['appid'] == '' or cfg['qqbot']['token'] == '' or cfg['qqbot']['appid'] == None or cfg['qqbot']['token'] == None:
print("请先在configs/config.yaml下完善appid和token令牌(在https://q.qq.com/上注册一个QQ机器人即可获得)")
def get_platform():
import platform
sys_platform = platform.platform().lower()
if "windows" in sys_platform:
return "win"
elif "macos" in sys_platform:
return "mac"
elif "linux" in sys_platform:
return "linux"
else:
return "other"
if __name__ == "__main__":
check_env()
bot_event = threading.Event()
loop = asyncio.get_event_loop()
main(loop, bot_event)
logger = LogManager.GetLogger(
log_name='astrbot',
out_to_console=True,
custom_formatter=Formatter('[%(asctime)s| %(name)s - %(levelname)s|%(filename)s:%(lineno)d]: %(message)s', datefmt="%H:%M:%S")
)
logger.info(logo_tmpl)
main()

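The legacy provider-chooser removed from main.py above derives the list of enabled LLM providers from config flags. An equivalent sketch using defensive dict access, so missing config sections simply mean "disabled" (choose_providers is illustrative, not the project's API):

```python
def choose_providers(cfg: dict) -> list:
    """Collect enabled LLM providers from the config, in a fixed priority order."""
    providers = []
    # Reverse-engineered providers are gated by an explicit enable flag
    for key, name in (("rev_ChatGPT", "rev_chatgpt"),
                      ("rev_ernie", "rev_ernie"),
                      ("rev_edgegpt", "rev_edgegpt")):
        if cfg.get(key, {}).get("enable"):
            providers.append(name)
    # The official OpenAI provider is enabled by a non-empty key list
    if cfg.get("openai", {}).get("key"):
        providers.append("openai_official")
    return providers

print(choose_providers({"openai": {"key": ["sk-test"]}}))  # ['openai_official']
```

Using .get() with a default avoids the KeyError the original code would raise on a config file that omits one of the provider sections entirely.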

@@ -1,194 +0,0 @@
import abc
import json
import git.exc
from git.repo import Repo
import os
import sys
import requests
from model.provider.provider import Provider
import json
PLATFORM_QQCHAN = 'qqchan'
PLATFORM_GOCQ = 'gocq'
class Command:
def __init__(self, provider: Provider):
self.provider = provider
@abc.abstractmethod
def check_command(self, message, role, platform):
if self.command_start_with(message, "nick"):
return True, self.set_nick(message, platform)
return False, None
'''
Store the bot's nickname.
'''
def set_nick(self, message: str, platform: str):
if platform == PLATFORM_GOCQ:
nick = message.split(" ")[1]
self.general_command_storer("nick_qq", nick)
return True, f"设置成功!现在你可以叫我{nick}来提问我啦~", "nick"
elif platform == PLATFORM_QQCHAN:
nick = message.split(" ")[2]
return False, "QQ频道平台不支持为机器人设置昵称。", "nick"
"""
Store command results in cmd_config.json.
"""
def general_command_storer(self, key, value):
if not os.path.exists("cmd_config.json"):
config = {}
else:
with open("cmd_config.json", "r", encoding="utf-8") as f:
config = json.load(f)
config[key] = value
with open("cmd_config.json", "w", encoding="utf-8") as f:
json.dump(config, f, indent=4, ensure_ascii=False)
f.flush()
def general_commands(self):
return {
"help": "帮助",
"keyword": "设置关键词/关键指令回复",
"update": "更新面板",
"update latest": "更新到最新版本",
"update r": "重启程序",
"reset": "重置会话",
"nick": "设置机器人昵称",
"/bing": "切换到bing模型",
"/gpt": "切换到OpenAI ChatGPT API",
"/revgpt": "切换到网页版ChatGPT",
"/bing 问题": "临时使用一次bing模型进行会话",
"/gpt 问题": "临时使用一次OpenAI ChatGPT API进行会话",
"/revgpt 问题": "临时使用一次网页版ChatGPT进行会话",
}
def help_messager(self, commands: dict):
try:
resp = requests.get("https://soulter.top/channelbot/notice.json").text
notice = json.loads(resp)["notice"]
except BaseException as e:
notice = ""
msg = "Github项目名QQChannelChatGPT, 有问题提交issue, 欢迎Star\n【指令列表】\n"
for key, value in commands.items():
msg += key + ": " + value + "\n"
msg += notice
return msg
# 接受可变参数
def command_start_with(self, message: str, *args):
for arg in args:
if message.startswith(arg) or message.startswith('/'+arg):
return True
return False
def keyword(self, message: str, role: str):
if role != "admin":
return True, "你没有权限使用该指令", "keyword"
if len(message.split(" ")) != 3:
return True, "【设置关键词/关键指令回复】示例:\nkeyword hi 你好\n当发送hi的时候会回复你好\nkeyword /hi 你好\n当发送/hi时会回复你好", "keyword"
l = message.split(" ")
try:
if os.path.exists("keyword.json"):
with open("keyword.json", "r", encoding="utf-8") as f:
keyword = json.load(f)
keyword[l[1]] = l[2]
else:
keyword = {l[1]: l[2]}
with open("keyword.json", "w", encoding="utf-8") as f:
json.dump(keyword, f, ensure_ascii=False, indent=4)
f.flush()
return True, "设置成功: "+l[1]+" -> "+l[2], "keyword"
except BaseException as e:
return False, "设置失败: "+str(e), "keyword"
def update(self, message: str, role: str):
if role != "admin":
return True, "你没有权限使用该指令", "keyword"
l = message.split(" ")
if len(l) == 1:
# 得到本地版本号和最新版本号
try:
repo = Repo()
except git.exc.InvalidGitRepositoryError:
repo = Repo(path="QQChannelChatGPT")
now_commit = repo.head.commit
# 得到远程3条commit列表, 包含commit信息
origin = repo.remotes.origin
origin.fetch()
commits = list(repo.iter_commits('master', max_count=3))
commits_log = ''
index = 1
for commit in commits:
if commit.message.endswith("\n"):
commits_log += f"[{index}] {commit.message}-----------\n"
else:
commits_log += f"[{index}] {commit.message}\n-----------\n"
index+=1
remote_commit_hash = origin.refs.master.commit.hexsha[:6]
return True, f"当前版本: {now_commit.hexsha[:6]}\n最新版本: {remote_commit_hash}\n\n3条commit(非最新):\n{str(commits_log)}\n使用update latest更新至最新版本\n", "update"
else:
if l[1] == "latest":
pash_tag = ""
try:
try:
repo = Repo()
except git.exc.InvalidGitRepositoryError:
repo = Repo(path="QQChannelChatGPT")
pash_tag = "QQChannelChatGPT"+os.sep
repo.remotes.origin.pull()
# 检查是否是windows环境
# if platform.system().lower() == "windows":
# if os.path.exists("launcher.exe"):
# os.system("start launcher.exe")
# elif os.path.exists("QQChannelChatGPT\\main.py"):
# os.system("start python QQChannelChatGPT\\main.py")
# else:
# return True, "更新成功,未发现启动项,因此需要手动重启程序。"
# exit()
# else:
# py = sys.executable
# os.execl(py, py, *sys.argv)
return True, "更新成功~若需重启,请输入 update r。(重启指令不返回任何确认信息)", "update"
except BaseException as e:
return False, "更新失败: "+str(e), "update"
if l[1] == "r":
py = sys.executable
os.execl(py, py, *sys.argv)
def reset(self):
return False
def set(self):
return False
def unset(self):
return False
def key(self):
return False
def help(self):
return False
def status(self):
return False
def token(self):
return False
def his(self):
return False
def draw(self):
return False
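The dispatch in `check_command` hinges on `command_start_with`, which matches both the bare command and its slash-prefixed form. A minimal standalone sketch of that matching rule (re-implemented here for illustration, mirroring the logic above):

```python
def command_start_with(message: str, *prefixes: str) -> bool:
    # A message hits a command if it starts with the bare prefix
    # or the slash-prefixed form, e.g. "reset ..." or "/reset ...".
    return any(message.startswith(p) or message.startswith('/' + p)
               for p in prefixes)

print(command_start_with("reset all", "reset"))  # True
print(command_start_with("/his 2", "his"))       # True
print(command_start_with("hello", "his"))        # False
```

Note that because this is a prefix test, registration order matters: a short command like `key` would also match `keyword` messages unless `keyword` is checked first, which is why the handlers above test `keyword` before `key`.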


@@ -1,194 +0,0 @@
from model.command.command import Command
from model.provider.provider_openai_official import ProviderOpenAIOfficial
from cores.qqbot.personality import personalities
class CommandOpenAIOfficial(Command):
def __init__(self, provider: ProviderOpenAIOfficial):
self.provider = provider
def check_command(self, message: str, session_id: str, user_name: str, role, platform: str):
hit, res = super().check_command(message, role, platform)
if hit:
return True, res
if self.command_start_with(message, "reset", "重置"):
return True, self.reset(session_id)
elif self.command_start_with(message, "his", "历史"):
return True, self.his(message, session_id, user_name)
elif self.command_start_with(message, "token"):
return True, self.token(session_id)
elif self.command_start_with(message, "gpt"):
return True, self.gpt()
elif self.command_start_with(message, "status"):
return True, self.status()
elif self.command_start_with(message, "count"):
return True, self.count()
elif self.command_start_with(message, "help", "帮助"):
return True, self.help()
elif self.command_start_with(message, "unset"):
return True, self.unset(session_id)
elif self.command_start_with(message, "set"):
return True, self.set(message, session_id)
elif self.command_start_with(message, "update"):
return True, self.update(message, role)
elif self.command_start_with(message, ""):
return True, self.draw(message)
elif self.command_start_with(message, "keyword"):
return True, self.keyword(message, role)
elif self.command_start_with(message, "key"):
return True, self.key(message, user_name)
if self.command_start_with(message, "/"):
return True, (False, "未知指令", "unknown_command")
return False, None
def help(self):
commands = super().general_commands()
commands[''] = '画画'
commands['key'] = '添加OpenAI key'
commands['set'] = '人格设置面板'
commands['gpt'] = '查看gpt配置信息'
commands['status'] = '查看key使用状态'
commands['token'] = '查看本轮会话token'
return True, super().help_messager(commands), "help"
def reset(self, session_id: str):
self.provider.forget(session_id)
return True, "重置成功", "reset"
def his(self, message: str, session_id: str, name: str):
#分页每页5条
msg = ''
size_per_page = 3
page = 1
if message[4:]:
page = int(message[4:])
# 检查是否有过历史记录
if session_id not in self.provider.session_dict:
msg = f"历史记录为空"
return True, msg, "his"
l = self.provider.session_dict[session_id]
max_page = len(l)//size_per_page + 1 if len(l)%size_per_page != 0 else len(l)//size_per_page
p = self.provider.get_prompts_by_cache_list(self.provider.session_dict[session_id], divide=True, paging=True, size=size_per_page, page=page)
return True, f"历史记录如下:\n{p}\n{page}页 | 共{max_page}\n*输入/his 2跳转到第2页", "his"
def token(self, session_id: str):
return True, f"会话的token数: {self.provider.get_user_usage_tokens(self.provider.session_dict[session_id])}\n系统最大缓存token数: {self.provider.max_tokens}", "token"
def gpt(self):
return True, f"OpenAI GPT配置:\n {self.provider.chatGPT_configs}", "gpt"
def status(self):
chatgpt_cfg_str = ""
key_stat = self.provider.get_key_stat()
index = 1
max = 9000000
gg_count = 0
total = 0
tag = ''
for key in key_stat.keys():
sponsor = ''
total += key_stat[key]['used']
if key_stat[key]['exceed']:
gg_count += 1
continue
if 'sponsor' in key_stat[key]:
sponsor = key_stat[key]['sponsor']
chatgpt_cfg_str += f" |-{index}: {key_stat[key]['used']}/{max} {sponsor}赞助{tag}\n"
index += 1
return True, f"⭐使用情况({str(gg_count)}个已用):\n{chatgpt_cfg_str}⏰全频道已用{total}tokens", "status"
def count(self):
guild_count, guild_msg_count, guild_direct_msg_count, session_count = self.provider.get_stat()
return True, f"当前会话数: {len(self.provider.session_dict)}\n共有频道数: {guild_count} \n共有消息数: {guild_msg_count}\n私信数: {guild_direct_msg_count}\n历史会话数: {session_count}", "count"
def key(self, message: str, user_name: str):
l = message.split(" ")
if len(l) == 1:
msg = "感谢您赞助 key!key 为官方 API 使用。请以以下格式赞助:\n/key xxxxx"
return True, msg, "key"
key = l[1]
if self.provider.check_key(key):
self.provider.append_key(key, user_name)
return True, f"*★,°*:.☆( ̄▽ ̄)/$:*.°★* 。\n该Key被验证为有效。感谢{user_name}赞助~", "key"
else:
return True, "该Key被验证为无效。也许是输入错误了,请检查后重试。", "key"
def unset(self, session_id: str):
self.provider.now_personality = {}
self.provider.forget(session_id)
return True, "已清除人格并重置历史记录。", "unset"
def set(self, message: str, session_id: str):
l = message.split(" ")
if len(l) == 1:
return True, f"【由Github项目QQChannelChatGPT支持】\n\n【人格文本由PlexPt开源项目awesome-chatgpt-prompts-zh提供】\n\n这个是人格设置指令。\n设置人格: \n/set 人格名。例如/set 编剧\n人格列表: /set list\n人格详细信息: /set view 人格名\n自定义人格: /set 人格文本\n清除人格: /unset\n【当前人格】: {str(self.provider.now_personality)}", "set"
elif l[1] == "list":
msg = "人格列表:\n"
for key in personalities.keys():
msg += f" |-{key}\n"
msg += '\n\n*输入/set view 人格名查看人格详细信息'
msg += '\n*不定时更新人格库,请及时更新本项目。'
return True, msg, "set"
elif l[1] == "view":
if len(l) == 2:
return True, "请输入/set view 人格名", "set"
ps = l[2].strip()
if ps in personalities:
msg = f"人格{ps}的详细信息:\n"
msg += f"{personalities[ps]}\n"
else:
msg = f"人格{ps}不存在"
return True, msg, "set"
else:
ps = l[1].strip()
if ps in personalities:
self.provider.now_personality = {
'name': ps,
'prompt': personalities[ps]
}
self.provider.session_dict[session_id] = []
new_record = {
"user": {
"role": "system",
"content": personalities[ps],
},
'usage_tokens': 0,
'single-tokens': 0
}
self.provider.session_dict[session_id].append(new_record)
return True, f"人格{ps}已设置.", "set"
else:
self.provider.now_personality = {
'name': '自定义人格',
'prompt': ps
}
new_record = {
"user": {
"role": "system",
"content": ps,
},
'usage_tokens': 0,
'single-tokens': 0
}
self.provider.session_dict[session_id] = []
self.provider.session_dict[session_id].append(new_record)
return True, f"自定义人格已设置。 \n人格信息: {ps}", "set"
def draw(self, message):
try:
# 画图模式传回3个参数
img_url = self.provider.image_chat(message)
return True, img_url, "draw"
except Exception as e:
if 'exceeded' in str(e):
return False, f"OpenAI API错误。原因:\n{str(e)}\n已超额。可自己搭建一个机器人(Github仓库QQChannelChatGPT)", "draw"
return False, f"图片生成失败: {e}", "draw"
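The page math in `his` above (`len(l)//size_per_page + 1 if len(l)%size_per_page != 0 else len(l)//size_per_page`) is a ceiling division: a partial trailing page still counts as a page. A compact standalone sketch of the same rule:

```python
def page_count(total: int, per_page: int) -> int:
    # Same rule as the `his` command: integer division, plus one
    # extra page when there is a partial remainder.
    return total // per_page + (1 if total % per_page else 0)

print(page_count(7, 3))  # 3 pages: 3 + 3 + 1
print(page_count(6, 3))  # 2 pages: exact fit
print(page_count(0, 3))  # 0 pages: empty history
```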


@@ -1,29 +0,0 @@
from model.command.command import Command
from model.provider.provider_rev_chatgpt import ProviderRevChatGPT
class CommandRevChatGPT(Command):
def __init__(self, provider: ProviderRevChatGPT):
self.provider = provider
def check_command(self, message: str, role, platform: str):
hit, res = super().check_command(message, role, platform)
if hit:
return True, res
if self.command_start_with(message, "help", "帮助"):
return True, self.help()
elif self.command_start_with(message, "reset"):
return True, self.reset()
elif self.command_start_with(message, "update"):
return True, self.update(message, role)
elif self.command_start_with(message, "keyword"):
return True, self.keyword(message, role)
if self.command_start_with(message, "/"):
return True, (False, "未知指令", "unknown_command")
return False, None
def reset(self):
return False, "此功能暂未开放", "reset"
def help(self):
return True, super().help_messager(super().general_commands()), "help"


@@ -1,35 +0,0 @@
from model.command.command import Command
from model.provider.provider_rev_edgegpt import ProviderRevEdgeGPT
import asyncio
class CommandRevEdgeGPT(Command):
def __init__(self, provider: ProviderRevEdgeGPT):
self.provider = provider
def check_command(self, message: str, loop, role, platform: str):
hit, res = super().check_command(message, role, platform)
if hit:
return True, res
if self.command_start_with(message, "reset"):
return True, self.reset(loop)
elif self.command_start_with(message, "help"):
return True, self.help()
elif self.command_start_with(message, "update"):
return True, self.update(message, role)
elif self.command_start_with(message, "keyword"):
return True, self.keyword(message, role)
if self.command_start_with(message, "/"):
return True, (False, "未知指令", "unknown_command")
return False, None
def reset(self, loop):
res = asyncio.run_coroutine_threadsafe(self.provider.forget(), loop).result()
print(res)
if res:
return res, "重置成功", "reset"
else:
return res, "重置失败", "reset"
def help(self):
return True, super().help_messager(super().general_commands()), "help"


@@ -0,0 +1,233 @@
import aiohttp
from model.command.manager import CommandManager
from model.plugin.manager import PluginManager
from type.message_event import AstrMessageEvent
from type.command import CommandResult
from type.types import Context
from type.config import VERSION
from SparkleLogging.utils.core import LogManager
from logging import Logger
from nakuru.entities.components import Image
logger: Logger = LogManager.GetLogger(log_name='astrbot')
class InternalCommandHandler:
def __init__(self, manager: CommandManager, plugin_manager: PluginManager) -> None:
self.manager = manager
self.plugin_manager = plugin_manager
self.manager.register("help", "查看帮助", 10, self.help)
self.manager.register("wake", "设置机器人唤醒词", 10, self.set_nick)
self.manager.register("update", "更新 AstrBot", 10, self.update)
self.manager.register("plugin", "插件管理", 10, self.plugin)
self.manager.register("reboot", "重启 AstrBot", 10, self.reboot)
self.manager.register("websearch", "网页搜索开关", 10, self.web_search)
self.manager.register("t2i", "文本转图片开关", 10, self.t2i_toggle)
self.manager.register("myid", "获取你在此平台上的ID", 10, self.myid)
def set_nick(self, message: AstrMessageEvent, context: Context):
message_str = message.message_str
if message.role != "admin":
return CommandResult().message("你没有权限使用该指令。")
l = message_str.split(" ")
if len(l) == 1:
return CommandResult().message("设置机器人唤醒词,支持多唤醒词。以唤醒词开头的消息会唤醒机器人处理,起到 @ 的效果。\n示例:wake 昵称1 昵称2 昵称3")
nick = l[1:]
context.config_helper.put("nick_qq", nick)
context.nick = tuple(nick)
return CommandResult(
hit=True,
success=True,
message_chain=f"已经成功将唤醒词设定为 {nick}",
)
def update(self, message: AstrMessageEvent, context: Context):
tokens = self.manager.command_parser.parse(message.message_str)
if message.role != "admin":
return CommandResult(
hit=True,
success=False,
message_chain="你没有权限使用该指令",
)
update_info = context.updator.check_update(None, None)
if tokens.len == 1:
ret = ""
if not update_info:
ret = f"当前已经是最新版本 v{VERSION}"
else:
ret = f"发现新版本 {update_info.version},更新内容如下:\n---\n{update_info.body}\n---\n- 使用 /update latest 更新到最新版本。\n- 使用 /update vX.X.X 更新到指定版本。"
return CommandResult(
hit=True,
success=False,
message_chain=ret,
)
else:
if tokens.get(1) == "latest":
try:
context.updator.update()
return CommandResult().message(f"已经成功更新到最新版本 v{update_info.version}。要应用更新,请重启 AstrBot。输入 /reboot 即可重启")
except BaseException as e:
return CommandResult().message(f"更新失败。原因:{str(e)}")
elif tokens.get(1).startswith("v"):
try:
context.updator.update(version=tokens.get(1))
return CommandResult().message(f"已经成功更新到版本 v{tokens.get(1)}。要应用更新,请重启 AstrBot。输入 /reboot 即可重启")
except BaseException as e:
return CommandResult().message(f"更新失败。原因:{str(e)}")
else:
return CommandResult().message("update: 参数错误。")
def reboot(self, message: AstrMessageEvent, context: Context):
if message.role != "admin":
return CommandResult(
hit=True,
success=False,
message_chain="你没有权限使用该指令",
)
context.updator._reboot(5)
return CommandResult(
hit=True,
success=True,
message_chain="AstrBot 将在 5s 后重启。",
)
def plugin(self, message: AstrMessageEvent, context: Context):
tokens = self.manager.command_parser.parse(message.message_str)
if tokens.len == 1:
ret = "# 插件指令面板 \n- 安装插件: `plugin i 插件Github地址`\n- 卸载插件: `plugin d 插件名`\n- 查看插件列表:`plugin l`\n - 更新插件: `plugin u 插件名`\n"
return CommandResult().message(ret)
if tokens.get(1) == "l":
plugin_list_info = ""
for plugin in context.cached_plugins:
plugin_list_info += f"- `{plugin.metadata.plugin_name}` By {plugin.metadata.author}: {plugin.metadata.desc}\n"
if plugin_list_info.strip() == "":
return CommandResult().message("plugin l: 没有找到插件。")
return CommandResult().message(plugin_list_info)
elif tokens.get(1) == "d":
if message.role != "admin":
return CommandResult().message("plugin d: 你没有权限使用该指令。")
if tokens.len == 2:
return CommandResult().message("plugin d: 请指定要卸载的插件名。")
plugin_name = tokens.get(2)
try:
self.plugin_manager.uninstall_plugin(plugin_name)
except BaseException as e:
return CommandResult().message(f"plugin d: 卸载插件失败。原因:{str(e)}")
return CommandResult().message(f"plugin d: 已经成功卸载插件 {plugin_name}")
elif tokens.get(1) == "i":
if message.role != "admin":
return CommandResult().message("plugin i: 你没有权限使用该指令。")
if tokens.len == 2:
return CommandResult().message("plugin i: 请指定要安装的插件的 Github 地址,或者前往可视化面板安装。")
plugin_url = tokens.get(2)
try:
self.plugin_manager.install_plugin(plugin_url)
except BaseException as e:
return CommandResult().message(f"plugin i: 安装插件失败。原因:{str(e)}")
return CommandResult().message("plugin i: 已经成功安装插件。")
elif tokens.get(1) == "u":
if message.role != "admin":
return CommandResult().message("plugin u: 你没有权限使用该指令。")
if tokens.len == 2:
return CommandResult().message("plugin u: 请指定要更新的插件名。")
plugin_name = tokens.get(2)
try:
self.plugin_manager.update_plugin(plugin_name)
except BaseException as e:
return CommandResult().message(f"plugin u: 更新插件失败。原因:{str(e)}")
return CommandResult().message(f"plugin u: 已经成功更新插件 {plugin_name}")
return CommandResult().message("plugin: 参数错误。")
async def help(self, message: AstrMessageEvent, context: Context):
notice = ""
try:
async with aiohttp.ClientSession() as session:
async with session.get("https://soulter.top/channelbot/notice.json") as resp:
notice = (await resp.json())["notice"]
except BaseException as e:
logger.warning("An error occurred while fetching astrbot notice. Never mind, it's not important.")
msg = "# Help Center\n## 指令列表\n"
for key, value in self.manager.commands_handler.items():
if value.plugin_metadata:
msg += f"- `{key}` ({value.plugin_metadata.plugin_name}): {value.description}\n"
else: msg += f"- `{key}`: {value.description}\n"
# plugins
if context.cached_plugins != None:
plugin_list_info = ""
for plugin in context.cached_plugins:
plugin_list_info += f"- `{plugin.metadata.plugin_name}` {plugin.metadata.desc}\n"
if plugin_list_info.strip() != "":
msg += "\n## 插件列表\n> 使用plugin v 插件名 查看插件帮助\n"
msg += plugin_list_info
msg += notice
return CommandResult().message(msg)
def web_search(self, message: AstrMessageEvent, context: Context):
l = message.message_str.split(' ')
if len(l) == 1:
return CommandResult(
hit=True,
success=True,
message_chain=f"网页搜索功能当前状态: {context.web_search}",
)
elif l[1] == 'on':
context.web_search = True
return CommandResult(
hit=True,
success=True,
message_chain="已开启网页搜索",
)
elif l[1] == 'off':
context.web_search = False
return CommandResult(
hit=True,
success=True,
message_chain="已关闭网页搜索",
)
else:
return CommandResult(
hit=True,
success=False,
message_chain="参数错误",
)
def t2i_toggle(self, message: AstrMessageEvent, context: Context):
p = context.config_helper.get("qq_pic_mode", True)
if p:
context.config_helper.put("qq_pic_mode", False)
return CommandResult(
hit=True,
success=True,
message_chain="已关闭文本转图片模式。",
)
context.config_helper.put("qq_pic_mode", True)
return CommandResult(
hit=True,
success=True,
message_chain="已开启文本转图片模式。",
)
def myid(self, message: AstrMessageEvent, context: Context):
try:
user_id = str(message.message_obj.sender.user_id)
return CommandResult(
hit=True,
success=True,
message_chain=f"你在此平台上的ID:{user_id}",
)
except BaseException as e:
return CommandResult(
hit=True,
success=False,
message_chain=f"在 {message.platform} 上获取你的ID失败。原因:{str(e)}",
)
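The `t2i_toggle` handler above is a read-flip-write on a persisted boolean. The pattern can be sketched standalone with a plain dict standing in for `context.config_helper` (a simplification; the real helper persists to disk):

```python
def toggle(config: dict, key: str, default: bool = True) -> bool:
    # Mirror the t2i handler: read the current value (falling back
    # to the default), then store and return the inverse.
    new_value = not config.get(key, default)
    config[key] = new_value
    return new_value

cfg = {}
print(toggle(cfg, "qq_pic_mode"))  # False: default True flipped off
print(toggle(cfg, "qq_pic_mode"))  # True: flipped back on
```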

model/command/manager.py Normal file

@@ -0,0 +1,106 @@
import heapq
import inspect
import traceback
from typing import Dict
from type.types import Context
from type.plugin import PluginMetadata
from type.message_event import AstrMessageEvent
from type.command import CommandResult
from type.register import RegisteredPlugins
from model.command.parser import CommandParser
from model.plugin.command import PluginCommandBridge
from SparkleLogging.utils.core import LogManager
from logging import Logger
from dataclasses import dataclass
logger: Logger = LogManager.GetLogger(log_name='astrbot')
@dataclass
class CommandMetadata():
inner_command: bool
plugin_metadata: PluginMetadata
handler: callable
description: str
class CommandManager():
def __init__(self):
self.commands = []
self.commands_handler: Dict[str, CommandMetadata] = {}
self.command_parser = CommandParser()
def register(self,
command: str,
description: str,
priority: int,
handler: callable,
plugin_metadata: PluginMetadata = None,
):
'''
优先级越高,越先被处理。
'''
if command in self.commands_handler:
raise ValueError(f"Command {command} already exists.")
if not handler:
raise ValueError(f"Handler of {command} is None.")
heapq.heappush(self.commands, (-priority, command))
self.commands_handler[command] = CommandMetadata(
inner_command=plugin_metadata == None,
plugin_metadata=plugin_metadata,
handler=handler,
description=description
)
if plugin_metadata:
logger.info(f"已注册 {plugin_metadata.author}/{plugin_metadata.plugin_name} 的指令 {command}")
else:
logger.info(f"已注册指令 {command}")
def register_from_pcb(self, pcb: PluginCommandBridge):
for request in pcb.plugin_commands_waitlist:
plugin = None
for registered_plugin in pcb.cached_plugins:
if registered_plugin.metadata.plugin_name == request.plugin_name:
plugin = registered_plugin
break
if not plugin:
logger.warning(f"插件 {request.plugin_name} 未找到,无法注册指令 {request.command_name}")
continue
self.register(request.command_name, request.description, request.priority, request.handler, plugin.metadata)
pcb.plugin_commands_waitlist = []
async def scan_command(self, message_event: AstrMessageEvent, context: Context) -> CommandResult:
message_str = message_event.message_str
for _, command in self.commands:
if message_str.startswith(command):
logger.info(f"触发 {command} 指令。")
command_result = await self.execute_handler(command, message_event, context)
return command_result
async def execute_handler(self,
command: str,
message_event: AstrMessageEvent,
context: Context) -> CommandResult:
command_metadata = self.commands_handler[command]
handler = command_metadata.handler
# call handler
try:
if inspect.iscoroutinefunction(handler):
command_result = await handler(message_event, context)
else:
command_result = handler(message_event, context)
if not isinstance(command_result, CommandResult):
raise ValueError(f"Command {command} handler should return CommandResult.")
context.metrics_uploader.command_stats[command] += 1
return command_result
except BaseException as e:
logger.error(traceback.format_exc())
if not command_metadata.inner_command:
text = f"执行 {command}/({command_metadata.plugin_metadata.plugin_name} By {command_metadata.plugin_metadata.author}) 指令时发生了异常。{e}"
logger.error(text)
else:
text = f"执行 {command} 指令时发生了异常。{e}"
logger.error(text)
return CommandResult().message(text)
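`CommandManager.register` pushes `(-priority, command)` so that Python's min-heap behaves as a max-heap over priority. One subtlety worth noting: `scan_command` iterates the heap's backing list directly, which is heap-ordered but not fully sorted; only popping yields strict priority order. A standalone sketch with hypothetical entries:

```python
import heapq

# Negated priority turns heapq's min-heap into a max-heap.
commands = []
for priority, name in [(10, "help"), (50, "plugin"), (10, "reset")]:
    heapq.heappush(commands, (-priority, name))

# Popping yields strict priority order; ties break alphabetically
# because tuples compare element-wise.
ordered = [heapq.heappop(commands)[1] for _ in range(len(commands))]
print(ordered)  # ['plugin', 'help', 'reset']
```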


@@ -0,0 +1,186 @@
from model.command.manager import CommandManager
from type.message_event import AstrMessageEvent
from type.command import CommandResult
from type.types import Context
from SparkleLogging.utils.core import LogManager
from logging import Logger
from nakuru.entities.components import Image
from model.provider.openai_official import ProviderOpenAIOfficial, MODELS
from util.personality import personalities
from util.io import download_image_by_url
logger: Logger = LogManager.GetLogger(log_name='astrbot')
class OpenAIOfficialCommandHandler():
def __init__(self, manager: CommandManager) -> None:
self.manager = manager
self.provider = None
self.manager.register("reset", "重置会话", 10, self.reset)
self.manager.register("his", "查看历史记录", 10, self.his)
self.manager.register("status", "查看当前状态", 10, self.status)
self.manager.register("switch", "切换账号", 10, self.switch)
self.manager.register("unset", "清除个性化人格设置", 10, self.unset)
self.manager.register("set", "设置个性化人格", 10, self.set)
self.manager.register("draw", "调用 DallE 模型画图", 10, self.draw)
self.manager.register("model", "切换模型", 10, self.model)
self.manager.register("", "调用 DallE 模型画图", 10, self.draw)
def set_provider(self, provider):
self.provider = provider
async def reset(self, message: AstrMessageEvent, context: Context):
tokens = self.manager.command_parser.parse(message.message_str)
if tokens.len == 1:
await self.provider.forget(message.session_id, keep_system_prompt=True)
return CommandResult().message("重置成功")
elif tokens.get(1) == 'p':
await self.provider.forget(message.session_id)
return CommandResult().message("重置成功")
async def model(self, message: AstrMessageEvent, context: Context):
tokens = self.manager.command_parser.parse(message.message_str)
if tokens.len == 1:
ret = await self._print_models()
return CommandResult().message(ret)
model = tokens.get(1)
if model.isdigit():
try:
models = await self.provider.get_models()
except BaseException as e:
logger.error(f"获取模型列表失败: {str(e)}")
return CommandResult().message("获取模型列表失败,无法使用编号切换模型。可以尝试直接输入模型名来切换,如 gpt-4o。")
models = list(models)
if int(model) <= len(models) and int(model) >= 1:
model = models[int(model)-1]
self.provider.set_model(model.id)
return CommandResult().message(f"模型已设置为 {model.id}")
else:
self.provider.set_model(model)
return CommandResult().message(f"模型已设置为 {model} (自定义)")
async def _print_models(self):
try:
models = await self.provider.get_models()
except BaseException as e:
return "获取模型列表失败: " + str(e)
i = 1
ret = "OpenAI GPT 类可用模型:"
for model in models:
ret += f"\n{i}. {model.id}"
i += 1
ret += "\nTips: 使用 /model 模型名/编号,即可实时更换模型。如目标模型不存在于上表,请输入模型名。"
logger.debug(ret)
return ret
def his(self, message: AstrMessageEvent, context: Context):
tokens = self.manager.command_parser.parse(message.message_str)
size_per_page = 3
page = 1
if tokens.len == 2:
try:
page = int(tokens.get(1))
except BaseException as e:
return CommandResult().message("页码格式错误")
contexts, total_num = self.provider.dump_contexts_page(message.session_id, size_per_page, page=page)
t_pages = total_num // size_per_page + 1
return CommandResult().message(f"历史记录如下:\n{contexts}\n{page} 页 | 共 {t_pages}\n*输入 /his 2 跳转到第 2 页")
def status(self, message: AstrMessageEvent, context: Context):
keys_data = self.provider.get_keys_data()
ret = "OpenAI Key"
for k in keys_data:
status = "🟢" if keys_data[k] else "🔴"
ret += "\n|- " + k[:8] + " " + status
conf = self.provider.get_configs()
ret += "\n当前模型: " + conf['model']
if conf['model'] in MODELS:
ret += "\n最大上下文窗口: " + str(MODELS[conf['model']]) + " tokens"
if message.session_id in self.provider.session_memory and len(self.provider.session_memory[message.session_id]):
ret += "\n你的会话上下文: " + str(self.provider.session_memory[message.session_id][-1]['usage_tokens']) + " tokens"
return CommandResult().message(ret)
async def switch(self, message: AstrMessageEvent, context: Context):
'''
切换账号
'''
tokens = self.manager.command_parser.parse(message.message_str)
if tokens.len == 1:
ret = self.status(message, context).message_chain
curr_ = self.provider.get_curr_key()
if curr_ is None:
ret += "当前您未选择账号。输入/switch <账号序号>切换账号。"
else:
ret += f"当前您选择的账号为:{curr_[-8:]}。输入/switch <账号序号>切换账号。"
return CommandResult().message(ret)
elif tokens.len == 2:
try:
key_stat = self.provider.get_keys_data()
index = int(tokens.get(1))
if index > len(key_stat) or index < 1:
return CommandResult().message("账号序号错误。")
else:
try:
new_key = list(key_stat.keys())[index-1]
self.provider.set_key(new_key)
except BaseException as e:
return CommandResult().message("切换账号未知错误: "+str(e))
return CommandResult().message("切换账号成功。")
except BaseException as e:
return CommandResult().message("切换账号错误。")
else:
return CommandResult().message("参数过多。")
def unset(self, message: AstrMessageEvent, context: Context):
self.provider.curr_personality = {}
self.provider.forget(message.session_id)
return CommandResult().message("已清除个性化设置。")
def set(self, message: AstrMessageEvent, context: Context):
l = message.message_str.split(" ")
if len(l) == 1:
return CommandResult().message("- 设置人格: \nset 人格名。例如 set 编剧\n- 人格列表: set list\n- 人格详细信息: set view 人格名\n- 自定义人格: set 人格文本\n- 重置会话(清除人格): reset\n- 重置会话(保留人格): reset p\n\n【当前人格】: " + str(self.provider.curr_personality.get('prompt', '无')))
elif l[1] == "list":
msg = "人格列表:\n"
for key in personalities.keys():
msg += f"- {key}\n"
msg += '\n\n*输入 set view 人格名 查看人格详细信息'
return CommandResult().message(msg)
elif l[1] == "view":
if len(l) == 2:
return CommandResult().message("请输入人格名")
ps = l[2].strip()
if ps in personalities:
msg = f"人格{ps}的详细信息:\n"
msg += f"{personalities[ps]}\n"
else:
msg = f"人格{ps}不存在"
return CommandResult().message(msg)
else:
ps = "".join(l[1:]).strip()
if ps in personalities:
self.provider.curr_personality = {
'name': ps,
'prompt': personalities[ps]
}
self.provider.personality_set(self.provider.curr_personality, message.session_id)
return CommandResult().message(f"人格已设置。 \n人格信息: {ps}")
else:
self.provider.curr_personality = {
'name': '自定义人格',
'prompt': ps
}
self.provider.personality_set(self.provider.curr_personality, message.session_id)
return CommandResult().message(f"人格已设置。 \n人格信息: {ps}")
async def draw(self, message: AstrMessageEvent, context: Context):
message = message.message_str.removeprefix("")
img_url = await self.provider.image_generate(message)
return CommandResult(
message_chain=[Image.fromURL(img_url)],
)

model/command/parser.py Normal file
View File

@@ -0,0 +1,19 @@
class CommandTokens():
def __init__(self) -> None:
self.tokens = []
self.len = 0
def get(self, idx: int):
if idx >= self.len:
return None
return self.tokens[idx].strip()
class CommandParser():
def __init__(self):
pass
def parse(self, message: str):
cmd_tokens = CommandTokens()
cmd_tokens.tokens = message.split(" ")
cmd_tokens.len = len(cmd_tokens.tokens)
return cmd_tokens
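A quick usage sketch of the token accessor's out-of-range behavior (re-implemented standalone here for illustration): handlers like `update` and `plugin` rely on `get()` returning `None` past the end rather than raising `IndexError`.

```python
class CommandTokens:
    def __init__(self, tokens):
        self.tokens = tokens
        self.len = len(tokens)

    def get(self, idx):
        # Out-of-range access returns None instead of raising,
        # so handlers can probe optional arguments safely.
        if idx >= self.len:
            return None
        return self.tokens[idx].strip()

tokens = CommandTokens("update v3.3.0".split(" "))
print(tokens.get(0))  # 'update'
print(tokens.get(1))  # 'v3.3.0'
print(tokens.get(2))  # None
```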


@@ -0,0 +1,73 @@
import abc
from typing import Union, Any, List
from nakuru.entities.components import Plain, At, Image, BaseMessageComponent
from type.astrbot_message import AstrBotMessage
class Platform():
def __init__(self) -> None:
pass
@abc.abstractmethod
async def handle_msg(self, message: AstrBotMessage):
'''
处理到来的消息
'''
pass
@abc.abstractmethod
async def reply_msg(self, message: AstrBotMessage,
result_message: List[BaseMessageComponent]):
'''
回复用户唤醒机器人的消息。(被动回复)
'''
pass
@abc.abstractmethod
async def send_msg(self, target: Any, result_message: Union[List[BaseMessageComponent], str]):
'''
发送消息(主动)
'''
pass
def parse_message_outline(self, message: AstrBotMessage) -> str:
'''
将消息解析成大纲消息形式,如: xxxxx[图片]xxxxx。用于输出日志等。
'''
if isinstance(message, str):
return message
ret = ''
parsed = message if isinstance(message, list) else message.message
try:
for node in parsed:
if isinstance(node, Plain):
ret += node.text.replace('\n', ' ')
elif isinstance(node, At):
ret += f'[At: {node.name}/{node.qq}]'
elif isinstance(node, Image):
ret += '[图片]'
except Exception as e:
pass
return ret[:100] if len(ret) > 100 else ret
def check_nick(self, message_str: str) -> bool:
if self.context.nick:
for nick in self.context.nick:
if nick and message_str.strip().startswith(nick):
return True
return False
async def convert_to_t2i_chain(self, message_result: list) -> list:
plain_str = ""
rendered_images = []
for i in message_result:
if isinstance(i, Plain):
plain_str += i.text
if plain_str and len(plain_str) > 50:
p = await self.context.image_renderer.render(plain_str, return_url=True)
if p.startswith('http'):
rendered_images.append(Image.fromURL(p))
else:
rendered_images.append(Image.fromFileSystem(p))
return rendered_images
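`parse_message_outline` flattens a component chain into a one-line log summary, truncated to 100 characters. The same idea can be sketched with `(kind, payload)` pairs standing in for the nakuru component classes (a simplification for illustration):

```python
def outline(parts, limit=100):
    # Flatten a message chain into a single log line: plain text
    # keeps its content (newlines collapsed), rich components are
    # replaced by short placeholders, and the result is truncated.
    ret = ""
    for kind, payload in parts:
        if kind == "plain":
            ret += payload.replace("\n", " ")
        elif kind == "at":
            ret += f"[At: {payload}]"
        elif kind == "image":
            ret += "[图片]"
    return ret[:limit]

print(outline([("plain", "hi\nthere"), ("image", None)]))  # 'hi there[图片]'
```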

model/platform/manager.py Normal file

@@ -0,0 +1,87 @@
import asyncio
from util.io import port_checker
from type.register import RegisteredPlatform
from type.types import Context
from SparkleLogging.utils.core import LogManager
from logging import Logger
from astrbot.message.handler import MessageHandler
logger: Logger = LogManager.GetLogger(log_name='astrbot')
class PlatformManager():
def __init__(self, context: Context, message_handler: MessageHandler) -> None:
self.context = context
self.config = context.base_config
self.msg_handler = message_handler
def load_platforms(self):
tasks = []
if 'gocqbot' in self.config and self.config['gocqbot']['enable']:
logger.info("启用 QQ(nakuru 适配器)")
tasks.append(asyncio.create_task(self.gocq_bot(), name="nakuru-adapter"))
if 'aiocqhttp' in self.config and self.config['aiocqhttp']['enable']:
logger.info("启用 QQ(aiocqhttp 适配器)")
tasks.append(asyncio.create_task(self.aiocq_bot(), name="aiocqhttp-adapter"))
# QQ频道
if 'qqbot' in self.config and self.config['qqbot']['enable'] and self.config['qqbot']['appid'] != None:
logger.info("启用 QQ(官方 API) 机器人消息平台")
tasks.append(asyncio.create_task(self.qqchan_bot(), name="qqofficial-adapter"))
return tasks
async def gocq_bot(self):
'''
运行 QQ(nakuru 适配器)
'''
from model.platform.qq_nakuru import QQGOCQ
noticed = False
host = self.config.get("gocq_host", "127.0.0.1")
port = self.config.get("gocq_websocket_port", 6700)
http_port = self.config.get("gocq_http_port", 5700)
logger.info(
f"正在检查连接...host: {host}, ws port: {port}, http port: {http_port}")
while True:
if not port_checker(port=port, host=host) or not port_checker(port=http_port, host=host):
if not noticed:
noticed = True
logger.warning(
f"连接到{host}:{port}(或{http_port})失败。程序会每隔 5s 自动重试。")
await asyncio.sleep(5)
else:
logger.info("nakuru 适配器已连接。")
break
try:
qq_gocq = QQGOCQ(self.context, self.msg_handler)
self.context.platforms.append(RegisteredPlatform(
platform_name="gocq", platform_instance=qq_gocq, origin="internal"))
await qq_gocq.run()
except BaseException as e:
logger.error("启动 nakuru 适配器时出现错误: " + str(e))
def aiocq_bot(self):
'''
Run the QQ (aiocqhttp adapter) platform.
'''
from model.platform.qq_aiocqhttp import AIOCQHTTP
qq_aiocqhttp = AIOCQHTTP(self.context, self.msg_handler)
self.context.platforms.append(RegisteredPlatform(
platform_name="aiocqhttp", platform_instance=qq_aiocqhttp, origin="internal"))
return qq_aiocqhttp.run_aiocqhttp()
def qqchan_bot(self):
'''
Run the official QQ bot adapter.
'''
try:
from model.platform.qq_official import QQOfficial
qqchannel_bot = QQOfficial(self.context, self.msg_handler)
self.context.platforms.append(RegisteredPlatform(
platform_name="qqchan", platform_instance=qqchannel_bot, origin="internal"))
return qqchannel_bot.run()
except BaseException as e:
logger.error("启动 QQ官方机器人适配器时出现错误: " + str(e))


@@ -1,18 +0,0 @@
from nakuru.entities.components import Plain
class QQ:
def run_bot(self, gocq):
self.client = gocq
self.client.run()
async def send_qq_msg(self, source, res):
print("[System-Info] 回复QQ消息中..."+res)
# handled via a message chain
if source.type == "GroupMessage":
await self.client.sendGroupMessage(source.group_id, [
Plain(text=res)
])
elif source.type == "FriendMessage":
await self.client.sendFriendMessage(source.user_id, [
Plain(text=res)
])


@@ -0,0 +1,198 @@
import time
import asyncio
import traceback
import logging
from aiocqhttp import CQHttp, Event
from aiocqhttp.exceptions import ActionFailed
from . import Platform
from type.astrbot_message import *
from type.message_event import *
from typing import Union, List, Dict
from nakuru.entities.components import *
from SparkleLogging.utils.core import LogManager
from logging import Logger
from astrbot.message.handler import MessageHandler
logger: Logger = LogManager.GetLogger(log_name='astrbot')
class AIOCQHTTP(Platform):
def __init__(self, context: Context, message_handler: MessageHandler) -> None:
self.message_handler = message_handler
self.waiting = {}
self.context = context
self.unique_session = self.context.unique_session
self.announcement = self.context.base_config.get("announcement", "欢迎新人!")
self.host = self.context.base_config['aiocqhttp']['ws_reverse_host']
self.port = self.context.base_config['aiocqhttp']['ws_reverse_port']
def convert_message(self, event: Event) -> AstrBotMessage:
abm = AstrBotMessage()
abm.self_id = str(event.self_id)
abm.tag = "aiocqhttp"
abm.sender = MessageMember(str(event.sender['user_id']), event.sender['nickname'])
if event['message_type'] == 'group':
abm.type = MessageType.GROUP_MESSAGE
elif event['message_type'] == 'private':
abm.type = MessageType.FRIEND_MESSAGE
if self.unique_session:
abm.session_id = abm.sender.user_id
else:
abm.session_id = str(event.group_id) if abm.type == MessageType.GROUP_MESSAGE else abm.sender.user_id
abm.message_id = str(event.message_id)
abm.message = []
message_str = ""
for m in event.message:
t = m['type']
a = None
if t == 'at':
a = At(**m['data'])
abm.message.append(a)
if t == 'text':
a = Plain(text=m['data']['text'])
message_str += m['data']['text'].strip()
abm.message.append(a)
if t == 'image':
a = Image(file=m['data']['file'])
abm.message.append(a)
abm.timestamp = int(time.time())
abm.message_str = message_str
abm.raw_message = event
return abm
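The segment mapping in `convert_message()` can be isolated into a dependency-free sketch; the nakuru component classes are replaced here by plain tuples, so this is an illustration of the mapping rather than the shipped code:

```python
def convert_segments(segments: list) -> tuple:
    # fold OneBot v11 segments ({'type': ..., 'data': ...}) into a component
    # list plus the plain-text summary used for command matching
    chain, message_str = [], ""
    for seg in segments:
        t, data = seg["type"], seg["data"]
        if t == "at":
            chain.append(("At", data["qq"]))
        elif t == "text":
            chain.append(("Plain", data["text"]))
            message_str += data["text"].strip()
        elif t == "image":
            chain.append(("Image", data["file"]))
    return chain, message_str

chain, text = convert_segments([
    {"type": "at", "data": {"qq": "10001"}},
    {"type": "text", "data": {"text": " /help "}},
])
# text == "/help"; the chain keeps the raw component order
```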
def run_aiocqhttp(self):
if not self.host or not self.port:
return
self.bot = CQHttp(use_ws_reverse=True, import_name='aiocqhttp')
@self.bot.on_message('group')
async def group(event: Event):
abm = self.convert_message(event)
if abm:
await self.handle_msg(abm)
# return {'reply': event.message}
@self.bot.on_message('private')
async def private(event: Event):
abm = self.convert_message(event)
if abm:
await self.handle_msg(abm)
# return {'reply': event.message}
bot = self.bot.run_task(host=self.host, port=int(self.port), shutdown_trigger=self.shutdown_trigger_placeholder)
for handler in logging.root.handlers[:]:
logging.root.removeHandler(handler)
logging.getLogger('aiocqhttp').setLevel(logging.ERROR)
return bot
async def shutdown_trigger_placeholder(self):
while True:
await asyncio.sleep(1)
def pre_check(self, message: AstrBotMessage) -> bool:
# wake check: respond to private messages, @-mentions of the bot, or nick-prefixed messages
if message.type == MessageType.FRIEND_MESSAGE:
return True
for comp in message.message:
if isinstance(comp, At) and str(comp.qq) == message.self_id:
return True
# check nicks
if self.check_nick(message.message_str):
return True
return False
async def handle_msg(self, message: AstrBotMessage):
logger.info(
f"{message.sender.nickname}/{message.sender.user_id} -> {self.parse_message_outline(message)}")
if not self.pre_check(message):
return
# determine the sender's role
sender_id = str(message.sender.user_id)
if sender_id == self.context.config_helper.get('admin_qq', '') or \
sender_id in self.context.config_helper.get('other_admins', []):
role = 'admin'
else:
role = 'member'
# construct astrbot message event
ame = AstrMessageEvent.from_astrbot_message(message, self.context, "aiocqhttp", message.session_id, role)
# transfer control to message handler
message_result = await self.message_handler.handle(ame)
if not message_result: return
await self.reply_msg(message, message_result.result_message)
if message_result.callback:
message_result.callback()
# record the message if a handler is waiting on this session
if message.session_id in self.waiting and self.waiting[message.session_id] == '':
self.waiting[message.session_id] = message
async def reply_msg(self,
message: AstrBotMessage,
result_message: list):
"""
回复用户唤醒机器人的消息。(被动回复)
"""
logger.info(
f"{message.sender.user_id} <- {self.parse_message_outline(message)}")
res = result_message
if isinstance(res, str):
res = [Plain(text=res), ]
# if image mode, put all Plain texts into a new picture.
if self.context.config_helper.get("qq_pic_mode", False) and isinstance(res, list):
rendered_images = await self.convert_to_t2i_chain(res)
if rendered_images:
try:
await self._reply(message, rendered_images)
return
except BaseException as e:
logger.warn(traceback.format_exc())
logger.warn(f"以文本转图片的形式回复消息时发生错误: {e},将尝试默认方式。")
await self._reply(message, res)
async def _reply(self, message: AstrBotMessage, message_chain: List[BaseMessageComponent]):
if isinstance(message_chain, str):
message_chain = [Plain(text=message_chain), ]
ret = []
image_idx = []
for idx, segment in enumerate(message_chain):
d = segment.toDict()
if isinstance(segment, Plain):
d['type'] = 'text'
if isinstance(segment, Image):
image_idx.append(idx)
ret.append(d)
try:
await self.bot.send(message.raw_message, ret)
except ActionFailed as e:
logger.error(traceback.format_exc())
logger.error(f"回复消息失败: {e}")
if e.retcode == 1200:
# ENOENT
if not image_idx:
raise e
logger.info("检测到失败原因为文件未找到,猜测用户的协议端与 AstrBot 位于不同的文件系统上。尝试采用上传图片的方式发图。")
for idx in image_idx:
if ret[idx]['data']['file'].startswith('file://'):
logger.info(f"正在上传图片: {ret[idx]['data']['path']}")
image_url = await self.context.image_uploader.upload_image(ret[idx]['data']['path'])
logger.info(f"上传成功。")
ret[idx]['data']['file'] = image_url
ret[idx]['data']['path'] = image_url
await self.bot.send(message.raw_message, ret)
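The retcode-1200 fallback above rewrites local `file://` images into uploaded URLs so a protocol client on a different filesystem can still fetch them. A self-contained sketch of that rewrite, where `upload` is a stand-in for `context.image_uploader.upload_image`:

```python
def rewrite_local_images(segments: list, upload) -> list:
    # replace any file:// image with the URL returned by the uploader,
    # keeping 'file' and 'path' consistent as _reply() does
    for seg in segments:
        if seg.get("type") == "image" and seg["data"]["file"].startswith("file://"):
            url = upload(seg["data"]["path"])
            seg["data"]["file"] = url
            seg["data"]["path"] = url
    return segments

segs = [{"type": "image", "data": {"file": "file:///tmp/a.png", "path": "/tmp/a.png"}}]
segs = rewrite_local_images(segs, lambda p: "https://img.example/a.png")
```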

model/platform/qq_nakuru.py

@@ -0,0 +1,254 @@
import time, asyncio, traceback
from nakuru.entities.components import Plain, At, Image, Node, BaseMessageComponent
from nakuru import (
CQHTTP,
GuildMessage,
GroupMessage,
FriendMessage,
GroupMemberIncrease,
MessageItemType
)
from typing import Union, List, Dict
from type.types import Context
from . import Platform
from type.astrbot_message import *
from type.message_event import *
from SparkleLogging.utils.core import LogManager
from logging import Logger
from astrbot.message.handler import MessageHandler
logger: Logger = LogManager.GetLogger(log_name='astrbot')
class FakeSource:
def __init__(self, type, group_id):
self.type = type
self.group_id = group_id
class QQGOCQ(Platform):
def __init__(self, context: Context, message_handler: MessageHandler) -> None:
self.loop = asyncio.new_event_loop()
asyncio.set_event_loop(self.loop)
self.message_handler = message_handler
self.waiting = {}
self.context = context
self.unique_session = self.context.unique_session
self.announcement = self.context.base_config.get("announcement", "欢迎新人!")
self.client = CQHTTP(
host=self.context.base_config.get("gocq_host", "127.0.0.1"),
port=self.context.base_config.get("gocq_websocket_port", 6700),
http_port=self.context.base_config.get("gocq_http_port", 5700),
)
gocq_app = self.client
@gocq_app.receiver("GroupMessage")
async def _(app: CQHTTP, source: GroupMessage):
if self.context.base_config.get("gocq_react_group", True):
abm = self.convert_message(source)
await self.handle_msg(abm)
@gocq_app.receiver("FriendMessage")
async def _(app: CQHTTP, source: FriendMessage):
if self.context.base_config.get("gocq_react_friend", True):
abm = self.convert_message(source)
await self.handle_msg(abm)
@gocq_app.receiver("GroupMemberIncrease")
async def _(app: CQHTTP, source: GroupMemberIncrease):
if self.context.base_config.get("gocq_react_group_increase", True):
await app.sendGroupMessage(source.group_id, [
Plain(text=self.announcement)
])
@gocq_app.receiver("GuildMessage")
async def _(app: CQHTTP, source: GuildMessage):
if self.context.base_config.get("gocq_react_guild", True):
abm = self.convert_message(source)
await self.handle_msg(abm)
def pre_check(self, message: AstrBotMessage) -> bool:
# wake check: respond to private messages, @-mentions of the bot, or nick-prefixed messages
if message.type == MessageType.FRIEND_MESSAGE:
return True
for comp in message.message:
if isinstance(comp, At) and str(comp.qq) == message.self_id:
return True
# check nicks
if self.check_nick(message.message_str):
return True
return False
def run(self):
coro = self.client._run()
return coro
async def handle_msg(self, message: AstrBotMessage):
logger.info(
f"{message.sender.nickname}/{message.sender.user_id} -> {self.parse_message_outline(message)}")
assert isinstance(message.raw_message,
(GroupMessage, FriendMessage, GuildMessage))
# decide whether the bot should respond
if not self.pre_check(message):
return
# resolve session_id
if self.unique_session or message.type == MessageType.FRIEND_MESSAGE:
session_id = message.raw_message.user_id
elif message.type == MessageType.GROUP_MESSAGE:
session_id = message.raw_message.group_id
elif message.type == MessageType.GUILD_MESSAGE:
session_id = message.raw_message.channel_id
else:
session_id = message.raw_message.user_id
message.session_id = session_id
# determine the sender's role
sender_id = str(message.raw_message.user_id)
if sender_id == self.context.config_helper.get('admin_qq', '') or \
sender_id in self.context.config_helper.get('other_admins', []):
role = 'admin'
else:
role = 'member'
# construct astrbot message event
ame = AstrMessageEvent.from_astrbot_message(message, self.context, "gocq", session_id, role)
# transfer control to message handler
message_result = await self.message_handler.handle(ame)
if not message_result: return
await self.reply_msg(message, message_result.result_message)
if message_result.callback:
message_result.callback()
# record the message if a handler is waiting on this session
if session_id in self.waiting and self.waiting[session_id] == '':
self.waiting[session_id] = message
async def reply_msg(self,
message: AstrBotMessage,
result_message: List[BaseMessageComponent]):
"""
回复用户唤醒机器人的消息。(被动回复)
"""
source = message.raw_message
res = result_message
assert isinstance(source,
(GroupMessage, FriendMessage, GuildMessage))
logger.info(
f"{source.user_id} <- {self.parse_message_outline(res)}")
if isinstance(res, str):
res = [Plain(text=res), ]
# if image mode, put all Plain texts into a new picture.
if self.context.config_helper.get("qq_pic_mode", False) and isinstance(res, list):
rendered_images = await self.convert_to_t2i_chain(res)
if rendered_images:
try:
await self._reply(source, rendered_images)
return
except BaseException as e:
logger.warn(traceback.format_exc())
logger.warn(f"以文本转图片的形式回复消息时发生错误: {e},将尝试默认方式。")
await self._reply(source, res)
async def _reply(self, source, message_chain: List[BaseMessageComponent]):
if isinstance(message_chain, str):
message_chain = [Plain(text=message_chain), ]
is_dict = isinstance(source, dict)
if source.type == "GuildMessage":
guild_id = source['guild_id'] if is_dict else source.guild_id
chan_id = source['channel_id'] if is_dict else source.channel_id
await self.client.sendGuildChannelMessage(guild_id, chan_id, message_chain)
elif source.type == "FriendMessage":
user_id = source['user_id'] if is_dict else source.user_id
await self.client.sendFriendMessage(user_id, message_chain)
elif source.type == "GroupMessage":
group_id = source['group_id'] if is_dict else source.group_id
# send as a forward message when the text is too long
plain_text_len = 0
image_num = 0
for i in message_chain:
if isinstance(i, Plain):
plain_text_len += len(i.text)
elif isinstance(i, Image):
image_num += 1
if plain_text_len > self.context.config_helper.get('qq_forward_threshold', 200):
# strip At components
for i in message_chain:
if isinstance(i, At):
message_chain.remove(i)
node = Node(message_chain)
node.uin = 123456
node.name = f"bot"
node.time = int(time.time())
nodes = [node]
await self.client.sendGroupForwardMessage(group_id, nodes)
return
await self.client.sendGroupMessage(group_id, message_chain)
async def send_msg(self, target: Dict[str, int], result_message: Union[List[BaseMessageComponent], str]):
'''
Proactively send a message to a user, a group, or a guild channel.
`target` takes a dict:
- to message a QQ user, add key `user_id` (int QQ number);
- to message a group, add key `group_id` (int group number);
- to message a guild channel, add keys `guild_id` and `channel_id` (both int).
Note that guild_id is not the channel number.
'''
await self._reply(target, result_message)
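The target contract documented above maps cleanly onto a routing check. This hypothetical helper (not part of the codebase) shows which key combinations select which send path:

```python
def route_target(target: dict) -> str:
    # one route per documented key combination: a guild needs both ids,
    # otherwise group_id or user_id selects the destination
    if "guild_id" in target and "channel_id" in target:
        return "guild"
    if "group_id" in target:
        return "group"
    if "user_id" in target:
        return "friend"
    raise ValueError("target must contain user_id, group_id, or guild_id+channel_id")

assert route_target({"user_id": 12345}) == "friend"
```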
def convert_message(self, message: Union[GroupMessage, FriendMessage, GuildMessage]) -> AstrBotMessage:
abm = AstrBotMessage()
abm.type = MessageType(message.type)
abm.raw_message = message
abm.message_id = message.message_id
plain_content = ""
for i in message.message:
if isinstance(i, Plain):
plain_content += i.text
abm.message_str = plain_content.strip()
if message.type == MessageItemType.GuildMessage:
abm.self_id = str(message.self_tiny_id)
else:
abm.self_id = str(message.self_id)
abm.sender = MessageMember(
str(message.sender.user_id),
str(message.sender.nickname)
)
abm.tag = "gocq"
abm.message = message.message
return abm
def wait_for_message(self, group_id) -> Union[GroupMessage, FriendMessage, GuildMessage]:
'''
Wait for the next message; raises after a 300 s timeout.
'''
self.waiting[group_id] = ''
cnt = 0
while True:
if group_id in self.waiting and self.waiting[group_id] != '':
# consume the waiting entry
ret = self.waiting[group_id]
del self.waiting[group_id]
return ret
cnt += 1
if cnt > 300:
raise Exception("等待消息超时。")
time.sleep(1)


@@ -0,0 +1,379 @@
import botpy
import re
import time
import traceback
import asyncio
import base64
import botpy.message
import botpy.types
import botpy.types.message
from botpy.types.message import Reference, Media
from botpy import Client
from util.io import save_temp_img, download_image_by_url
from . import Platform
from type.astrbot_message import *
from type.message_event import *
from typing import Union, List, Dict
from nakuru.entities.components import *
from SparkleLogging.utils.core import LogManager
from logging import Logger
from astrbot.message.handler import MessageHandler
logger: Logger = LogManager.GetLogger(log_name='astrbot')
# official QQ bot framework
class botClient(Client):
def set_platform(self, platform: 'QQOfficial'):
self.platform = platform
# group message received
async def on_group_at_message_create(self, message: botpy.message.GroupMessage):
abm = self.platform._parse_from_qqofficial(message, MessageType.GROUP_MESSAGE)
await self.platform.handle_msg(abm)
# guild message received
async def on_at_message_create(self, message: botpy.message.Message):
# conversion layer
abm = self.platform._parse_from_qqofficial(message, MessageType.GUILD_MESSAGE)
await self.platform.handle_msg(abm)
# direct message received
async def on_direct_message_create(self, message: botpy.message.DirectMessage):
# conversion layer
abm = self.platform._parse_from_qqofficial(message, MessageType.FRIEND_MESSAGE)
await self.platform.handle_msg(abm)
class QQOfficial(Platform):
def __init__(self, context: Context, message_handler: MessageHandler) -> None:
super().__init__()
self.loop = asyncio.new_event_loop()
asyncio.set_event_loop(self.loop)
self.message_handler = message_handler
self.waiting: dict = {}
self.context = context
self.appid = context.base_config['qqbot']['appid']
self.token = context.base_config['qqbot']['token']
self.secret = context.base_config['qqbot_secret']
self.unique_session = context.unique_session
qq_group = context.base_config['qqofficial_enable_group_message']
if qq_group:
self.intents = botpy.Intents(
public_messages=True,
public_guild_messages=True,
direct_message=context.base_config['direct_message_mode']
)
else:
self.intents = botpy.Intents(
public_guild_messages=True,
direct_message=context.base_config['direct_message_mode']
)
self.client = botClient(
intents=self.intents,
bot_log=False
)
self.client.set_platform(self)
async def _parse_to_qqofficial(self, message: List[BaseMessageComponent], is_group: bool = False):
plain_text = ""
image_path = None # only one img supported
for i in message:
if isinstance(i, Plain):
plain_text += i.text
elif isinstance(i, Image) and not image_path:
if i.path:
image_path = i.path
elif i.file and i.file.startswith("base64://"):
img_data = base64.b64decode(i.file[9:])
image_path = save_temp_img(img_data)
elif i.file and i.file.startswith("http"):
# group messages can use the URL directly; no download needed
image_path = await download_image_by_url(i.file) if not is_group else i.file
return plain_text, image_path
def _parse_from_qqofficial(self, message: Union[botpy.message.Message, botpy.message.GroupMessage],
message_type: MessageType):
abm = AstrBotMessage()
abm.type = message_type
abm.timestamp = int(time.time())
abm.raw_message = message
abm.message_id = message.id
abm.tag = "qqchan"
msg: List[BaseMessageComponent] = []
if message_type == MessageType.GROUP_MESSAGE:
abm.sender = MessageMember(
message.author.member_openid,
""
)
abm.message_str = message.content.strip()
abm.self_id = "unknown_selfid"
msg.append(Plain(abm.message_str))
if message.attachments:
for i in message.attachments:
if i.content_type.startswith("image"):
url = i.url
if not url.startswith("http"):
url = "https://"+url
img = Image.fromURL(url)
msg.append(img)
abm.message = msg
elif message_type == MessageType.GUILD_MESSAGE or message_type == MessageType.FRIEND_MESSAGE:
# for FRIEND_MESSAGE only guild direct messages are handled for now
try:
abm.self_id = str(message.mentions[0].id)
except Exception:
abm.self_id = ""
plain_content = message.content.replace(
"<@!"+str(abm.self_id)+">", "").strip()
msg.append(Plain(plain_content))
if message.attachments:
for i in message.attachments:
if i.content_type.startswith("image"):
url = i.url
if not url.startswith("http"):
url = "https://"+url
img = Image.fromURL(url)
msg.append(img)
abm.message = msg
abm.message_str = plain_content
abm.sender = MessageMember(
str(message.author.id),
str(message.author.username)
)
else:
raise ValueError(f"Unknown message type: {message_type}")
return abm
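For guild and direct messages, the bot's own mention is stripped from `message.content` before command parsing. That normalization is small enough to show standalone (assuming the `<@!id>` mention format used above):

```python
def strip_self_mention(content: str, self_id: str) -> str:
    # remove the bot's <@!self_id> mention, then trim surrounding whitespace
    return content.replace(f"<@!{self_id}>", "").strip()

cleaned = strip_self_mention("<@!42> /help me", "42")  # → "/help me"
```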
def run(self):
try:
return self.client.start(
appid=self.appid,
secret=self.secret
)
except BaseException as e:
# early qq-botpy versions logged in with a token.
logger.error(traceback.format_exc())
self.client = botClient(
intents=self.intents,
bot_log=False
)
self.client.set_platform(self)
return self.client.start(
appid=self.appid,
token=self.token
)
async def handle_msg(self, message: AstrBotMessage):
assert isinstance(message.raw_message, (botpy.message.Message,
botpy.message.GroupMessage, botpy.message.DirectMessage))
is_group = message.type != MessageType.FRIEND_MESSAGE
_t = "/私聊" if not is_group else ""
logger.info(
f"{message.sender.nickname}({message.sender.user_id}{_t}) -> {self.parse_message_outline(message)}")
# resolve session_id
if self.unique_session or not is_group:
session_id = message.sender.user_id
else:
if message.type == MessageType.GUILD_MESSAGE:
session_id = message.raw_message.channel_id
elif message.type == MessageType.GROUP_MESSAGE:
session_id = str(message.raw_message.group_openid)
else:
session_id = str(message.raw_message.author.id)
message.session_id = session_id
# determine the sender's role
sender_id = message.sender.user_id
if sender_id == self.context.config_helper.get('admin_qqchan', None) or \
sender_id in self.context.config_helper.get('other_admins', []):
role = 'admin'
else:
role = 'member'
# construct astrbot message event
ame = AstrMessageEvent.from_astrbot_message(message, self.context, "qqchan", session_id, role)
message_result = await self.message_handler.handle(ame)
if not message_result:
return
await self.reply_msg(message, message_result.result_message)
if message_result.callback:
message_result.callback()
# record the message if a handler is waiting on this session
if session_id in self.waiting and self.waiting[session_id] == '':
self.waiting[session_id] = message
async def reply_msg(self,
message: AstrBotMessage,
result_message: List[BaseMessageComponent]):
'''
Reply to a channel message.
'''
source = message.raw_message
assert isinstance(source, (botpy.message.Message,
botpy.message.GroupMessage, botpy.message.DirectMessage))
logger.info(
f"{message.sender.nickname}({message.sender.user_id}) <- {self.parse_message_outline(result_message)}")
plain_text = ''
image_path = ''
msg_ref = None
rendered_images = []
if self.context.config_helper.get("qq_pic_mode", False) and isinstance(result_message, list):
rendered_images = await self.convert_to_t2i_chain(result_message)
if isinstance(result_message, list):
plain_text, image_path = await self._parse_to_qqofficial(result_message, message.type == MessageType.GROUP_MESSAGE)
else:
plain_text = result_message
if source and not image_path:  # file_image and message_reference cannot both be set
msg_ref = Reference(message_id=source.id,
ignore_get_message_error=False)
# at this point we have plain_text, image_path and msg_ref
data = {
'content': plain_text,
'msg_id': message.message_id,
'message_reference': msg_ref
}
if message.type == MessageType.GROUP_MESSAGE:
data['group_openid'] = str(source.group_openid)
elif message.type == MessageType.GUILD_MESSAGE:
data['channel_id'] = source.channel_id
elif message.type == MessageType.FRIEND_MESSAGE:
data['guild_id'] = source.guild_id
if image_path:
data['file_image'] = image_path
if rendered_images:
# text-to-image reply
_data = data.copy()
_data['content'] = ''
_data['file_image'] = rendered_images[0].file
_data['message_reference'] = None
try:
await self._reply(**_data)
return
except BaseException as e:
logger.warn(traceback.format_exc())
logger.warn(f"以文本转图片的形式回复消息时发生错误: {e},将尝试默认方式。")
try:
await self._reply(**data)
except BaseException as e:
logger.error(traceback.format_exc())
# split over-long messages in half
if "msg over length" in str(e):
split_res = []
split_res.append(plain_text[:len(plain_text)//2])
split_res.append(plain_text[len(plain_text)//2:])
for i in split_res:
data['content'] = i
await self._reply(**data)
else:
try:
# avoid the QQ channel content filter
plain_text = plain_text.replace(".", " . ")
await self._reply(**data)
except BaseException as e:
try:
data['content'] = str.join(" ", plain_text)
await self._reply(**data)
except BaseException as e:
plain_text = re.sub(
r'(https|http)?:\/\/(\w|\.|\/|\?|\=|\&|\%)*\b', '[被隐藏的链接]', str(e), flags=re.MULTILINE)
plain_text = plain_text.replace(".", "·")
data['content'] = plain_text
await self._reply(**data)
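The last-resort fallback above masks URLs and dots so the QQ channel content filter will accept an error report. The same transformation in isolation (regex copied from the snippet):

```python
import re

def mask_for_qq_channel(text: str) -> str:
    # hide links, then replace '.' with '·' to dodge the content filter
    text = re.sub(r'(https|http)?:\/\/(\w|\.|\/|\?|\=|\&|\%)*\b',
                  '[被隐藏的链接]', text, flags=re.MULTILINE)
    return text.replace(".", "·")

masked = mask_for_qq_channel("error at https://api.example.com/v1 today.")
# → "error at [被隐藏的链接] today·"
```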
async def _reply(self, **kwargs):
if 'group_openid' in kwargs:
# QQ group message: images must be uploaded by the bot itself
if 'file_image' in kwargs:
file_image_path = kwargs['file_image'].replace("file:///", "")
if file_image_path:
if file_image_path.startswith("http"):
image_url = file_image_path
else:
logger.debug(f"上传图片: {file_image_path}")
image_url = await self.context.image_uploader.upload_image(file_image_path)
logger.debug(f"上传成功: {image_url}")
media = await self.client.api.post_group_file(kwargs['group_openid'], 1, image_url)
del kwargs['file_image']
kwargs['media'] = media
logger.debug(f"发送群图片: {media}")
kwargs['msg_type'] = 7 # 富媒体
await self.client.api.post_group_message(**kwargs)
elif 'channel_id' in kwargs:
# guild channel message
if 'file_image' in kwargs:
kwargs['file_image'] = kwargs['file_image'].replace("file:///", "")
# guild image replies only support local files
if kwargs['file_image'].startswith("http"):
kwargs['file_image'] = await download_image_by_url(kwargs['file_image'])
await self.client.api.post_message(**kwargs)
else:
# guild direct message
if 'file_image' in kwargs:
kwargs['file_image'] = kwargs['file_image'].replace("file:///", "")
if kwargs['file_image'].startswith("http"):
kwargs['file_image'] = await download_image_by_url(kwargs['file_image'])
await self.client.api.post_dms(**kwargs)
async def send_msg(self, target: Dict[str, str], result_message: Union[List[BaseMessageComponent], str]):
'''
Proactively send a message to a user, a group, or a guild channel.
`target` takes a dict:
- for a QQ group, add key `group_openid`;
- for a guild channel message, add key `channel_id`;
- for a guild direct message, add key `guild_id`.
'''
if isinstance(result_message, list):
plain_text, image_path = await self._parse_to_qqofficial(result_message)
else:
plain_text = result_message
image_path = None
payload = {
'content': plain_text,
'file_image': image_path,
**target
}
await self._reply(**payload)
def wait_for_message(self, channel_id: int) -> AstrBotMessage:
'''
Wait for the next message in the given channel_id; raises after a 300 s timeout.
'''
self.waiting[channel_id] = ''
cnt = 0
while True:
if channel_id in self.waiting and self.waiting[channel_id] != '':
# consume the waiting entry
ret = self.waiting[channel_id]
del self.waiting[channel_id]
return ret
cnt += 1
if cnt > 300:
raise Exception("等待消息超时。")
time.sleep(1)


@@ -1,63 +0,0 @@
import io
import botpy
from PIL import Image
import re
import asyncio
import requests
from cores.qqbot.personality import personalities
class QQChan():
def run_bot(self, botclient, appid, token):
intents = botpy.Intents(public_guild_messages=True, direct_message=True)
self.client = botclient
self.client.run(appid=appid, token=token)
def send_qq_msg(self, message, res, image_mode=False, msg_ref = None):
print("[System-Info] 回复QQ频道消息中..."+res)
if not image_mode:
try:
if msg_ref is not None:
reply_res = asyncio.run_coroutine_threadsafe(message.reply(content=res, message_reference = msg_ref), self.client.loop)
else:
reply_res = asyncio.run_coroutine_threadsafe(message.reply(content=res), self.client.loop)
reply_res.result()
except BaseException as e:
# split over-long messages
if "msg over length" in str(e):
split_res = []
split_res.append(res[:len(res)//2])
split_res.append(res[len(res)//2:])
for i in split_res:
if msg_ref is not None:
reply_res = asyncio.run_coroutine_threadsafe(message.reply(content=i, message_reference = msg_ref), self.client.loop)
else:
reply_res = asyncio.run_coroutine_threadsafe(message.reply(content=i), self.client.loop)
reply_res.result()
else:
# send the QQ message
try:
# avoid the QQ channel content filter
res = res.replace(".", " . ")
asyncio.run_coroutine_threadsafe(message.reply(content=res), self.client.loop).result()
# send the message
except BaseException as e:
print("QQ频道API错误: \n"+str(e))
res = str.join(" ", res)
try:
asyncio.run_coroutine_threadsafe(message.reply(content=res), self.client.loop).result()
except BaseException as e:
# if it still fails, report the error
res = re.sub(r'(https|http)?:\/\/(\w|\.|\/|\?|\=|\&|\%)*\b', '[被隐藏的链接]', str(e), flags=re.MULTILINE)
res = res.replace(".", "·")
asyncio.run_coroutine_threadsafe(message.reply(content=res), self.client.loop).result()
# send(message, f"QQ频道API错误{str(e)}\n下面是格式化后的回答\n{f_res}")
else:
pic_res = requests.get(str(res), stream=True)
if pic_res.status_code == 200:
# convert the binary data into an image object
image = Image.open(io.BytesIO(pic_res.content))
# save the image locally
image.save('tmp_image.jpg')
asyncio.run_coroutine_threadsafe(message.reply(file_image='tmp_image.jpg', content=""), self.client.loop)

model/plugin/command.py

@@ -0,0 +1,25 @@
from dataclasses import dataclass
from type.register import RegisteredPlugins
from typing import List, Union, Callable
from SparkleLogging.utils.core import LogManager
from logging import Logger
logger: Logger = LogManager.GetLogger(log_name='astrbot')
@dataclass
class CommandRegisterRequest():
command_name: str
description: str
priority: int
handler: Callable
plugin_name: str = None
class PluginCommandBridge():
def __init__(self, cached_plugins: RegisteredPlugins):
self.plugin_commands_waitlist: List[CommandRegisterRequest] = []
self.cached_plugins = cached_plugins
def register_command(self, plugin_name, command_name, description, priority, handler):
self.plugin_commands_waitlist.append(CommandRegisterRequest(command_name, description, priority, handler, plugin_name))
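A usage sketch of the bridge: a plugin queues a request at load time, and the command dispatcher later drains `plugin_commands_waitlist`. This re-declares the two classes above without the `RegisteredPlugins` dependency so it runs standalone; the plugin name and handler are made up for illustration:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class CommandRegisterRequest:
    command_name: str
    description: str
    priority: int
    handler: Callable
    plugin_name: str = None

class PluginCommandBridge:
    def __init__(self):
        self.plugin_commands_waitlist: List[CommandRegisterRequest] = []

    def register_command(self, plugin_name, command_name, description, priority, handler):
        # requests are queued here; the dispatcher binds them later
        self.plugin_commands_waitlist.append(
            CommandRegisterRequest(command_name, description, priority, handler, plugin_name))

bridge = PluginCommandBridge()
bridge.register_command("helloworld", "hello", "reply with a greeting", 10, lambda event: "hi")
```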

model/plugin/manager.py

@@ -0,0 +1,256 @@
import inspect
import os
import sys
import traceback
import uuid
import shutil
import yaml
from util.updator.plugin_updator import PluginUpdator
from util.io import remove_dir, download_file
from types import ModuleType
from type.types import Context
from type.plugin import *
from type.register import *
from SparkleLogging.utils.core import LogManager
from logging import Logger
logger: Logger = LogManager.GetLogger(log_name='astrbot')
class PluginManager():
def __init__(self, context: Context):
self.updator = PluginUpdator()
self.plugin_store_path = self.updator.get_plugin_store_path()
self.context = context
def get_classes(self, arg: ModuleType):
classes = []
clsmembers = inspect.getmembers(arg, inspect.isclass)
for (name, _) in clsmembers:
if name.lower().endswith("plugin") or name.lower() == "main":
classes.append(name)
break
return classes
def get_modules(self, path):
modules = []
dirs = os.listdir(path)
# walk the folders looking for main.py or a file named after the folder
for d in dirs:
if os.path.isdir(os.path.join(path, d)):
if os.path.exists(os.path.join(path, d, "main.py")):
module_str = 'main'
elif os.path.exists(os.path.join(path, d, d + ".py")):
module_str = d
else:
print(f"插件 {d} 未找到 main.py 或者 {d}.py跳过。")
continue
modules.append({
"pname": d,
"module": module_str,
"module_path": os.path.join(path, d, module_str)
})
return modules
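The lookup convention `get_modules()` implements, as a pure function — a hypothetical helper operating on a directory listing instead of the filesystem:

```python
def pick_entry_module(dirname: str, files: set):
    # a plugin folder `foo/` must ship main.py or foo.py; main.py wins
    if "main.py" in files:
        return "main"
    if f"{dirname}.py" in files:
        return dirname
    return None
```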
def get_plugin_modules(self):
plugins = []
try:
plugin_dir = self.plugin_store_path
if os.path.exists(plugin_dir):
plugins = self.get_modules(plugin_dir)
return plugins
except BaseException as e:
raise e
def check_plugin_dept_update(self, target_plugin: str = None):
plugin_dir = self.plugin_store_path
if not os.path.exists(plugin_dir):
return False
to_update = []
if target_plugin:
to_update.append(target_plugin)
else:
for p in self.context.cached_plugins:
to_update.append(p.root_dir_name)
for p in to_update:
plugin_path = os.path.join(plugin_dir, p)
if os.path.exists(os.path.join(plugin_path, "requirements.txt")):
pth = os.path.join(plugin_path, "requirements.txt")
logger.info(f"正在检查更新插件 {p} 的依赖: {pth}")
self.update_plugin_dept(os.path.join(plugin_path, "requirements.txt"))
def update_plugin_dept(self, path):
mirror = "https://mirrors.aliyun.com/pypi/simple/"
py = sys.executable
os.system(f"{py} -m pip install -r {path} -i {mirror} --quiet")
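`update_plugin_dept()` shells out to pip through `os.system`. An equivalent sketch that builds the argument list explicitly — `subprocess.run` with a list avoids shell-quoting problems in `path`; this is an alternative, not the shipped code:

```python
import sys

def build_pip_cmd(path: str,
                  mirror: str = "https://mirrors.aliyun.com/pypi/simple/") -> list:
    # same invocation as update_plugin_dept(), expressed as an argv list
    return [sys.executable, "-m", "pip", "install", "-r", path, "-i", mirror, "--quiet"]

cmd = build_pip_cmd("requirements.txt")
# subprocess.run(cmd, check=True)  # run it for real
```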
def install_plugin(self, repo_url: str):
ppath = self.plugin_store_path
# we no longer use Git :)
# Repo.clone_from(repo_url, to_path=plugin_path, branch='master')
plugin_path = self.updator.update(repo_url)
with open(os.path.join(plugin_path, "REPO"), "w", encoding='utf-8') as f:
f.write(repo_url)
ok, err = self.plugin_reload()
if not ok:
raise Exception(err)
def download_from_repo_url(self, target_path: str, repo_url: str):
repo_namespace = repo_url.split("/")[-2:]
author = repo_namespace[0]
repo = repo_namespace[1]
logger.info(f"正在下载插件 {repo} ...")
release_url = f"https://api.github.com/repos/{author}/{repo}/releases"
releases = self.updator.fetch_release_info(url=release_url)
if not releases:
# download from the default branch directly.
logger.warn(f"未在插件 {author}/{repo} 中找到任何发布版本,将从默认分支下载。")
release_url = f"https://github.com/{author}/{repo}/archive/refs/heads/master.zip"
else:
release_url = releases[0]['zipball_url']
download_file(release_url, target_path + ".zip")
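The URL selection in `download_from_repo_url` — prefer the newest release's zipball, otherwise fall back to the default-branch archive — can be isolated into a pure function. A sketch (the function name is ours):

```python
def resolve_release_zip(repo_url: str, releases: list) -> str:
    # Mirrors the logic above: take the latest release's zipball_url if any
    # releases exist, else fall back to the master-branch archive URL.
    author, repo = repo_url.rstrip("/").split("/")[-2:]
    if releases:
        return releases[0]["zipball_url"]
    return f"https://github.com/{author}/{repo}/archive/refs/heads/master.zip"
```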
def get_registered_plugin(self, plugin_name: str) -> RegisteredPlugin:
for p in self.context.cached_plugins:
if p.metadata.plugin_name == plugin_name:
return p
def uninstall_plugin(self, plugin_name: str):
plugin = self.get_registered_plugin(plugin_name)
if not plugin:
raise Exception("插件不存在。")
root_dir_name = plugin.root_dir_name
ppath = self.plugin_store_path
self.context.cached_plugins.remove(plugin)
if not remove_dir(os.path.join(ppath, root_dir_name)):
raise Exception("移除插件成功,但是删除插件文件夹失败。您可以手动删除该文件夹,位于 addons/plugins/ 下。")
def update_plugin(self, plugin_name: str):
plugin = self.get_registered_plugin(plugin_name)
if not plugin:
raise Exception("插件不存在。")
self.updator.update(plugin)
def plugin_reload(self):
cached_plugins = self.context.cached_plugins
plugins = self.get_plugin_modules()
if plugins is None:
return False, "未找到任何插件模块"
fail_rec = ""
registered_map = {}
for p in cached_plugins:
registered_map[p.module_path] = None
for plugin in plugins:
try:
p = plugin['module']
module_path = plugin['module_path']
root_dir_name = plugin['pname']
# self.check_plugin_dept_update(cached_plugins, root_dir_name)
module = __import__("addons.plugins." +
root_dir_name + "." + p, fromlist=[p])
cls = self.get_classes(module)
try:
# 尝试传入 ctx
obj = getattr(module, cls[0])(context=self.context)
except:
obj = getattr(module, cls[0])()
metadata = None
plugin_path = os.path.join(self.plugin_store_path, root_dir_name)
metadata = self.load_plugin_metadata(plugin_path=plugin_path, plugin_obj=obj)
logger.info(f"插件 {metadata.plugin_name}({metadata.author}) 加载成功。")
if module_path not in registered_map:
cached_plugins.append(RegisteredPlugin(
metadata=metadata,
plugin_instance=obj,
module=module,
module_path=module_path,
root_dir_name=root_dir_name
))
except BaseException as e:
traceback.print_exc()
fail_rec += f"加载{p}插件出现问题,原因 {str(e)}\n"
if not fail_rec:
return True, None
else:
return False, fail_rec
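The `__import__("addons.plugins." + root_dir_name + "." + p, fromlist=[p])` call in `plugin_reload` is equivalent to `importlib.import_module`. A self-contained sketch that fabricates a throwaway package on disk and imports a "plugin" module from it — the package and module names here are invented for illustration:

```python
import importlib
import os
import sys
import tempfile

def import_plugin_module(base_dir: str, pkg: str, name: str):
    # Equivalent to __import__(f"{pkg}.{name}", fromlist=[name]):
    # returns the submodule itself rather than the top-level package.
    if base_dir not in sys.path:
        sys.path.insert(0, base_dir)
    return importlib.import_module(f"{pkg}.{name}")

# Demo: build a tiny package with one plugin module, then import it.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "plugins_demo"))
open(os.path.join(root, "plugins_demo", "__init__.py"), "w").close()
with open(os.path.join(root, "plugins_demo", "hello.py"), "w") as f:
    f.write("GREETING = 'hi'\n")
mod = import_plugin_module(root, "plugins_demo", "hello")
```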
def install_plugin_from_file(self, zip_file_path: str):
# try to unzip
temp_dir = os.path.join(os.path.dirname(zip_file_path), str(uuid.uuid4()))
self.updator.unzip_file(zip_file_path, temp_dir)
# check if the plugin has metadata.yaml
if not os.path.exists(os.path.join(temp_dir, "metadata.yaml")):
remove_dir(temp_dir)
raise Exception("插件缺少 metadata.yaml 文件。")
metadata = self.load_plugin_metadata(temp_dir)
plugin_name = metadata.plugin_name
if not plugin_name:
remove_dir(temp_dir)
raise Exception("插件 metadata.yaml 文件中 name 字段为空。")
plugin_name = self.updator.format_name(plugin_name)
ppath = self.plugin_store_path
plugin_path = os.path.join(ppath, plugin_name)
if os.path.exists(plugin_path):
remove_dir(plugin_path)
# move to the target path
shutil.move(temp_dir, plugin_path)
if metadata.repo:
with open(os.path.join(plugin_path, "REPO"), "w", encoding='utf-8') as f:
f.write(metadata.repo)
# remove the temp dir
remove_dir(temp_dir)
ok, err = self.plugin_reload()
if not ok:
raise Exception(err)
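`install_plugin_from_file` follows an unzip → validate `metadata.yaml` → move-into-place flow via `self.updator.unzip_file`. A dependency-free sketch of the extraction step using the standard library (helper name is ours):

```python
import os
import shutil
import tempfile
import zipfile

def unzip_to_temp(zip_path: str) -> str:
    # Extract the archive into a fresh temp dir and return that dir,
    # mirroring the unzip-then-validate-then-move flow above.
    temp_dir = tempfile.mkdtemp()
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(temp_dir)
    return temp_dir

# Demo: round-trip a one-file plugin archive.
src = tempfile.mkdtemp()
with open(os.path.join(src, "metadata.yaml"), "w") as f:
    f.write("name: demo\n")
zip_path = shutil.make_archive(os.path.join(tempfile.mkdtemp(), "plugin"), "zip", src)
out_dir = unzip_to_temp(zip_path)
```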
def load_plugin_metadata(self, plugin_path: str, plugin_obj = None) -> PluginMetadata:
metadata = None
if not os.path.exists(plugin_path):
raise Exception("插件不存在。")
if os.path.exists(os.path.join(plugin_path, "metadata.yaml")):
with open(os.path.join(plugin_path, "metadata.yaml"), "r", encoding='utf-8') as f:
metadata = yaml.safe_load(f)
elif plugin_obj:
# 使用 info() 函数
metadata = plugin_obj.info()
if isinstance(metadata, dict):
if 'name' not in metadata or 'desc' not in metadata or 'version' not in metadata or 'author' not in metadata:
raise Exception("插件元数据信息不完整。")
metadata = PluginMetadata(
plugin_name=metadata['name'],
plugin_type=PluginType.COMMON if 'plugin_type' not in metadata else PluginType(metadata['plugin_type']),
author=metadata['author'],
desc=metadata['desc'],
version=metadata['version'],
repo=metadata['repo'] if 'repo' in metadata else None
)
return metadata
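`load_plugin_metadata` requires the `name`, `desc`, `version`, and `author` fields before constructing a `PluginMetadata`. That completeness check can be sketched standalone (function name is ours):

```python
REQUIRED_FIELDS = ("name", "desc", "version", "author")

def validate_metadata(meta: dict) -> dict:
    # Same completeness check load_plugin_metadata performs on dict metadata.
    missing = [k for k in REQUIRED_FIELDS if k not in meta]
    if missing:
        raise ValueError(f"plugin metadata incomplete, missing: {missing}")
    return meta
```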


@@ -0,0 +1,508 @@
import os
import sys
import json
import time
import tiktoken
import threading
import traceback
import base64
from openai import AsyncOpenAI
from openai.types.images_response import ImagesResponse
from openai.types.chat.chat_completion import ChatCompletion
from openai import AuthenticationError, BadRequestError, RateLimitError, NotFoundError
from astrbot.persist.helper import dbConn
from model.provider.provider import Provider
from util import general_utils as gu
from util.cmd_config import CmdConfig
from SparkleLogging.utils.core import LogManager
from logging import Logger
from typing import List, Dict
from type.types import Context
logger: Logger = LogManager.GetLogger(log_name='astrbot')
MODELS = {
"gpt-4o": 128000,
"gpt-4o-2024-05-13": 128000,
"gpt-4-turbo": 128000,
"gpt-4-turbo-2024-04-09": 128000,
"gpt-4-turbo-preview": 128000,
"gpt-4-0125-preview": 128000,
"gpt-4-1106-preview": 128000,
"gpt-4-vision-preview": 128000,
"gpt-4-1106-vision-preview": 128000,
"gpt-4": 8192,
"gpt-4-0613": 8192,
"gpt-4-32k": 32768,
"gpt-4-32k-0613": 32768,
"gpt-3.5-turbo-0125": 16385,
"gpt-3.5-turbo": 16385,
"gpt-3.5-turbo-1106": 16385,
"gpt-3.5-turbo-instruct": 4096,
"gpt-3.5-turbo-16k": 16385,
"gpt-3.5-turbo-0613": 16385,
"gpt-3.5-turbo-16k-0613": 16385,
}
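The window sizes above drive the truncation done later in `text_chat`/`assemble_context`: token sequences are clipped so a completion reserve (300 tokens there) still fits in the model's window. A dependency-free sketch with plain token lists standing in for tiktoken output (names are ours):

```python
MODEL_WINDOWS = {"gpt-4": 8192, "gpt-3.5-turbo-instruct": 4096}

def clip_prompt(tokens: list, model: str, reserve: int = 300) -> list:
    # Keep at most (window - reserve) tokens so the completion has room,
    # mirroring the "- 300" reserve used by text_chat above.
    window = MODEL_WINDOWS.get(model)
    if window is None:
        return tokens  # unknown model: no clipping
    return tokens[: max(window - reserve, 0)]
```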
class ProviderOpenAIOfficial(Provider):
def __init__(self, context: Context) -> None:
super().__init__()
os.makedirs("data/openai", exist_ok=True)
self.cc = CmdConfig
self.key_data_path = "data/openai/keys.json"
self.api_keys = []
self.chosen_api_key = None
self.base_url = None
self.keys_data = {} # 记录超额
cfg = context.base_config['openai']
if cfg['key']: self.api_keys = cfg['key']
if cfg['api_base']: self.base_url = cfg['api_base']
if not self.api_keys:
logger.warn("No OpenAI API key configured; OpenAI LLM capabilities will be disabled.")
else:
self.chosen_api_key = self.api_keys[0]
for key in self.api_keys:
self.keys_data[key] = True
self.client = AsyncOpenAI(
api_key=self.chosen_api_key,
base_url=self.base_url
)
self.model_configs: Dict = cfg['chatGPTConfigs']
super().set_curr_model(self.model_configs['model'])
self.image_generator_model_configs: Dict = self.cc.get('openai_image_generate', None)
self.session_memory: Dict[str, List] = {} # 会话记忆
self.session_memory_lock = threading.Lock()
self.max_tokens = self.model_configs['max_tokens'] # 上下文窗口大小
logger.info("正在载入分词器 cl100k_base...")
self.tokenizer = tiktoken.get_encoding("cl100k_base") # todo: 根据 model 切换分词器
logger.info("分词器载入完成。")
self.DEFAULT_PERSONALITY = context.default_personality
self.curr_personality = self.DEFAULT_PERSONALITY
self.session_personality = {} # 记录了某个session是否已设置人格。
# 从 SQLite DB 读取历史记录
try:
db1 = dbConn()
for session in db1.get_all_session():
self.session_memory_lock.acquire()
self.session_memory[session[0]] = json.loads(session[1])['data']
self.session_memory_lock.release()
except BaseException as e:
logger.warn(f"读取 OpenAI LLM 对话历史记录 失败:{e}。仍可正常使用。")
# 定时保存历史记录
threading.Thread(target=self.dump_history, daemon=True).start()
def dump_history(self):
'''
转储历史记录
'''
time.sleep(10)
db = dbConn()
while True:
try:
for key in self.session_memory:
data = self.session_memory[key]
data_json = {
'data': data
}
if db.check_session(key):
db.update_session(key, json.dumps(data_json))
else:
db.insert_session(key, json.dumps(data_json))
logger.debug("已保存 OpenAI 会话历史记录")
except BaseException as e:
print(e)
finally:
time.sleep(10*60)
def personality_set(self, default_personality: dict, session_id: str):
if not default_personality: return
if session_id not in self.session_memory:
self.session_memory[session_id] = []
self.curr_personality = default_personality
self.session_personality = {} # 重置
encoded_prompt = self.tokenizer.encode(default_personality['prompt'])
tokens_num = len(encoded_prompt)
model = self.model_configs['model']
if model in MODELS and tokens_num > MODELS[model] - 500:
default_personality['prompt'] = self.tokenizer.decode(encoded_prompt[:MODELS[model] - 500])
new_record = {
"user": {
"role": "system",
"content": default_personality['prompt'],
},
'usage_tokens': 0, # running total of tokens up to this record
'single_tokens': 0 # tokens in this record alone
}
self.session_memory[session_id].append(new_record)
async def encode_image_bs64(self, image_url: str) -> str:
'''
将图片转换为 base64
'''
if image_url.startswith("http"):
image_url = await gu.download_image_by_url(image_url)
with open(image_url, "rb") as f:
image_bs64 = base64.b64encode(f.read()).decode()
return "data:image/jpeg;base64," + image_bs64
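The data-URL shape built by `encode_image_bs64` — base64-encoded bytes behind a `data:image/jpeg;base64,` prefix — is easy to verify in isolation, minus the download step:

```python
import base64

def to_data_url(raw: bytes, mime: str = "image/jpeg") -> str:
    # Same output shape as encode_image_bs64 above, for in-memory bytes.
    return f"data:{mime};base64," + base64.b64encode(raw).decode()
```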
async def retrieve_context(self, session_id: str):
'''
根据 session_id 获取保存的 OpenAI 格式的上下文
'''
if session_id not in self.session_memory:
raise Exception("会话 ID 不存在")
# 转换为 openai 要求的格式
context = []
is_lvm = await self.is_lvm()
for record in self.session_memory[session_id]:
if "user" in record and record['user']:
if not is_lvm and "content" in record['user'] and isinstance(record['user']['content'], list):
logger.warn(f"The current model {self.model_configs['model']} does not support vision; image inputs in the context will be ignored. If this warning keeps appearing, try the reset command.")
continue
context.append(record['user'])
if "AI" in record and record['AI']:
context.append(record['AI'])
return context
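The record-flattening loop in `retrieve_context` can be rebuilt as a pure function: interleave stored user/AI entries into OpenAI-style messages, dropping whole vision turns (list-valued content) when the model is not multimodal. A sketch (function name is ours):

```python
def flatten_records(records: list, is_lvm: bool) -> list:
    # Mirrors the loop above, including its `continue`, which skips the
    # AI half of an image turn along with the user half.
    context = []
    for record in records:
        user = record.get("user")
        if user:
            if not is_lvm and isinstance(user.get("content"), list):
                continue  # skip image turns for text-only models
            context.append(user)
        if record.get("AI"):
            context.append(record["AI"])
    return context
```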
async def is_lvm(self):
'''
是否是 LVM
'''
return self.model_configs['model'].startswith("gpt-4")
async def get_models(self):
try:
models = await self.client.models.list()
except NotFoundError:
# retry with /v1 appended to the base_url
self.client.base_url = str(self.client.base_url) + "/v1"
models = await self.client.models.list()
return filter(lambda x: x.id.startswith("gpt"), models.data)
async def assemble_context(self, session_id: str, prompt: str, image_url: str = None):
'''
组装上下文,并且根据当前上下文窗口大小截断
'''
if session_id not in self.session_memory:
raise Exception("会话 ID 不存在")
tokens_num = len(self.tokenizer.encode(prompt))
previous_total_tokens_num = 0 if not self.session_memory[session_id] else self.session_memory[session_id][-1]['usage_tokens']
message = {
"usage_tokens": previous_total_tokens_num + tokens_num,
"single_tokens": tokens_num,
"AI": None
}
if image_url:
user_content = {
"role": "user",
"content": [
{
"type": "text",
"text": prompt
},
{
"type": "image_url",
"image_url": {
"url": await self.encode_image_bs64(image_url)
}
}
]
}
else:
user_content = {
"role": "user",
"content": prompt
}
message["user"] = user_content
self.session_memory[session_id].append(message)
# 根据 模型的上下文窗口 淘汰掉多余的记录
curr_model = self.model_configs['model']
if curr_model in MODELS:
maxium_tokens_num = MODELS[curr_model] - 300 # 至少预留 300 给 completion
# if message['usage_tokens'] > maxium_tokens_num:
# 淘汰多余的记录,使得最终的 usage_tokens 不超过 maxium_tokens_num - 300
# contexts = self.session_memory[session_id]
# need_to_remove_idx = 0
# freed_tokens_num = contexts[0]['single-tokens']
# while freed_tokens_num < message['usage_tokens'] - maxium_tokens_num:
# need_to_remove_idx += 1
# freed_tokens_num += contexts[need_to_remove_idx]['single-tokens']
# # 更新之后的所有记录的 usage_tokens
# for i in range(len(contexts)):
# if i > need_to_remove_idx:
# contexts[i]['usage_tokens'] -= freed_tokens_num
# logger.debug(f"淘汰上下文记录 {need_to_remove_idx+1} 条,释放 {freed_tokens_num} 个 token。当前上下文总 token 为 {contexts[-1]['usage_tokens']}。")
# self.session_memory[session_id] = contexts[need_to_remove_idx+1:]
while len(self.session_memory[session_id]) and self.session_memory[session_id][-1]['usage_tokens'] > maxium_tokens_num:
await self.pop_record(session_id)
async def pop_record(self, session_id: str, pop_system_prompt: bool = False):
'''
弹出第一条记录
'''
if session_id not in self.session_memory:
raise Exception("会话 ID 不存在")
if len(self.session_memory[session_id]) == 0:
return None
record = None
for i in range(len(self.session_memory[session_id])):
# keep a system prompt unless pop_system_prompt is set
if not pop_system_prompt and self.session_memory[session_id][i]['user']['role'] == "system":
# only remove it if another system prompt follows
f = False
for j in range(i+1, len(self.session_memory[session_id])):
if self.session_memory[session_id][j]['user']['role'] == "system":
f = True
break
if not f:
continue
record = self.session_memory[session_id].pop(i)
break
if record is None:
return None
# update usage_tokens on all remaining records
for i in range(len(self.session_memory[session_id])):
self.session_memory[session_id][i]['usage_tokens'] -= record['single_tokens']
if self.session_memory[session_id]:
logger.debug(f"Evicted 1 context record, freeing {record['single_tokens']} tokens; context total is now {self.session_memory[session_id][-1]['usage_tokens']}.")
return record
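The token bookkeeping in `pop_record` keeps a running total: each record stores its own token count plus the cumulative total, so evicting the oldest record means subtracting its count from every later total (key names follow `assemble_context`'s `single_tokens`/`usage_tokens`). A standalone sketch:

```python
def evict_first(records: list) -> dict:
    # Drop the oldest record and subtract its token count from every
    # later running total, as pop_record does above.
    record = records.pop(0)
    for r in records:
        r["usage_tokens"] -= record["single_tokens"]
    return record
```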
async def text_chat(self,
prompt: str,
session_id: str,
image_url: None=None,
tools: None=None,
extra_conf: Dict = None,
**kwargs
) -> str:
super().accu_model_stat()
if not session_id:
session_id = "unknown"
if "unknown" in self.session_memory:
del self.session_memory["unknown"]
if session_id not in self.session_memory:
self.session_memory[session_id] = []
if session_id not in self.session_personality or not self.session_personality[session_id]:
self.personality_set(self.curr_personality, session_id)
self.session_personality[session_id] = True
# 如果 prompt 超过了最大窗口,截断。
# 1. 可以保证之后 pop 的时候不会出现问题
# 2. 可以保证不会超过最大 token 数
_encoded_prompt = self.tokenizer.encode(prompt)
curr_model = self.model_configs['model']
if curr_model in MODELS and len(_encoded_prompt) > MODELS[curr_model] - 300:
_encoded_prompt = _encoded_prompt[:MODELS[curr_model] - 300]
prompt = self.tokenizer.decode(_encoded_prompt)
# 组装上下文,并且根据当前上下文窗口大小截断
await self.assemble_context(session_id, prompt, image_url)
# 获取上下文openai 格式
contexts = await self.retrieve_context(session_id)
conf = self.model_configs
if extra_conf: conf.update(extra_conf)
# start request
retry = 0
rate_limit_retry = 0
while retry < 3 or rate_limit_retry < 5:
logger.debug(conf)
logger.debug(contexts)
if tools:
completion_coro = self.client.chat.completions.create(
messages=contexts,
tools=tools,
**conf
)
else:
completion_coro = self.client.chat.completions.create(
messages=contexts,
**conf
)
try:
completion = await completion_coro
break
except AuthenticationError as e:
api_key = self.chosen_api_key[:10] + "..."
logger.error(f"OpenAI API Key {api_key} 验证错误。详细原因:{e}。正在切换到下一个可用的 Key如果有的话")
self.keys_data[self.chosen_api_key] = False
ok = await self.switch_to_next_key()
if ok: continue
else: raise Exception("所有 OpenAI API Key 目前都不可用。")
except BadRequestError as e:
logger.warn(f"OpenAI 请求异常:{e}")
if "image_url is only supported by certain models." in str(e):
raise Exception(f"当前模型 { self.model_configs['model'] } 不支持图片输入,请更换模型。")
retry += 1
except RateLimitError as e:
if "You exceeded your current quota" in str(e):
self.keys_data[self.chosen_api_key] = False
ok = await self.switch_to_next_key()
if ok: continue
else: raise Exception("所有 OpenAI API Key 目前都不可用。")
logger.error(f"OpenAI API Key {self.chosen_api_key} 达到请求速率限制或者官方服务器当前超载。详细原因:{e}")
await self.switch_to_next_key()
rate_limit_retry += 1
time.sleep(1)
except Exception as e:
retry += 1
if retry >= 3:
logger.error(traceback.format_exc())
raise Exception(f"OpenAI 请求失败:{e}。重试次数已达到上限。")
if "maximum context length" in str(e):
logger.warn(f"OpenAI 请求失败:{e}。上下文长度超过限制。尝试弹出最早的记录然后重试。")
self.pop_record(session_id)
logger.warning(traceback.format_exc())
logger.warning(f"OpenAI 请求失败:{e}。重试第 {retry} 次。")
time.sleep(1)
assert isinstance(completion, ChatCompletion)
logger.debug(f"openai completion: {completion.usage}")
choice = completion.choices[0]
usage_tokens = completion.usage.total_tokens
completion_tokens = completion.usage.completion_tokens
self.session_memory[session_id][-1]['usage_tokens'] = usage_tokens
self.session_memory[session_id][-1]['single_tokens'] += completion_tokens
if choice.message.content:
# 返回文本
completion_text = str(choice.message.content).strip()
elif choice.message.tool_calls:
# tools call (function calling)
return choice.message.tool_calls[0].function
self.session_memory[session_id][-1]['AI'] = {
"role": "assistant",
"content": completion_text
}
return completion_text
async def switch_to_next_key(self):
'''
切换到下一个 API Key
'''
if not self.api_keys:
logger.error("OpenAI API Key 不存在。")
return False
for key in self.keys_data:
if self.keys_data[key]:
# 没超额
self.chosen_api_key = key
self.client.api_key = key
logger.info(f"OpenAI 切换到 API Key {key[:10]}... 成功。")
return True
return False
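`switch_to_next_key` scans `keys_data` for the first key still marked usable; since Python dicts preserve insertion order, this walks keys in configuration order. A sketch of just the selection step (function name is ours):

```python
def next_usable_key(keys_data: dict):
    # First key still marked usable (True), else None — the same scan
    # switch_to_next_key performs before mutating the client.
    for key, usable in keys_data.items():
        if usable:
            return key
    return None
```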
async def image_generate(self, prompt: str, session_id: str = None, **kwargs) -> str:
'''
生成图片
'''
retry = 0
conf = self.image_generator_model_configs
if not conf:
logger.error("OpenAI image generation model config is missing.")
raise Exception("OpenAI image generation model config is missing.")
super().accu_model_stat(model=conf['model'])
while retry < 3:
try:
images_response = await self.client.images.generate(
prompt=prompt,
**conf
)
image_url = images_response.data[0].url
return image_url
except Exception as e:
retry += 1
if retry >= 3:
logger.error(traceback.format_exc())
raise Exception(f"OpenAI 图片生成请求失败:{e}。重试次数已达到上限。")
logger.warning(f"OpenAI 图片生成请求失败:{e}。重试第 {retry} 次。")
time.sleep(1)
async def forget(self, session_id=None, keep_system_prompt: bool=False) -> bool:
if session_id is None: return False
self.session_memory[session_id] = []
if keep_system_prompt:
self.personality_set(self.curr_personality, session_id)
else:
self.curr_personality = self.DEFAULT_PERSONALITY
return True
def dump_contexts_page(self, session_id: str, size=5, page=1,):
'''
获取缓存的会话
'''
# contexts_str = ""
# for i, key in enumerate(self.session_memory):
# if i < (page-1)*size or i >= page*size:
# continue
# contexts_str += f"Session ID: {key}\n"
# for record in self.session_memory[key]:
# if "user" in record:
# contexts_str += f"User: {record['user']['content']}\n"
# if "AI" in record:
# contexts_str += f"AI: {record['AI']['content']}\n"
# contexts_str += "---\n"
contexts_str = ""
if session_id in self.session_memory:
for record in self.session_memory[session_id]:
if "user" in record and record['user']:
text = record['user']['content'][:100] + "..." if len(record['user']['content']) > 100 else record['user']['content']
contexts_str += f"User: {text}\n"
if "AI" in record and record['AI']:
text = record['AI']['content'][:100] + "..." if len(record['AI']['content']) > 100 else record['AI']['content']
contexts_str += f"Assistant: {text}\n"
else:
return "Session ID does not exist.", 0
return contexts_str, len(self.session_memory[session_id])
def set_model(self, model: str):
self.model_configs['model'] = model
self.cc.put_by_dot_str("openai.chatGPTConfigs.model", model)
super().set_curr_model(model)
def get_configs(self):
return self.model_configs
def get_keys_data(self):
return self.keys_data
def get_curr_key(self):
return self.chosen_api_key
def set_key(self, key):
self.client.api_key = key


@@ -1,18 +1,58 @@
import abc
from collections import defaultdict
class Provider:
def __init__(self, cfg):
pass
def __init__(self) -> None:
self.model_stat = defaultdict(int) # 用于记录 LLM Model 使用数据
self.curr_model_name = "unknown"
def reset_model_stat(self):
self.model_stat.clear()
def set_curr_model(self, model_name: str):
self.curr_model_name = model_name
def get_curr_model(self):
'''
返回当前正在使用的 LLM
'''
return self.curr_model_name
def accu_model_stat(self, model: str = None):
if not model:
model = self.get_curr_model()
self.model_stat[model] += 1
async def text_chat(self,
prompt: str,
session_id: str,
image_url: None = None,
tools: None = None,
extra_conf: dict = None,
default_personality: dict = None,
**kwargs) -> str:
'''
[require]
prompt: the prompt text
session_id: the session id
[optional]
image_url: image URL (for vision input)
tools: function-calling tools
extra_conf: extra configuration
default_personality: default persona
'''
raise NotImplementedError()
def text_chat(self, prompt):
pass
def image_chat(self, prompt):
pass
async def image_generate(self, prompt, session_id, **kwargs) -> str:
'''
[require]
prompt: 提示词
session_id: 会话id
'''
raise NotImplementedError()
def memory(self):
pass
@abc.abstractmethod
def forget(self) -> bool:
pass
async def forget(self, session_id=None) -> bool:
'''
重置会话
'''
raise NotImplementedError()
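The refactored `Provider` base class above is an async interface plus per-model usage counters. A minimal runnable sketch of how a backend would subclass it — the trimmed base copy and the `EchoProvider` backend are ours, for illustration only:

```python
import asyncio
from collections import defaultdict

class Provider:
    # Trimmed copy of the base class above, for the demo below.
    def __init__(self) -> None:
        self.model_stat = defaultdict(int)
        self.curr_model_name = "unknown"

    def set_curr_model(self, model_name):
        self.curr_model_name = model_name

    def accu_model_stat(self, model=None):
        self.model_stat[model or self.curr_model_name] += 1

    async def text_chat(self, prompt, session_id, **kwargs) -> str:
        raise NotImplementedError()

class EchoProvider(Provider):
    # Hypothetical minimal backend: records usage, echoes the prompt.
    async def text_chat(self, prompt, session_id, **kwargs) -> str:
        self.accu_model_stat()
        return prompt

p = EchoProvider()
p.set_curr_model("echo-1")
reply = asyncio.run(p.text_chat("ping", "s1"))
```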


@@ -1,398 +0,0 @@
import openai
import json
import time
import os
import sys
from cores.database.conn import dbConn
from model.provider.provider import Provider
import threading
abs_path = os.path.dirname(os.path.realpath(sys.argv[0])) + '/'
key_record_path = abs_path+'chatgpt_key_record'
class ProviderOpenAIOfficial(Provider):
def __init__(self, cfg):
self.key_list = []
if 'api_base' in cfg and cfg['api_base'] != 'none' and cfg['api_base'] != '':
openai.api_base = cfg['api_base']
if cfg['key'] != '' and cfg['key'] != None:
print("[System] 读取ChatGPT Key成功")
self.key_list = cfg['key']
else:
input("[System] 请先去完善ChatGPT的Key。详情请前往https://beta.openai.com/account/api-keys")
# init key record
self.init_key_record()
self.chatGPT_configs = cfg['chatGPTConfigs']
print(f'[System] 加载ChatGPTConfigs: {self.chatGPT_configs}')
self.openai_configs = cfg
# 会话缓存
self.session_dict = {}
# 最大缓存token
self.max_tokens = cfg['total_tokens_limit']
# 历史记录持久化间隔时间
self.history_dump_interval = 20
# 读取历史记录
try:
db1 = dbConn()
for session in db1.get_all_session():
self.session_dict[session[0]] = json.loads(session[1])['data']
print("[System] 历史记录读取成功喵")
except BaseException as e:
print("[System] 历史记录读取失败: " + str(e))
# 读取统计信息
if not os.path.exists(abs_path+"configs/stat"):
with open(abs_path+"configs/stat", 'w', encoding='utf-8') as f:
json.dump({}, f)
self.stat_file = open(abs_path+"configs/stat", 'r', encoding='utf-8')
global count
res = self.stat_file.read()
if res == '':
count = {}
else:
try:
count = json.loads(res)
except BaseException:
pass
# 创建转储定时器线程
threading.Thread(target=self.dump_history, daemon=True).start()
# 人格
self.now_personality = {}
# 转储历史记录的定时器~ Soulter
def dump_history(self):
time.sleep(10)
db = dbConn()
while True:
try:
# print("转储历史记录...")
for key in self.session_dict:
# print("TEST: "+str(db.get_session(key)))
data = self.session_dict[key]
data_json = {
'data': data
}
if db.check_session(key):
db.update_session(key, json.dumps(data_json))
else:
db.insert_session(key, json.dumps(data_json))
# print("转储历史记录完毕")
except BaseException as e:
print(e)
# 每隔10分钟转储一次
time.sleep(10*self.history_dump_interval)
def text_chat(self, prompt, session_id):
# 会话机制
if session_id not in self.session_dict:
self.session_dict[session_id] = []
fjson = {}
try:
f = open(abs_path+"configs/session", "r", encoding="utf-8")
fjson = json.loads(f.read())
f.close()
except:
pass
finally:
fjson[session_id] = 'true'
f = open(abs_path+"configs/session", "w", encoding="utf-8")
f.write(json.dumps(fjson))
f.flush()
f.close()
cache_data_list, new_record, req = self.wrap(prompt, session_id)
retry = 0
response = None
while retry < 5:
try:
response = openai.ChatCompletion.create(
messages=req,
**self.chatGPT_configs
)
break
except Exception as e:
print(e)
if 'You exceeded' in str(e) or 'Billing hard limit has been reached' in str(e) or 'No API key provided' in str(e) or 'Incorrect API key provided' in str(e):
print("[System] 当前Key已超额或者不正常,正在切换")
self.key_stat[openai.api_key]['exceed'] = True
self.save_key_record()
response, is_switched = self.handle_switch_key(req)
if not is_switched:
# 所有Key都超额或不正常
raise e
else:
break
if 'maximum context length' in str(e):
print("token超限, 清空对应缓存")
self.session_dict[session_id] = []
cache_data_list, new_record, req = self.wrap(prompt, session_id)
retry+=1
if retry >= 5:
raise BaseException("连接超时")
self.key_stat[openai.api_key]['used'] += response['usage']['total_tokens']
self.save_key_record()
print("[ChatGPT] "+str(response["choices"][0]["message"]["content"]))
chatgpt_res = str(response["choices"][0]["message"]["content"]).strip()
current_usage_tokens = response['usage']['total_tokens']
# 超过指定tokens 尽可能的保留最多的条目直到小于max_tokens
if current_usage_tokens > self.max_tokens:
t = current_usage_tokens
index = 0
while t > self.max_tokens:
if index >= len(cache_data_list):
break
# 保留人格信息
if 'user' in cache_data_list[index] and cache_data_list[index]['user']['role'] != 'system':
t -= int(cache_data_list[index]['single_tokens'])
del cache_data_list[index]
else:
index += 1
# 删除完后更新相关字段
self.session_dict[session_id] = cache_data_list
# cache_prompt = get_prompts_by_cache_list(cache_data_list)
# 添加新条目进入缓存的prompt
new_record['AI'] = {
'role': 'assistant',
'content': chatgpt_res,
}
new_record['usage_tokens'] = current_usage_tokens
if len(cache_data_list) > 0:
new_record['single_tokens'] = current_usage_tokens - int(cache_data_list[-1]['usage_tokens'])
else:
new_record['single_tokens'] = current_usage_tokens
cache_data_list.append(new_record)
self.session_dict[session_id] = cache_data_list
return chatgpt_res
def image_chat(self, prompt, img_num = 1, img_size = "1024x1024"):
retry = 0
image_url = ''
while retry < 5:
try:
# print("test1")
response = openai.Image.create(
prompt=prompt,
n=img_num,
size=img_size
)
# print("test2")
image_url = []
for i in range(img_num):
image_url.append(response['data'][i]['url'])
print(image_url)
break
except Exception as e:
print(e)
if 'You exceeded' in str(e) or 'Billing hard limit has been reached' in str(
e) or 'No API key provided' in str(e) or 'Incorrect API key provided' in str(e):
print("[System] 当前Key已超额或者不正常,正在切换")
self.key_stat[openai.api_key]['exceed'] = True
self.save_key_record()
response, is_switched = self.handle_switch_key(req)
if not is_switched:
# 所有Key都超额或不正常
raise e
else:
break
retry += 1
if retry >= 5:
raise BaseException("连接超时")
return image_url
def forget(self, session_id) -> bool:
self.session_dict[session_id] = []
return True
'''
获取缓存的会话
'''
def get_prompts_by_cache_list(self, cache_data_list, divide=False, paging=False, size=5, page=1):
prompts = ""
if paging:
page_begin = (page-1)*size
page_end = page*size
if page_begin < 0:
page_begin = 0
if page_end > len(cache_data_list):
page_end = len(cache_data_list)
cache_data_list = cache_data_list[page_begin:page_end]
for item in cache_data_list:
prompts += str(item['user']['role']) + ":\n" + str(item['user']['content']) + "\n"
prompts += str(item['AI']['role']) + ":\n" + str(item['AI']['content']) + "\n"
if divide:
prompts += "----------\n"
return prompts
def get_user_usage_tokens(self,cache_list):
usage_tokens = 0
for item in cache_list:
usage_tokens += int(item['single_tokens'])
return usage_tokens
'''
获取统计信息
'''
def get_stat(self):
try:
f = open(abs_path+"configs/stat", "r", encoding="utf-8")
fjson = json.loads(f.read())
f.close()
guild_count = 0
guild_msg_count = 0
guild_direct_msg_count = 0
for k,v in fjson.items():
guild_count += 1
guild_msg_count += v['count']
guild_direct_msg_count += v['direct_count']
session_count = 0
f = open(abs_path+"configs/session", "r", encoding="utf-8")
fjson = json.loads(f.read())
f.close()
for k,v in fjson.items():
session_count += 1
return guild_count, guild_msg_count, guild_direct_msg_count, session_count
except:
return -1, -1, -1, -1
# 包装信息
def wrap(self, prompt, session_id):
# 获得缓存信息
context = self.session_dict[session_id]
new_record = {
"user": {
"role": "user",
"content": prompt,
},
"AI": {},
'usage_tokens': 0,
}
req_list = []
for i in context:
if 'user' in i:
req_list.append(i['user'])
if 'AI' in i:
req_list.append(i['AI'])
req_list.append(new_record['user'])
return context, new_record, req_list
def handle_switch_key(self, req):
# messages = [{"role": "user", "content": prompt}]
while True:
is_all_exceed = True
for key in self.key_stat:
if key == None:
continue
if not self.key_stat[key]['exceed']:
is_all_exceed = False
openai.api_key = key
print(f"[System] 切换到Key: {key}, 已使用token: {self.key_stat[key]['used']}")
if len(req) > 0:
try:
response = openai.ChatCompletion.create(
messages=req,
**self.chatGPT_configs
)
return response, True
except Exception as e:
print(e)
if 'You exceeded' in str(e):
print("[System] 当前Key已超额,正在切换")
self.key_stat[openai.api_key]['exceed'] = True
self.save_key_record()
time.sleep(1)
continue
else:
return True
if is_all_exceed:
print("[System] 所有Key已超额")
return None, False
else:
print("[System] 在切换key时程序异常。")
return None, False
def getConfigs(self):
return self.openai_configs
def save_key_record(self):
with open(key_record_path, 'w', encoding='utf-8') as f:
json.dump(self.key_stat, f)
def get_key_stat(self):
return self.key_stat
def get_key_list(self):
return self.key_list
# 添加key
def append_key(self, key, sponsor):
self.key_list.append(key)
self.key_stat[key] = {'exceed': False, 'used': 0, 'sponsor': sponsor}
self.save_key_record()
self.init_key_record()
# 检查key是否可用
def check_key(self, key):
pre_key = openai.api_key
openai.api_key = key
messages = [{"role": "user", "content": "1"}]
try:
response = openai.ChatCompletion.create(
messages=messages,
**self.chatGPT_configs
)
openai.api_key = pre_key
return True
except Exception as e:
pass
openai.api_key = pre_key
return False
#将key_list的key转储到key_record中并记录相关数据
def init_key_record(self):
if not os.path.exists(key_record_path):
with open(key_record_path, 'w', encoding='utf-8') as f:
json.dump({}, f)
with open(key_record_path, 'r', encoding='utf-8') as keyfile:
try:
self.key_stat = json.load(keyfile)
except Exception as e:
print(e)
self.key_stat = {}
finally:
for key in self.key_list:
if key not in self.key_stat:
self.key_stat[key] = {'exceed': False, 'used': 0}
# if openai.api_key is None:
# openai.api_key = key
else:
# if self.key_stat[key]['exceed']:
# print(f"Key: {key} 已超额")
# continue
# else:
# if openai.api_key is None:
# openai.api_key = key
# print(f"使用Key: {key}, 已使用token: {self.key_stat[key]['used']}")
pass
if openai.api_key == None:
self.handle_switch_key("")
self.save_key_record()


@@ -1,70 +0,0 @@
from revChatGPT.V1 import Chatbot
from model.provider.provider import Provider
class ProviderRevChatGPT(Provider):
def __init__(self, config):
self.rev_chatgpt = []
for i in range(0, len(config['account'])):
try:
print(f"[System] 创建rev_ChatGPT负载{str(i)}: " + str(config['account'][i]))
if 'password' in config['account'][i]:
config['account'][i]['password'] = str(config['account'][i]['password'])
revstat = {
'obj': Chatbot(config=config['account'][i]),
'busy': False
}
self.rev_chatgpt.append(revstat)
except BaseException as e:
print(f"[System] 创建rev_ChatGPT负载失败: {str(e)}")
def forget(self) -> bool:
return False
def request_text(self, prompt: str, bot) -> str:
resp = ''
err_count = 0
retry_count = 5
while err_count < retry_count:
try:
for data in bot.ask(prompt):
resp = data["message"]
break
except BaseException as e:
try:
print("[RevChatGPT] 请求出现了一些问题, 正在重试。次数"+str(err_count))
err_count += 1
if err_count >= retry_count:
raise e
except BaseException:
err_count += 1
print("[RevChatGPT] "+str(resp))
return resp
def text_chat(self, prompt):
res = ''
print("[Debug] "+str(self.rev_chatgpt))
for revstat in self.rev_chatgpt:
if not revstat['busy']:
try:
revstat['busy'] = True
print("[Debug] 使用逆向ChatGPT回复ing", end='', flush=True)
res = self.request_text(prompt, revstat['obj'])
print("OK")
revstat['busy'] = False
# 处理结果文本
chatgpt_res = res.strip()
return res
except Exception as e:
print("[System-Error] 逆向ChatGPT回复失败" + str(e))
try:
if e.code == 2:
print("[System-Error] 频率限制,正在切换账号。"+ str(e))
continue
else:
res = '所有的非忙碌OpenAI账号经过测试都暂时出现问题请稍后再试或者联系管理员~'
return res
except BaseException:
continue
res = '所有的OpenAI账号都有负载, 请稍后再试~'


@@ -1,79 +0,0 @@
from model.provider.provider import Provider
from EdgeGPT import Chatbot, ConversationStyle
import json
class ProviderRevEdgeGPT(Provider):
    def __init__(self):
        self.busy = False
        self.wait_stack = []
        with open('./cookies.json', 'r') as f:
            cookies = json.load(f)
        self.bot = Chatbot(cookies=cookies)

    def is_busy(self):
        return self.busy

    async def forget(self):
        try:
            await self.bot.reset()
            return True
        except BaseException:
            return False

    async def text_chat(self, prompt):
        if self.busy:
            return
        self.busy = True
        resp = 'err'
        err_count = 0
        retry_count = 5
        while err_count < retry_count:
            try:
                resp = await self.bot.ask(prompt=prompt, conversation_style=ConversationStyle.creative)
                msg_obj = resp['item']['messages'][-1]
                reply_msg = msg_obj['text']
                reply_source = msg_obj.get('sourceAttributions', [])
                throttling = resp['item'].get('throttling')
                if 'I\'m sorry but I prefer not to continue this conversation. I\'m still learning so I appreciate your understanding and patience.' in reply_msg:
                    self.busy = False
                    return '', 0
                if reply_msg == prompt:
                    # The bot echoed the prompt back, which usually means it
                    # refused to continue; reset the conversation and retry.
                    await self.forget()
                    err_count += 1
                    continue
                if reply_msg is None:
                    # The bot declined to answer.
                    self.busy = False
                    return '', 0
                if len(reply_source) > 0:
                    reply_msg += "\n\nSources:\n"
                    for index, i in enumerate(reply_source, start=1):
                        reply_msg += f"[{index}]: {i['seeMoreUrl']} | {i['providerDisplayName']}\n"
                if throttling is not None:
                    reply_msg += f"\n{throttling['numUserMessagesInConversation']}/{throttling['maxNumUserMessagesInConversation']}"
                break
            except BaseException as e:
                print(e)
                err_count += 1
                if err_count >= retry_count:
                    self.busy = False
                    raise e
                print("[RevEdgeGPT] request ran into a problem, retrying. Attempt " + str(err_count))
        self.busy = False
        print("[RevEdgeGPT] " + str(reply_msg))
        return reply_msg, 1
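The retry loop above (count failures, re-raise once `retry_count` attempts are exhausted) is a recurring pattern in these providers. A stand-alone sketch with a hypothetical flaky call:

```python
def with_retries(call, retry_count=5):
    # Retry `call` until it succeeds or the attempt budget is exhausted,
    # re-raising the last error (mirrors the loop in text_chat above).
    err_count = 0
    while True:
        try:
            return call()
        except Exception:
            err_count += 1
            if err_count >= retry_count:
                raise
            print(f"request failed, retrying. Attempt {err_count}")

attempts = {"n": 0}

def flaky():
    # Hypothetical call that fails twice before succeeding.
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("transient error")
    return "ok"

print(with_retries(flaky))  # ok
```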


@@ -1,10 +1,19 @@
requests~=2.28.1
openai~=0.27.4
qq-botpy~=1.1.2
revChatGPT~=4.0.8
baidu-aip~=4.16.9
EdgeGPT~=0.1.22.1
pydantic~=1.10.4
aiohttp
requests
openai
qq-botpy
chardet~=5.1.0
Pillow~=9.4.0
GitPython~=3.1.31
git+https://github.com/Lxns-Network/nakuru-project.git
Pillow
nakuru-project
beautifulsoup4
googlesearch-python
tiktoken
readability-lxml
baidu-aip
websockets
flask
psutil
lxml_html_clean
SparkleLogging
aiocqhttp

(Five binary image files removed, not shown: 143 KiB, 110 KiB, 241 KiB, 239 KiB, 59 KiB.)

type/astrbot_message.py

@@ -0,0 +1,37 @@
import time
from enum import Enum
from typing import List
from dataclasses import dataclass
from nakuru.entities.components import BaseMessageComponent

class MessageType(Enum):
    GROUP_MESSAGE = 'GroupMessage'    # group chat message
    FRIEND_MESSAGE = 'FriendMessage'  # direct/friend (one-on-one) message
    GUILD_MESSAGE = 'GuildMessage'    # guild/channel message

@dataclass
class MessageMember():
    user_id: str  # sender id
    nickname: str = None

class AstrBotMessage():
    '''
    AstrBot's message object.
    '''
    tag: str  # message source tag
    type: MessageType  # message type
    self_id: str  # the bot's own id
    session_id: str  # session id
    message_id: str  # message id
    sender: MessageMember  # sender
    message: List[BaseMessageComponent]  # message chain, in Nakuru's message-chain format
    message_str: str  # the plain-text form of the message
    raw_message: object
    timestamp: int  # message timestamp

    def __init__(self) -> None:
        self.timestamp = int(time.time())

    def __str__(self) -> str:
        return str(self.__dict__)
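For illustration, a platform adapter would populate these fields roughly as follows. This is a minimal sketch: `Plain` below is a local stand-in for the Nakuru component class, and all ids are made up.

```python
import time
from dataclasses import dataclass

@dataclass
class Plain:
    # Stand-in for nakuru's Plain text component (assumption, not the real class).
    text: str

class AstrBotMessage:
    def __init__(self) -> None:
        self.timestamp = int(time.time())

    def __str__(self) -> str:
        return str(self.__dict__)

# A hypothetical aiocqhttp adapter filling in a group message:
msg = AstrBotMessage()
msg.tag = "aiocqhttp"
msg.type = "GroupMessage"
msg.self_id = "10000"
msg.session_id = "group_123"
msg.message_id = "1"
msg.message = [Plain("hello")]
msg.message_str = "hello"
msg.raw_message = None
print(msg.message_str)
```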

type/command.py

@@ -0,0 +1,66 @@
from typing import Union, List, Callable
from dataclasses import dataclass
from nakuru.entities.components import Plain, Image

@dataclass
class CommandItem():
    '''
    Describes a single command.
    '''
    command_name: Union[str, tuple]  # command name
    callback: Callable  # callback function
    description: str  # description
    origin: str  # where the command was registered from

class CommandResult():
    '''
    Used to return multiple values from a command.
    '''
    def __init__(self, hit: bool = True, success: bool = True, message_chain: list = None, command_name: str = "unknown_command") -> None:
        self.hit = hit
        self.success = success
        # Default to a fresh list; a mutable default argument would be shared
        # across every CommandResult instance.
        self.message_chain = message_chain if message_chain is not None else []
        self.command_name = command_name

    def message(self, message: str):
        '''
        Reply with a plain-text message.
        CommandResult().message("Hello, world!")
        '''
        self.message_chain = [Plain(message), ]
        return self

    def error(self, message: str):
        '''
        Reply with an error message.
        CommandResult().error("Hello, world!")
        '''
        self.success = False
        self.message_chain = [Plain(message), ]
        return self

    def url_image(self, url: str):
        '''
        Reply with an image (from a web URL).
        CommandResult().url_image("https://example.com/image.jpg")
        '''
        self.message_chain = [Image.fromURL(url), ]
        return self

    def file_image(self, path: str):
        '''
        Reply with an image (from a local file path).
        CommandResult().file_image("image.jpg")
        '''
        self.message_chain = [Image.fromFileSystem(path), ]
        return self

    def _result_tuple(self):
        return (self.success, self.message_chain, self.command_name)
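The fluent builder pattern above can be exercised stand-alone. This sketch replaces the Nakuru `Plain`/`Image` components with plain strings, keeping only the chaining behavior:

```python
class CommandResult:
    # Minimal re-sketch of the fluent result builder above, with plain
    # strings standing in for Nakuru message components.
    def __init__(self, hit=True, success=True, message_chain=None, command_name="unknown_command"):
        self.hit = hit
        self.success = success
        self.message_chain = message_chain if message_chain is not None else []
        self.command_name = command_name

    def message(self, text):
        # Each helper mutates self and returns it, so calls chain.
        self.message_chain = [text]
        return self

    def error(self, text):
        self.success = False
        self.message_chain = [text]
        return self

res = CommandResult().message("Hello, world!")
err = CommandResult().error("something went wrong")
print(res.success, err.success)
```

Returning `self` from every helper is what lets a command handler write `return CommandResult().message(...)` in one expression.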

type/config.py

@@ -0,0 +1 @@
VERSION = '3.3.0'

type/message_event.py

@@ -0,0 +1,55 @@
from typing import List, Union, Optional
from dataclasses import dataclass
from type.register import RegisteredPlatform
from type.types import Context
from type.astrbot_message import AstrBotMessage

class AstrMessageEvent():
    def __init__(self,
                 message_str: str,
                 message_obj: AstrBotMessage,
                 platform: RegisteredPlatform,
                 role: str,
                 context: Context,
                 session_id: str = None):
        '''
        An AstrBot message event.
        `message_str`: the plain-text message
        `message_obj`: the AstrBotMessage object
        `platform`: the platform object
        `role`: the sender's role, `admin` or `member`
        `context`: the global context object
        `session_id`: the session id
        '''
        self.context = context
        self.message_str = message_str
        self.message_obj = message_obj
        self.platform = platform
        self.role = role
        self.session_id = session_id

    @staticmethod
    def from_astrbot_message(message: AstrBotMessage,
                             context: Context,
                             platform_name: str,
                             session_id: str,
                             role: str = "member"):
        ame = AstrMessageEvent(message.message_str,
                               message,
                               context.find_platform(platform_name),
                               role,
                               context,
                               session_id)
        return ame

@dataclass
class MessageResult():
    result_message: Union[str, list]
    is_command_call: Optional[bool] = False
    callback: Optional[callable] = None

type/plugin.py

@@ -0,0 +1,53 @@
from enum import Enum
from types import ModuleType
from typing import List
from dataclasses import dataclass

class PluginType(Enum):
    PLATFORM = 'platform'  # platform plugins
    LLM = 'llm'  # large-language-model plugins
    COMMON = 'common'  # other plugins

@dataclass
class PluginMetadata:
    '''
    A plugin's metadata.
    '''
    # required
    plugin_name: str
    plugin_type: PluginType
    author: str  # plugin author
    desc: str  # short description
    version: str  # plugin version
    # optional
    repo: str = None  # plugin repository URL

    def __str__(self) -> str:
        return f"PluginMetadata({self.plugin_name}, {self.plugin_type}, {self.desc}, {self.version}, {self.repo})"

@dataclass
class RegisteredPlugin:
    '''
    A plugin registered in AstrBot.
    '''
    metadata: PluginMetadata
    plugin_instance: object
    module_path: str
    module: ModuleType
    root_dir_name: str
    trig_cnt: int = 0

    def reset_trig_cnt(self):
        self.trig_cnt = 0

    def trig(self):
        self.trig_cnt += 1

    def __str__(self) -> str:
        return f"RegisteredPlugin({self.metadata}, {self.module_path}, {self.root_dir_name})"

RegisteredPlugins = List[RegisteredPlugin]

type/register.py

@@ -0,0 +1,27 @@
from model.provider.provider import Provider as LLMProvider
from model.platform import Platform
from type.plugin import *
from typing import List
from dataclasses import dataclass

@dataclass
class RegisteredPlatform:
    '''
    A platform registered in AstrBot. Platforms should implement the Platform interface.
    '''
    platform_name: str
    platform_instance: Platform
    origin: str = None  # where it was registered from

    def __str__(self) -> str:
        return self.platform_name

@dataclass
class RegisteredLLM:
    '''
    A large-language-model binding registered in AstrBot. LLMs should implement the LLMProvider interface.
    '''
    llm_name: str
    llm_instance: LLMProvider
    origin: str = None  # where it was registered from

type/types.py

@@ -0,0 +1,77 @@
import asyncio
from asyncio import Task
from type.register import *
from typing import List, Awaitable
from logging import Logger
from util.cmd_config import CmdConfig
from util.t2i.renderer import TextToImageRenderer
from util.updator.astrbot_updator import AstrBotUpdator
from util.image_uploader import ImageUploader
from util.updator.plugin_updator import PluginUpdator
from model.plugin.command import PluginCommandBridge

class Context:
    '''
    Holds shared state that is passed between modules (e.g. core and command).
    '''
    def __init__(self):
        self.logger: Logger = None
        self.base_config: dict = None  # configuration (expected not to change after the bot starts)
        self.config_helper: CmdConfig = None
        self.cached_plugins: List[RegisteredPlugin] = []  # cached plugins
        self.platforms: List[RegisteredPlatform] = []
        self.llms: List[RegisteredLLM] = []
        self.default_personality: dict = None
        self.unique_session = False  # per-user (isolated) sessions
        self.version: str = None  # bot version
        self.nick = None  # gocq wake word
        self.stat = {}
        self.t2i_mode = False
        self.web_search = False  # whether web search is enabled
        self.reply_prefix = ""
        self.updator: AstrBotUpdator = None
        self.plugin_updator: PluginUpdator = None
        self.metrics_uploader = None
        self.plugin_command_bridge = PluginCommandBridge(self.cached_plugins)
        self.image_renderer = TextToImageRenderer()
        self.image_uploader = ImageUploader()
        self.message_handler = None  # see astrbot/message/handler.py
        self.ext_tasks: List[Task] = []

    def register_commands(self,
                          plugin_name: str,
                          command_name: str,
                          description: str,
                          priority: int,
                          handler: callable):
        '''
        Register a plugin command.
        `plugin_name`: the plugin name; it must match the one in your metadata.
        `command_name`: the command name, e.g. "help", without a prefix.
        `description`: the command description.
        `priority`: higher priorities are handled first; a sensible value is between 1 and 10.
        `handler`: the command handler; its parameters are message: AstrMessageEvent, context: Context.
        '''
        self.plugin_command_bridge.register_command(plugin_name, command_name, description, priority, handler)

    def register_task(self, coro: Awaitable, task_name: str):
        '''
        Register a task. Intended for plugins that need a long-running job.
        `coro`: the coroutine object
        `task_name`: an arbitrary name used to identify the task
        '''
        task = asyncio.create_task(coro, name=task_name)
        self.ext_tasks.append(task)

    def find_platform(self, platform_name: str) -> RegisteredPlatform:
        for platform in self.platforms:
            if platform_name == platform.platform_name:
                return platform
        raise ValueError("couldn't find the platform you specified")
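`register_task` wraps `asyncio.create_task` and keeps a handle in `ext_tasks` so the task is not garbage-collected. The lifecycle can be sketched stand-alone with a dummy coroutine (`heartbeat` is hypothetical):

```python
import asyncio

class Context:
    # Simplified sketch of the Context.register_task mechanism above.
    def __init__(self):
        self.ext_tasks = []

    def register_task(self, coro, task_name):
        # Schedule the coroutine on the running loop and keep a reference.
        task = asyncio.create_task(coro, name=task_name)
        self.ext_tasks.append(task)

async def heartbeat(log):
    # Hypothetical long-running plugin job, shortened to a single tick.
    log.append("tick")

async def main():
    ctx = Context()
    log = []
    ctx.register_task(heartbeat(log), "heartbeat")
    # The bot's main loop would normally keep running; here we just
    # wait for all registered tasks to finish.
    await asyncio.gather(*ctx.ext_tasks)
    return log

print(asyncio.run(main()))  # ['tick']
```

Note that `asyncio.create_task` requires a running event loop, so plugins must call `register_task` from within the bot's loop, not at import time.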

util/agent/func_call.py

@@ -0,0 +1,247 @@
import json
import util.general_utils as gu
import time

class FuncCallJsonFormatError(Exception):
    def __init__(self, msg):
        self.msg = msg

    def __str__(self):
        return self.msg

class FuncNotFoundError(Exception):
    def __init__(self, msg):
        self.msg = msg

    def __str__(self):
        return self.msg

class FuncCall():
    def __init__(self, provider) -> None:
        self.func_list = []
        self.provider = provider

    def add_func(self, name: str = None, func_args: list = None, desc: str = None, func_obj=None) -> None:
        if name is None or func_args is None or desc is None or func_obj is None:
            raise FuncCallJsonFormatError(
                "name, func_args, desc and func_obj must be provided.")
        params = {
            "type": "object",  # hardcoded here
            "properties": {}
        }
        for param in func_args:
            params['properties'][param['name']] = {
                "type": param['type'],
                "description": param['description']
            }
        self._func = {
            "name": name,
            "parameters": params,
            "description": desc,
            "func_obj": func_obj,
        }
        self.func_list.append(self._func)

    def func_dump(self, indent: int = 2) -> str:
        _l = []
        for f in self.func_list:
            _l.append({
                "name": f["name"],
                "parameters": f["parameters"],
                "description": f["description"],
            })
        return json.dumps(_l, indent=indent, ensure_ascii=False)

    def get_func(self) -> list:
        _l = []
        for f in self.func_list:
            _l.append({
                "type": "function",
                "function": {
                    "name": f["name"],
                    "parameters": f["parameters"],
                    "description": f["description"],
                }
            })
        return _l
    def func_call(self, question, func_definition, is_task=False, tasks=None, taskindex=-1, is_summary=True, session_id=None):
        funccall_prompt = """
I am implementing a function-call feature. It is meant to turn you into a parser from a given question to the given functions; it does not mean you create functions.
Below you are given information about the functions that may be needed, plus a question; you must convert the question into calls to the given functions.
- Your reply must contain only JSON. Strictly follow the template below (without the comments); it must contain the `res` and `func_call` fields:
```
{
    "res": string // If no matching function was found, you may answer normally here. Otherwise this is an empty string.
    "func_call": [ // An array containing every function call; if there are none, it is an empty array.
        {
            "res": string // If no matching function was found, you may answer normally here. Otherwise this is an empty string.
            "name": str, // the function's name
            "args_type": {
                "arg1": str, // the type of the function's parameter
                "arg2": str,
                ...
            },
            "args": {
                "arg1": any, // the function's argument
                "arg2": any,
                ...
            }
        },
        ... // a question may involve multiple function calls
    ],
}
```
- If the user's request is complex, you may return multiple function calls, but their order must be correct.
- If the question does not mention the given functions, the asker does not intend to use the function-call feature; in that case answer the question normally as an AI, put the answer in the `res` field, and do not mention function calls at all.
The provided functions are:
"""
        prompt = f"{funccall_prompt}\n```\n{func_definition}\n```\n"
        prompt += f"""
The user's question is:
```
{question}
```
"""
        # if is_task:
        #     # task_prompt = f"\nThe task list is {str(tasks)}\nYou are currently at task {str(taskindex)}. **Do not redo tasks that were already performed; do not generate ones already done.**"
        #     prompt += task_prompt
        # provider.forget()
        _c = 0
        while _c < 3:
            try:
                res = self.provider.text_chat(prompt, session_id)
                if res.find('```') != -1:
                    res = res[res.find('```json') + 7: res.rfind('```')]
                gu.log("REVGPT func_call json result",
                       bg=gu.BG_COLORS["green"], fg=gu.FG_COLORS["white"])
                print(res)
                res = json.loads(res)
                break
            except Exception as e:
                _c += 1
                if _c == 3:
                    raise e
                if "The message you submitted was too long" in str(e):
                    raise e
        invoke_func_res = ""
        if "func_call" in res and len(res["func_call"]) > 0:
            task_list = res["func_call"]
            invoke_func_res_list = []
            # Use a distinct loop variable so `res` is not shadowed.
            for call in task_list:
                # A function call was requested.
                func_name = call["name"]
                args = call["args"]
                # Look up the registered function and invoke it.
                func_target = None
                for func in self.func_list:
                    if func["name"] == func_name:
                        func_target = func["func_obj"]
                        break
                if func_target is None:
                    raise FuncNotFoundError(
                        f"Requested function {func_name} not found.")
                t_res = str(func_target(**args))
                invoke_func_res += f"{func_name} call result:\n```\n{t_res}\n```\n"
                invoke_func_res_list.append(invoke_func_res)
                gu.log(f"[FUNC| {func_name} invoked]",
                       bg=gu.BG_COLORS["green"], fg=gu.FG_COLORS["white"])
            if is_summary:
                # Generate the final answer from the function results.
                after_prompt = """
Given the following content: """ + invoke_func_res + """
Acting as an AI assistant, use the returned content to give the user a detailed and comprehensive answer.
The user's question is:
```""" + question + """```
- In the `res` field, do not output the function's return value, do not analyze the returned fields, and do not repeat the user's question; instead, understand the returned result and answer the question as an AI assistant. Output only the answer itself, without prefixing it with an identity.
- Your reply must be JSON only and must strictly follow the template below (without the comments):
```json
{
    "res": string, // the answer
    "func_call_again": bool // set to true if the function's result contains errors or problems, otherwise false
}
```
- If func_call_again is true, set res to an empty value; otherwise fill in the answer."""
                _c = 0
                while _c < 5:
                    try:
                        res = self.provider.text_chat(after_prompt, session_id)
                        # Extract the content between the ``` fences.
                        gu.log(
                            "DEBUG BEGIN", bg=gu.BG_COLORS["yellow"], fg=gu.FG_COLORS["white"])
                        print(res)
                        gu.log(
                            "DEBUG END", bg=gu.BG_COLORS["yellow"], fg=gu.FG_COLORS["white"])
                        if res.find('```') != -1:
                            res = res[res.find('```json') + 7: res.rfind('```')]
                        gu.log("REVGPT after_func_call json result",
                               bg=gu.BG_COLORS["green"], fg=gu.FG_COLORS["white"])
                        after_prompt_res = json.loads(res)
                        break
                    except Exception as e:
                        _c += 1
                        if _c == 5:
                            raise e
                        if "The message you submitted was too long" in str(e):
                            # The content is too long; keep only the first half and retry.
                            time.sleep(3)
                            invoke_func_res = invoke_func_res[:int(len(invoke_func_res) / 2)]
                            after_prompt = """
The function returned the following content: """ + invoke_func_res + """
Acting as an AI assistant, use the returned content to give the user a detailed and comprehensive answer.
The user's question is:
```""" + question + """```
- In the `res` field, do not output the function's return value, do not analyze the returned fields, and do not repeat the user's question; instead, understand the returned result and answer the question as an AI assistant. Output only the answer itself, without prefixing it with an identity.
- Your reply must be JSON only and must strictly follow the template below (without the comments):
```json
{
    "res": string, // the answer
    "func_call_again": bool // set to true if the function's result contains errors or problems, otherwise false
}
```
- If func_call_again is true, set res to an empty value; otherwise fill in the answer."""
                        else:
                            raise e
                if "func_call_again" in after_prompt_res and after_prompt_res["func_call_again"]:
                    # The model judged the function results faulty; run the calls again.
                    gu.log("REVGPT func_call_again",
                           bg=gu.BG_COLORS["purple"], fg=gu.FG_COLORS["white"])
                    res = self.func_call(question, func_definition)
                    return res, True
                gu.log("REVGPT func callback:",
                       bg=gu.BG_COLORS["green"], fg=gu.FG_COLORS["white"])
                return after_prompt_res["res"], True
            else:
                return str(invoke_func_res_list), True
        else:
            return res["res"], False
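`get_func` wraps each stored definition in an OpenAI-style tools envelope. The transformation can be sketched stand-alone; the `get_weather` definition below is a made-up example, not part of AstrBot:

```python
import json

def build_tools(func_list):
    # Re-sketch of FuncCall.get_func: wrap each stored definition in the
    # OpenAI-style {"type": "function", ...} envelope.
    return [
        {
            "type": "function",
            "function": {
                "name": f["name"],
                "parameters": f["parameters"],
                "description": f["description"],
            },
        }
        for f in func_list
    ]

funcs = [{
    "name": "get_weather",  # hypothetical example function
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string", "description": "city name"}},
    },
    "description": "Query the weather for a city.",
}]

print(json.dumps(build_tools(funcs), indent=2, ensure_ascii=False))
```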

Some files were not shown because too many files have changed in this diff.