Compare commits

...

455 Commits

Author SHA1 Message Date
Soulter
5ac4748537 Merge pull request #143 from Soulter/dev_dashboard
[Feature] 可视化面板功能和一些常规优化
2023-12-28 13:03:31 +08:00
Soulter
2e5ec1d2dc Create docker-image.yml 2023-12-28 13:02:37 +08:00
Soulter
bac4c069d7 Update Dockerfile 2023-12-28 12:57:53 +08:00
Soulter
9d4a21a10b Update README.md 2023-12-27 00:09:15 +08:00
Soulter
dbeb41195d Update README.md 2023-12-26 23:11:27 +08:00
Soulter
71f4998458 fix: 修复 console 2023-12-26 09:29:25 +08:00
Soulter
40af5b7574 feat: 支持更多配置类型 2023-12-26 09:25:22 +08:00
Soulter
e7a1020f82 Merge branch 'master' into dev_dashboard 2023-12-26 09:24:28 +08:00
Soulter
018e49ed95 Update README.md 2023-12-25 20:36:54 +08:00
Soulter
582cfe9f7c Update README.md 2023-12-23 19:03:02 +08:00
Soulter
db07f740b3 Update README.md 2023-12-23 17:03:31 +08:00
Soulter
bacbd351d7 Update README.md 2023-12-23 16:49:15 +08:00
Soulter
7e2c61c661 Update README.md 2023-12-23 16:19:57 +08:00
Soulter
3df30fd4de Update README.md 2023-12-23 16:18:47 +08:00
Soulter
92789ffdc9 Merge remote-tracking branch 'refs/remotes/origin/master' 2023-12-23 14:09:20 +08:00
Soulter
09b746cdec feat: 插件、指令返回接口优化 2023-12-23 14:08:44 +08:00
Soulter
8ace7b59e3 Update requirements.txt 2023-12-23 00:21:35 +08:00
Soulter
1fc0248d8f feat: 插件安装卸载 2023-12-22 16:00:46 +08:00
Soulter
57bde33bfe perf: 优化插件代码结构。
fix: 修复卸载插件之后,线程无限自旋的问题。
2023-12-21 14:00:29 +08:00
Soulter
1b1e558a3b feat: dashboard 用户登录、重置密码 2023-12-20 19:13:38 +08:00
Soulter
c5c7e686d0 feat: ip 信息指令 2023-12-20 16:14:42 +08:00
Soulter
bd28f880f6 Merge remote-tracking branch 'refs/remotes/origin/master' 2023-12-19 18:44:44 +08:00
Soulter
fe2ab69773 feat: bing网页搜索失败之后使用搜狗 2023-12-19 18:44:26 +08:00
Soulter
75f9d383cb feat: 补充一些config 2023-12-19 18:36:33 +08:00
Soulter
5fefba4583 feat: 插件显示 2023-12-19 00:40:47 +08:00
Soulter
780d126437 Merge remote-tracking branch 'refs/remotes/origin/master' 2023-12-18 20:18:41 +08:00
Soulter
4057dd9f5b delete: test module 2023-12-18 20:18:27 +08:00
Soulter
b5f8df4bb6 feat: dashboard 首页实现功能 2023-12-17 14:21:34 +08:00
Soulter
5ace10d39f feat: 联网时间 2023-12-15 13:49:55 +08:00
Soulter
07ecdedf0d Update README.md 2023-12-14 20:05:58 +08:00
Soulter
c2ca365312 📦 NEW: 面板支持更多配置 2023-12-14 19:59:15 +08:00
Soulter
8b9ca08903 Update README.md 2023-12-14 17:18:08 +08:00
Soulter
16e6b588f6 Merge branch 'master' into dev_dashboard 2023-12-14 17:12:21 +08:00
Soulter
3a1d5d8904 📦 NEW: /update checkout 指令支持切换代码分支 2023-12-14 17:11:00 +08:00
Soulter
84d1293fd0 🐛 FIX: 移除一些不必要的报错抛出 2023-12-14 16:41:05 +08:00
Soulter
a12be7fa77 feat: 集成可视化面板到机器人内部 2023-12-14 16:39:47 +08:00
Soulter
6eee4f678f feat: dashboard 支持内存显示、配置更新 2023-12-13 22:47:17 +08:00
Soulter
0e53c95c06 feat: config 2023-12-13 18:35:50 +08:00
Soulter
3ba97ad0dc chore: dashboard update 2023-12-13 16:17:49 +08:00
Soulter
99ff8bc1f5 feat: dashboard partially 2023-12-12 20:23:39 +08:00
Soulter
63aa6ee9a5 feat: 支持 Docker 部署项目 2023-12-07 19:18:54 +08:00
Soulter
925a42e2c4 feat: 修复 nohup 等无标准输出流情况下启动失败的问题 2023-12-07 15:30:50 +08:00
Soulter
8dc91cfed4 delete: remove screenshots 2023-12-07 11:44:19 +08:00
Soulter
9c6bdeea9d feat: 画图指令支持 DallE3 2023-12-04 13:50:49 +08:00
Soulter
9bc8ac10fa chore: remove some unuseful log 2023-12-02 16:19:41 +08:00
Soulter
3df3879954 feat: 支持设置默认人格 2023-11-30 12:46:29 +08:00
Soulter
be1f8e7075 feat: 支持在命令行操作bot
fix: 修复 windows 下 ctrl+c 不能退出程序的问题
2023-11-30 12:06:37 +08:00
Soulter
d602041ad0 Update README.md 2023-11-25 23:10:54 +08:00
Soulter
23882bcb8e Update README.md 2023-11-25 23:09:15 +08:00
Soulter
311178189f fix: 修复未期望的QQ群BOT启动和文件BOM的问题 2023-11-25 20:09:21 +08:00
Soulter
5a57526aab fix: 修复配置文件BOM的一些问题 2023-11-25 19:56:11 +08:00
Soulter
450dd34f4d perf: dump 配置时关闭强制ascii 2023-11-25 11:59:39 +08:00
Soulter
89ed31a888 feat: 支持在cmd_config中设置llm_env_prompt来自定义环境提示词 2023-11-25 11:55:26 +08:00
Soulter
9fe031efe3 Merge remote-tracking branch 'refs/remotes/origin/master' 2023-11-25 11:51:47 +08:00
Soulter
baa57266b4 feat: 初步接入官方QQ群机器人API 2023-11-25 11:50:32 +08:00
Soulter
3e4818d0ee feat: 适配部分插件 2023-11-23 17:50:40 +08:00
Soulter
b36747c728 fix: 修复QQ频道发图文消息报错的情况 2023-11-23 11:55:22 +08:00
Soulter
fdbe993913 fix: 修复消息兼容的一些问题 2023-11-21 22:38:54 +08:00
Soulter
9c3c8ff2c4 feat: 支持频道主动消息回复
fix: 修复一些问题
2023-11-20 23:45:04 +08:00
Soulter
aaefdab0aa fix: 修复没有语言模型启动时输入指令报错的问题 2023-11-20 14:56:28 +08:00
Soulter
f18a311bc2 chore: tidy some files 2023-11-18 15:17:42 +08:00
Soulter
ad9705f9c4 🐛: 取消QQSDK的旧版标记
🐛: 修复switch指令的一些问题
2023-11-18 15:09:22 +08:00
Soulter
fb0b626813 feat: 平衡请求和回答的token数比例 2023-11-14 20:38:21 +08:00
Soulter
b48fbf10e1 perf: 优化网页搜索回答的格式 2023-11-14 20:34:23 +08:00
Soulter
4aa2eab8b6 fix: 修复计算token的一些问题 2023-11-14 11:40:30 +08:00
Soulter
3960a19bcb perf: 增加一些注释 2023-11-14 11:30:08 +08:00
Soulter
b3cec4781b fix: 修复 requirements 中的typo 2023-11-14 11:17:35 +08:00
Soulter
8f0b0bf0d0 perf: 优化插件run函数参数传递规范 2023-11-14 11:15:19 +08:00
Soulter
847672d7f1 Update README.md 2023-11-14 10:32:59 +08:00
Soulter
c7f2962654 Update README.md 2023-11-14 10:32:45 +08:00
Soulter
752201cb46 update: requirements.txt 2023-11-14 09:33:30 +08:00
Soulter
deebf61b5f feat: 大幅优化网页搜索的信息提取准确性
perf: 使用 tictoken 预先计算 token
2023-11-14 09:33:18 +08:00
Soulter
d5e5b06e86 perf: 让回复末尾添加1-2个emoji 2023-11-13 23:05:19 +08:00
Soulter
cb5975c102 feat: 1. 适配新版openai sdk
2. 适配官方 function calling
2023-11-13 21:54:23 +08:00
Soulter
5b1aee1b4d feat: web search support prefix keyword call 2023-11-09 16:05:42 +00:00
Soulter
510c8b4236 feat: support gpt-4-vision-preview 2023-11-09 20:53:02 +08:00
Soulter
89fc7b0553 perf: 使用异步重写部分代码 2023-10-12 11:16:49 +08:00
Soulter
123c21fcb3 perf: 重载插件支持更新依赖库 2023-10-05 22:34:26 +08:00
Soulter
75d62d66f9 fix: 修复折叠发送时可能发送失败的问题 2023-10-05 21:38:35 +08:00
Soulter
23a8e989a5 perf: 优化插件加载机制 2023-10-05 13:38:10 +08:00
Soulter
9577e637f1 perf: 优化代码结构、稳定性和插件加载机制 2023-10-05 13:21:39 +08:00
Soulter
e51ef2201b Merge remote-tracking branch 'refs/remotes/origin/master' 2023-10-05 10:49:49 +08:00
Soulter
f4ae503abf perf: 优化报错提示和代码结构 2023-10-05 10:48:35 +08:00
Soulter
3424b658f3 bugfixes 2023-10-02 10:35:51 +08:00
Soulter
3198f73f3d perf: 清除警告;适配新版启动器 2023-10-02 10:17:10 +08:00
Soulter
aa3262a8ab chore: fix some typos 2023-10-02 10:10:04 +08:00
Soulter
6acd7be547 perf: 优化一些库的导入机制 2023-10-01 17:46:51 +08:00
Soulter
fb7669ddad perf: 依赖库安装优化 2023-10-01 16:20:51 +08:00
Soulter
f2c4ef126e perf: 优化openai模型消息截断机制 2023-09-30 15:11:06 +08:00
Soulter
33dcc4c152 perf: openai模型超限时截断消息(0.75x) 2023-09-30 15:06:57 +08:00
Soulter
b9e331ebd6 perf: 网页搜索改用google search,是改善效果 2023-09-30 14:59:25 +08:00
Soulter
7832ec386e perf: 优化web search 2023-09-30 14:06:50 +08:00
Soulter
b9828428cc perf: web search优化 2023-09-30 13:37:10 +08:00
Soulter
da11034aec feat: 支持在cmd_config中修改配置文件 2023-09-29 10:06:41 +08:00
Soulter
578c9e0695 feat: 支持戳一戳消息 2023-09-28 20:51:50 +08:00
Soulter
cc675a9b4f perf: 对插件开放更多接口 2023-09-28 20:12:39 +08:00
Soulter
08e7d4d0c6 fix: 修复一部分超限的报错
perf: web search稳定性和精确度优化
2023-09-27 22:06:08 +08:00
Soulter
553f1b8d83 fix: 修复官方模型下web search报错的问题 2023-09-27 21:14:03 +08:00
Soulter
73e7e2088d perf: 完善报错堆栈显示 2023-09-27 21:02:50 +08:00
Soulter
e40c9de610 perf: 优化聊天会话管理 2023-09-27 16:42:39 +08:00
Soulter
2f4e0bb4f2 fix: 修复人格一段时间后消失的问题 2023-09-25 15:55:51 +08:00
Soulter
191976e22e fix: 修复一些权限上的问题 2023-09-25 13:55:00 +08:00
Soulter
52656b8586 perf: 支持多管理员配置 2023-09-25 13:51:12 +08:00
Soulter
998e29ded6 fix: myid显示异常 2023-09-25 13:43:33 +08:00
Soulter
5bbe3f12d6 feat: OpenAI官方模型支持切换账号 2023-09-25 13:25:38 +08:00
Soulter
56aea81ed7 Merge remote-tracking branch 'refs/remotes/origin/master' 2023-09-25 12:04:04 +08:00
Soulter
7b8a311dde fix: 修复gocq启动下QQ频道无法通过@回复消息的问题
feat:  支持重置会话时保留人格
perf: 清除部分无用日志输出
2023-09-25 12:03:17 +08:00
Soulter
b75d20a3e8 Update README.md 2023-09-20 10:46:09 +08:00
Soulter
67faa587b6 fix: 修复初次调用/keyword指令时报错文件不存在的bug 2023-09-20 10:31:31 +08:00
Soulter
15fde686d4 perf: 精简日志输出和冗余的日志文件 2023-09-14 14:04:47 +08:00
Soulter
741284f6e8 perf: 去除启动时检查更新产生的大量的日志 2023-09-14 13:50:00 +08:00
Soulter
8352fc269b 1. 修复qq频道发不了图片的问题 2023-09-14 08:39:05 +08:00
Soulter
5852f36557 1. gocq支持选择不回复群、私聊、频道消息。
(在cmd_config.json文件设置gocq_react_xxx等项);
2. update指令升级成功后返回新版本信息
2023-09-10 09:03:26 +08:00
Soulter
cc1c723c12 fix: 修复OpenAI官方模型无法启用的问题 2023-09-09 09:45:34 +08:00
Soulter
adf5cbfeba fix: 优化网页搜索的稳定性 2023-09-08 16:41:37 +08:00
Soulter
d6d0516c9a feat: gocq服务器地址支持在cmd_config自定义。 2023-09-08 14:19:07 +08:00
Soulter
8aab10aaf3 websearch bugfixes 2023-09-08 13:46:57 +08:00
Soulter
4fe5616ae1 Merge remote-tracking branch 'refs/remotes/origin/master' 2023-09-08 13:40:03 +08:00
Soulter
7e1c76a3f5 fix: 修复openai官方模型一些指令报错的问题
feat: revChatGPT支持人格设置
2023-09-08 13:38:48 +08:00
Soulter
f74665ff71 Update README.md 2023-09-08 12:01:39 +08:00
Soulter
a96d64fe88 fix: 修复qq频道下无法发送图片的bug 2023-09-04 10:14:46 +08:00
Soulter
fd2aa0cba6 bugfixes 2023-09-02 19:59:14 +08:00
Soulter
a92ea3db02 fix: 修复只启动频道官方SDK下,不显示管理者QQ设置的问题 2023-09-02 19:39:38 +08:00
Soulter
d7a513b640 fix: 关键词指令 2023-09-02 18:30:11 +08:00
Soulter
8a017ff693 bugfixes 2023-09-02 11:11:54 +08:00
Soulter
7d08f57b32 bugfixes 2023-09-02 10:31:13 +08:00
Soulter
6f4ad7890b bugfixes 2023-09-02 10:05:06 +08:00
Soulter
37488118a6 feat: 1. keyword指令支持记录图片;
2. qq频道转gocq数据结构兼容层实现;
perf: 1. 优化代码结构;
2. log 支持环境变量指定log等级
2023-09-02 00:24:13 +08:00
Soulter
b2da0778ae Merge branch 'master' of https://github.com/Soulter/QQChannelChatGPT 2023-09-01 15:12:18 +08:00
Soulter
cc887a5037 perf: 优化代码结构 2023-09-01 15:11:58 +08:00
Soulter
ca86a02d30 Update requirements.txt 2023-08-31 21:27:26 +08:00
Soulter
d652dc19a6 Update README.md 2023-08-31 18:39:37 +08:00
Soulter
6a56b7bff5 Update README.md 2023-08-31 18:35:29 +08:00
Soulter
81e8997852 feat: 1. 支持llm网页搜索,实时消息。
2. 加入频道兼容层;支持频道发图
perf: 1. 稳定性优化
2. 精简部分代码结构
2023-08-31 18:34:20 +08:00
Soulter
372a204ba9 feat: QQ频道平台支持myid指令 2023-08-27 19:25:39 +08:00
Soulter
15ad5aae35 Update README.md 2023-08-20 17:44:39 +08:00
Soulter
fd2e9ef93f Update README.md 2023-08-20 14:48:40 +08:00
Soulter
5be3bf1f46 feat: 网页版ChatGPT模型支持Plus账户、网页搜索、插件 2023-08-20 14:26:13 +08:00
Soulter
4915c2d480 bugfixes 2023-08-20 14:04:50 +08:00
Soulter
bd56a19ac5 bugfixes 2023-08-20 14:03:44 +08:00
Soulter
da8fa2d905 bugfixes 2023-08-20 14:00:46 +08:00
Soulter
f56fd100d7 bugfixes 2023-08-20 14:00:25 +08:00
Soulter
b725a1a20c Merge branch 'master' of https://github.com/Soulter/QQChannelChatGPT 2023-08-20 13:56:53 +08:00
Soulter
ff1b5d02d2 perf: 优化初次启动后报错时的处理 2023-08-20 13:56:50 +08:00
Soulter
d4882a8240 Update README.md 2023-08-15 21:18:47 +08:00
Soulter
e37f84c1ae Update README.md 2023-08-15 21:18:08 +08:00
Soulter
a23bd0a63c Update README.md 2023-08-15 16:21:34 +08:00
Soulter
ae00e84974 Update README.md 2023-08-15 15:48:24 +08:00
Soulter
53b3250978 Update README.md 2023-08-15 15:42:54 +08:00
Soulter
7f15a59a4e Update README.md 2023-08-15 15:39:16 +08:00
Soulter
6a164c9961 Update README.md 2023-08-15 15:35:23 +08:00
Soulter
bd779a3df3 Merge branch 'master' of https://github.com/Soulter/QQChannelChatGPT 2023-08-15 13:43:16 +08:00
Soulter
9ebb340c00 perf: 优化更新插件的相关逻辑;优化日志输出 2023-08-15 13:42:12 +08:00
Soulter
e8edbaae2d Update README.md 2023-08-12 12:47:41 +08:00
Soulter
2aab1f4c96 Update requirements.txt 2023-08-11 23:43:38 +08:00
Soulter
90ea621c65 Update main.py 2023-08-11 23:36:38 +08:00
Soulter
34bdceb41b Update README.md 2023-08-11 02:38:44 +08:00
Soulter
6d2ded1c6c Update README.md 2023-08-11 02:37:03 +08:00
Soulter
9b926048ca Update README.md 2023-08-11 02:35:43 +08:00
Soulter
9cf4f0f57d Merge branch 'master' of https://github.com/Soulter/QQChannelChatGPT 2023-08-06 11:04:30 +08:00
Soulter
9123b9d773 fix: 修复windows启动下会弹出markdown测试窗口的问题 2023-08-06 11:04:25 +08:00
Soulter
f9258ae1e1 fix: 修复生成图片时报错的问题 2023-08-06 11:02:09 +08:00
Soulter
d8808de4a9 Update README.md 2023-08-03 22:30:54 +08:00
Soulter
afcb152d8d Update requirements.txt 2023-07-21 21:43:11 +08:00
Soulter
ff01174a1f 删除GUI界面下启动项目出现的二维码 2023-06-26 20:34:10 +08:00
Soulter
71f1625284 Update README.md 2023-06-18 13:30:08 +08:00
Soulter
19e3390083 Update README.md 2023-06-13 17:04:29 +08:00
Soulter
3015b90e12 bugfixes 2023-06-13 11:59:16 +08:00
Soulter
aa419f3ef9 perf: 去帮助中心部分指令显示 2023-06-13 11:54:44 +08:00
Soulter
954236c284 fix: 修复markdown宽度计算异常的问题 2023-06-13 11:54:20 +08:00
Soulter
72d6b3886b perf: markdown render 增大 fontsize 2023-06-13 11:44:34 +08:00
Soulter
a95046ecaf Merge branch 'master' of https://github.com/Soulter/QQChannelChatGPT 2023-06-13 10:09:33 +08:00
Soulter
ccdb11575b remove chore 2023-06-13 10:09:28 +08:00
Soulter
7e68b2f2be Update requirements.txt 2023-06-13 10:05:57 +08:00
Soulter
39efab1081 perf: enhanced markdown image render regex 2023-06-12 18:41:04 +08:00
Soulter
cc6707c8ce Merge branch 'master' of https://github.com/Soulter/QQChannelChatGPT 2023-06-12 18:26:16 +08:00
Soulter
09080adf84 perf: markdown渲染器支持渲染图片 2023-06-12 18:26:11 +08:00
Soulter
4cc72030c0 Update README.md 2023-06-12 08:32:05 +08:00
Soulter
a395902184 Update README.md 2023-06-12 08:30:58 +08:00
Soulter
5156f0584a Update README.md 2023-06-12 08:30:03 +08:00
Soulter
be171fe0d7 Update README.md 2023-06-12 08:14:55 +08:00
Soulter
ad4bf5e654 perf: update command add "update latest r" 2023-06-11 09:53:49 +08:00
Soulter
da7429ad62 perf: add markdown minheight 2023-06-11 09:51:53 +08:00
Soulter
b5f20ee282 chore: change fonts 2023-06-11 09:16:16 +08:00
Soulter
a9023d6f3a perf: 支持markdown渲染 2023-06-10 13:10:32 +00:00
Soulter
628b661a18 fix markdown 2023-06-10 13:05:22 +00:00
Soulter
638fe466f8 perf markdown 2023-06-10 12:51:34 +00:00
Soulter
a90adcf15c chore: change some markdown parameters 2023-06-10 12:24:37 +00:00
Soulter
7896066db6 perf: markdown perf 2023-06-10 12:22:32 +00:00
Soulter
b1314bcc31 perf: \t -> 4 blanks 2023-06-10 12:13:50 +00:00
Soulter
b1ecc929f2 perf: markdown render perf 2023-06-10 12:10:20 +00:00
Soulter
3aad42a886 perf: markdown render perf 2023-06-10 12:08:32 +00:00
Soulter
b6e87d3d31 perf: markdown render perf 2023-06-10 10:54:26 +00:00
Soulter
461eb4b9c7 perf: markdown render perf 2023-06-10 10:48:36 +00:00
Soulter
a89e92d5cc perf: markdown render perf 2023-06-10 10:33:14 +00:00
Soulter
6e69e88e91 perf: markdown render perf 2023-06-10 10:30:17 +00:00
Soulter
ae732c1dac perf: markdown render perf 2023-06-10 10:03:03 +00:00
Soulter
8e4a72c97b perf: markdown render perf 2023-06-10 10:02:23 +00:00
Soulter
bf0d82fe67 perf: markdown render perf 2023-06-10 10:01:23 +00:00
Soulter
987383f957 perf: markdown render perf 2023-06-10 10:00:22 +00:00
Soulter
c2cacf3281 perf: markdown render perf 2023-06-10 09:56:58 +00:00
Soulter
72878477dc perf: qq pic mode support markdown 2023-06-10 09:47:02 +00:00
Soulter
ad0d14420a feat: markdown render support 2023-06-10 09:39:37 +00:00
Soulter
5a7c60c81e fix: markdown render support 2023-06-10 09:38:19 +00:00
Soulter
6011840d1f feat: markdown render support 2023-06-10 09:32:49 +00:00
Soulter
9a2dffe299 feat: markdown render support 2023-06-10 09:27:02 +00:00
Soulter
e6770d2b12 Update README.md 2023-06-09 00:19:27 +08:00
Soulter
255db6ee57 Update README.md 2023-06-08 23:58:44 +08:00
Soulter
aa9ff99557 perf: better help 2023-06-06 12:31:14 +00:00
Soulter
5f024e9f30 fix: bugfixes 2023-06-06 12:28:55 +00:00
Soulter
cbdc7b7ce4 perf: better help 2023-06-06 12:23:48 +00:00
Soulter
5f636ca061 perf: improve text2img 2023-06-06 11:57:52 +00:00
Soulter
9fa3651170 perf: change word2img factors 2023-06-06 11:45:32 +00:00
Soulter
bba66788c3 fix: bugfixes 2023-06-06 11:41:26 +00:00
Soulter
200f3cce00 fix: bugfixes 2023-06-06 11:34:52 +00:00
Soulter
938490b739 fix: bugfixes 2023-06-06 11:31:08 +00:00
Soulter
e77e7b050a feat: QQ message plain texts to pic support #108 2023-06-06 11:21:55 +00:00
Soulter
bd2dbe5b63 feat:转发消息支持非文本类型 2023-06-03 14:21:47 +08:00
Soulter
c684d9cb4a fix: 修复某些插件调用send可能发生的错误 2023-06-03 10:49:12 +08:00
Soulter
7a39a9d45e feat: nick指令仅管理者能用 2023-06-01 22:09:51 +08:00
Soulter
2a3bb068db feat: bing支持自定义代理地址 2023-05-31 21:17:47 +08:00
Soulter
1aa4384ca3 perf: 优化日志输出长度限制 2023-05-31 20:31:11 +08:00
Soulter
3b26b7b26c feat: 将CmdConfig的一些方法改为静态方法 2023-05-31 10:25:39 +08:00
Soulter
3b097d662b perf: 增加支持查看新版配置文件的管理员指令newconfig 2023-05-31 10:18:08 +08:00
Soulter
c3acb3e77f feat: 支持修改入群欢迎 2023-05-31 10:07:15 +08:00
Soulter
55d58d30a8 fix: 修复手滑造成的启动报错 2023-05-29 16:40:02 +08:00
Soulter
020a8ace9f feat: 支持自定义qq回复折叠阈值
perf: 优化新版配置文件加载流程
2023-05-29 16:37:11 +08:00
Soulter
15f56ffc01 feat: 长文本支持折叠发送 #104 2023-05-29 01:10:37 +08:00
Soulter
3724659b32 perf: improve stater 2023-05-24 18:24:18 +08:00
Soulter
df77152581 chore: 更新说明 2023-05-23 23:11:24 +08:00
Soulter
339ea5f12a feat: 支持更多本地预设指令的图片化 2023-05-23 11:01:56 +08:00
Soulter
36f96ccc97 feat: 文字转图片的图片过期处理逻辑 2023-05-23 10:58:07 +08:00
Soulter
190e0a4971 feat: 支持文字转图片 2023-05-23 10:41:12 +08:00
Soulter
72638fac68 fix: 修复QQ频道@不回的问题 2023-05-23 07:58:03 +08:00
Soulter
807d19e381 fix: 修复gocq群聊时@无反应的问题 2023-05-22 20:54:31 +08:00
Soulter
10870172b4 fix: 修复私聊不回的问题 2023-05-22 20:17:53 +08:00
Soulter
1f7d3eccf9 fix: blank nick 2023-05-22 19:42:37 +08:00
Soulter
5fc58123bb fix: 修复频率限制消息识别的问题 2023-05-22 19:31:24 +08:00
Soulter
c84c9f4aaa fix: 修复gocq_loop 2023-05-22 18:47:33 +08:00
Soulter
cabe66fc0a perf: 优化gocq平台消息处理逻辑 2023-05-22 18:46:01 +08:00
Soulter
9f1315b06d perf: 优化gocq平台消息处理逻辑 2023-05-22 18:42:23 +08:00
Soulter
6f27f59730 fix: 修复GOCQ频道at报错的问题 2023-05-22 18:25:58 +08:00
Soulter
17815e7fe3 fix: 优化切换到未启动的模型报错的问题 2023-05-22 18:23:16 +08:00
Soulter
596ae80fea perf: 优化模型识别提示 2023-05-22 18:22:34 +08:00
Soulter
be2dc6ba70 feat: 指令操作不再需要在消息前加前缀
perf: 改善性能
2023-05-22 18:10:22 +08:00
Soulter
e5aa8c8270 fix: 修复群内欢迎 2023-05-21 11:12:50 +08:00
Soulter
7c5ac41c55 chore: 删除不必要的log日志 2023-05-21 11:02:00 +08:00
Soulter
c6cf1153c1 fix: 修复Windows下删除插件报错拒绝访问的问题;
修复权限组异常的问题
2023-05-21 11:00:59 +08:00
Soulter
a68338b651 perf: 优化bing报错提示 2023-05-21 10:23:50 +08:00
Soulter
bab46e912e fix: 修复默认昵称失效的问题;
修复启动时跳过管理者qq设置的问题
2023-05-21 10:18:51 +08:00
Soulter
4b158a1c89 feat: GOCQ适配QQ频道 2023-05-20 15:30:07 +08:00
Soulter
6894900e46 fix: 修复画画指令得到的图片风格像油画的问题 2023-05-20 14:27:02 +08:00
Soulter
2e11d6e007 perf: log perf 2023-05-18 22:21:29 +08:00
Soulter
348381be15 Merge branch 'master' of https://github.com/Soulter/QQChannelChatGPT 2023-05-18 22:15:07 +08:00
Soulter
9024c28e70 perf: fix some logs 2023-05-18 22:15:01 +08:00
Soulter
ae1702901b Update README.md 2023-05-18 08:34:41 +08:00
Soulter
c1c0df85e6 Update README.md 2023-05-17 20:36:54 +08:00
Soulter
f3c6d9c02b fix: draw command 2023-05-16 15:06:39 +08:00
Soulter
811a885411 fix: draw command 2023-05-16 15:04:55 +08:00
Soulter
b4ec28b71c Merge branch 'master' of https://github.com/Soulter/QQChannelChatGPT 2023-05-16 11:57:04 +08:00
Soulter
cdf4a5321b perf: 1.逆向ChatGPT库支持消息等待,不会回复忙碌。
2. 优化模型加载流程
2023-05-16 11:56:59 +08:00
Soulter
d83f155f80 Update README.md 2023-05-15 20:54:19 +08:00
Soulter
4c402ed5bd perf: 优化插件鉴别 2023-05-15 20:43:42 +08:00
Soulter
ec5aff8d0b fix: update helloworld default plugin 2023-05-15 20:14:14 +08:00
Soulter
eae0d6c422 fix: 修复一些奇怪的地方 2023-05-15 20:09:01 +08:00
Soulter
9c284b84b1 perf: 升级插件协议簇 2023-05-15 20:03:17 +08:00
Soulter
9f36e5ae05 perf: 在连接到go-cqhttp之前添加连接检测 2023-05-15 18:33:07 +08:00
Soulter
7caa380e54 perf: 优化控制台输出的长度限制 2023-05-14 21:58:45 +08:00
Soulter
41d81bb60e perf: 简化控制台字数 2023-05-14 20:54:56 +08:00
Soulter
454a74f4e1 perf: 颜色日志-优化控制台显示 2023-05-14 20:51:39 +08:00
Soulter
c5bdad02e5 fix: 修复ChatGPT逆向库回答报错的问题 2023-05-14 20:39:15 +08:00
Soulter
f46de3d518 perf: 颜色日志-美化控制台显示 2023-05-14 20:38:28 +08:00
Soulter
a3e21bea1a perf: 删除不必要的控制台信息显示 2023-05-14 19:54:47 +08:00
Soulter
d7e4707d5d perf: 简化控制台输出信息 2023-05-14 19:43:12 +08:00
Soulter
a78ebf2fd7 feat: plugin dev mode 2023-05-14 18:20:28 +08:00
Soulter
bd11541678 perf: 优化插件更新缓存策略 2023-05-14 18:16:12 +08:00
Soulter
0d99aa81e6 Merge branch 'master' of https://github.com/Soulter/QQChannelChatGPT 2023-05-14 17:40:16 +08:00
Soulter
f104d40d0a perf: 优化插件加载规则、更新插件接口规范
fix: 修复发言频率限制报错的问题
2023-05-14 17:40:13 +08:00
Soulter
0d69f8ab8a Update README.md 2023-05-13 17:03:32 +08:00
Soulter
66d1fc08b6 perf: 去除不必要的import 2023-05-13 14:25:16 +08:00
Soulter
e32fc27728 perf: 优化插件的删除逻辑 2023-05-13 14:23:05 +08:00
Soulter
eec890cd02 perf: 优化插件指令的卸载插件逻辑 2023-05-13 14:20:38 +08:00
Soulter
d30881e59b perf: 补充插件指令的帮助信息 2023-05-13 14:10:00 +08:00
Soulter
9afaf83368 perf: 优化插件指令的身份组鉴定 2023-05-13 14:07:40 +08:00
Soulter
33f9a9cfa0 feat: 支持显示插件列表和插件帮助 2023-05-13 14:04:15 +08:00
Soulter
bf72d5fa27 fix: 修复有依赖的插件拉取问题 2023-05-13 13:55:22 +08:00
Soulter
567c29bcd6 perf: 清除多余log 2023-05-13 13:45:04 +08:00
Soulter
dcdfe453fb perf: 优化插件类鉴定规则 2023-05-13 13:44:14 +08:00
Soulter
0d23c0900b perf: 优化插件卸载逻辑 2023-05-13 13:28:18 +08:00
Soulter
86eda7bdf8 perf: 优化插件缓存逻辑 2023-05-13 13:25:56 +08:00
Soulter
1e46525b0f perf: 优化pip更新逻辑 2023-05-13 13:02:24 +08:00
Soulter
8d41efea4d Update README.md 2023-05-13 11:43:03 +08:00
Soulter
f15d0eb0eb Update README.md 2023-05-13 11:10:56 +08:00
Soulter
1795362bcd perf: 优化插件调用逻辑 2023-05-13 11:03:16 +08:00
Soulter
2bf9c82617 perf: 更好的插件处理逻辑和更开放的插件功能 2023-05-13 10:54:57 +08:00
Soulter
33793a2053 chore: 更新版本号 2023-05-12 09:21:57 +08:00
Soulter
656fe14af4 perf: 优化身份组鉴定 2023-05-12 09:15:32 +08:00
Soulter
46197d49a4 perf: 调换语言模型启动顺序 2023-05-12 09:08:06 +08:00
Soulter
843ab56f50 perf: 优化身份组 2023-05-12 09:02:29 +08:00
Soulter
6b4b52f3c5 perf: 完善身份组功能 2023-05-11 22:56:38 +08:00
Soulter
392e5cd592 chore: 添加默认插件 2023-05-11 22:43:26 +08:00
Soulter
d273019830 chore: 删除一些没必要的文件 2023-05-11 22:39:55 +08:00
Soulter
fd59ec4b6c fix: 修复插件指令clone插件异常的问题 2023-05-11 22:12:39 +08:00
Soulter
bf33ccafca fix: 修复插件指令创建文件夹出错的问题 2023-05-11 22:06:38 +08:00
Soulter
425936872d fix: 修复插件指令结果显示异常的问题 2023-05-11 22:01:57 +08:00
Soulter
6627b2e1e5 fix: 修复插件指令报错的问题 2023-05-11 22:00:18 +08:00
Soulter
323c2cecf8 feat: 新增插件指令 2023-05-11 21:52:44 +08:00
Soulter
5b1dd3dce9 feat: 插件支持 2023-05-11 21:35:25 +08:00
Soulter
54af770dfb fix: 修复keyword指令的一些问题 2023-05-08 20:30:36 +08:00
Soulter
30a48fea6e feat: QQ群的免@唤醒支持多个前缀(nick指令) #92 2023-05-08 20:17:51 +08:00
Soulter
cfd5fb1452 perf: keyword指令支持删除关键词 2023-05-08 19:43:26 +08:00
Soulter
a78984376f perf: 优化与go-cqhttp的通信 2023-04-26 14:52:09 +08:00
Soulter
9887cae43c fix: replit web fix 2023-04-25 20:57:22 +08:00
Soulter
e63fe60f8d Merge branch 'master' of https://github.com/Soulter/QQChannelChatGPT 2023-04-25 20:48:08 +08:00
Soulter
b0ac2d676c feat: Replit平台支持 2023-04-25 20:48:04 +08:00
Soulter
5ef515165c Merge pull request #94 from RockChinQ/patch-1
chore: 更换使用nakuru-project-idk
2023-04-25 19:43:02 +08:00
Rock Chin
e21d43f920 chore: 更换使用nakuru-project-idk 2023-04-25 12:46:41 +08:00
Soulter
3a80ffad88 perf: 优化控制台信息显示 2023-04-25 10:42:03 +08:00
Soulter
47506d60cd perf: 优化pip检测 2023-04-25 10:29:16 +08:00
Soulter
b999b712b7 perf: 优化逆向库的错误管理 2023-04-25 10:21:12 +08:00
Soulter
6860ba3f05 Merge branch 'master' of https://github.com/Soulter/QQChannelChatGPT 2023-04-25 09:38:21 +08:00
Soulter
02594867c0 fix: 修复QQ平台昵称后的消息前导空格问题 2023-04-25 09:38:17 +08:00
Soulter
250435f3e7 Update requirements.txt 2023-04-25 09:29:35 +08:00
Soulter
3c593fb6f7 Update README.md 2023-04-24 19:34:11 +08:00
Soulter
807cad5c48 fix: 删除启动时对qq频道appid不应该的检查 2023-04-24 08:00:45 +00:00
Soulter
e92ecdd3f8 Update README.md 2023-04-23 17:16:31 +08:00
Soulter
1c91079d8f Merge branch 'master' of https://github.com/Soulter/QQChannelChatGPT 2023-04-23 09:05:32 +00:00
Soulter
376b2fef40 fix: 修复启动前检查依赖的问题 2023-04-23 09:05:30 +00:00
Soulter
300f3b6df8 Update README.md 2023-04-23 16:52:07 +08:00
Soulter
6e6f6d5cd4 Update README.md 2023-04-23 16:51:46 +08:00
Soulter
077e54d0f1 Merge branch 'master' of https://github.com/Soulter/QQChannelChatGPT 2023-04-23 08:35:02 +00:00
Soulter
18ffaa2b91 perf: 优化各种报错管理;
feat: 启动时检查依赖库
2023-04-23 08:31:22 +00:00
Soulter
a6555681a0 Update requirements.txt 2023-04-23 15:31:33 +08:00
Soulter
43ac0ef87c fix: remove judge_res 2023-04-22 08:09:22 +00:00
Soulter
754842be7c Update README.md 2023-04-22 14:35:22 +08:00
Soulter
5b3ee2dbe8 fix: 修复回复内容屏蔽词无效的问题 2023-04-22 06:07:33 +00:00
Soulter
ca5a1ddc0b perf: 优化bing模型达到单次会话上限后自动重置 2023-04-22 11:27:43 +08:00
Soulter
c9821132ad fix: QQ频道停止信息来源显示 2023-04-21 11:11:16 +08:00
Soulter
0641dca2a6 fix: 修复qqAt的一些问题 2023-04-21 01:04:57 +08:00
Soulter
fd983b9f5d fix: 修复了一些问题 2023-04-21 01:01:54 +08:00
Soulter
7e1e51c450 feat: QQ支持at发送方和画画指令支持 2023-04-21 01:00:31 +08:00
Soulter
d912b990e4 fix: 修复画画指令失效的问题 2023-04-21 00:45:59 +08:00
Soulter
8224aa87a5 fix: 修复bing模型不想继续会话自动重置的一些问题 2023-04-20 09:07:55 +08:00
Soulter
4cb5abc7b6 fix: 修复bing会话超时过期的问题 2023-04-20 08:59:58 +08:00
Soulter
743a800b0d perf: 删除一些不必要的log 2023-04-19 17:23:24 +08:00
Soulter
a5c43612bf feat: bing模型支持显示信息来源 #70 2023-04-19 17:22:59 +08:00
Soulter
e2bd612b8e perf: 优化语言模型载入流程 2023-04-19 17:01:13 +08:00
Soulter
3ddb65e399 fix: 修复bing获取消息数量报错的bug 2023-04-19 16:52:16 +08:00
Soulter
56775580fc feat: bing模型添加单次会话消息条数显示 2023-04-19 16:34:38 +08:00
Soulter
8f7703c158 perf: 强化超出会话限制后自动重置;其他优化 2023-04-19 16:25:07 +08:00
Soulter
7aba9ff3ff fix: 细化依赖库版本 2023-04-18 21:35:09 +08:00
Soulter
aea1271a94 perf: 取消程序启动多余的依赖库检查 2023-04-18 21:30:51 +08:00
Soulter
b575f195c9 Merge branch 'master' of https://github.com/Soulter/QQChannelChatGPT 2023-04-18 10:54:21 +08:00
Soulter
1eedf7b332 fix: 取消切换模型的权限限制 2023-04-18 10:54:16 +08:00
Soulter
d327a1041b Update README.md 2023-04-17 14:40:34 +08:00
Soulter
10a3ba7dd4 Update README.md 2023-04-15 21:44:11 +08:00
Soulter
deaa4ea910 Create CODE_OF_CONDUCT.md 2023-04-15 15:33:00 +08:00
Soulter
fbfceb3137 fix: 将除去昵称和@后,/开头的消息都视为指令 2023-04-14 22:43:15 +08:00
Soulter
e7b9d7cd54 feat: QQ平台支持自定义昵称指令。使用格式: nick 新昵称。默认是ai 2023-04-14 21:54:56 +08:00
Soulter
34aba58351 feat: 更新help说明 2023-04-14 21:10:39 +08:00
Soulter
e1639be6c3 Merge branch 'master' of https://github.com/Soulter/QQChannelChatGPT 2023-04-14 21:05:42 +08:00
Soulter
80975c5715 perf: 重做help指令模块 2023-04-14 21:05:39 +08:00
Soulter
c12a4f7353 Merge branch 'master' of https://github.com/Soulter/QQChannelChatGPT 2023-04-14 20:49:12 +08:00
Soulter
defab688e5 Merge branch 'master' of https://github.com/Soulter/QQChannelChatGPT 2023-04-14 20:32:51 +08:00
Soulter
2244386d33 feat: 针对QQ平台增加私聊模式 #86 2023-04-14 20:31:52 +08:00
Soulter
39244fa27f Update README.md 2023-04-11 19:40:14 +08:00
Soulter
20c19905ac fix: 修复update文本显示的问题 2023-04-11 17:54:24 +08:00
Soulter
8086d645f9 perf: 优化update指令:更新成功后不执行重启命令,需要使用update r重启 2023-04-11 17:50:09 +08:00
Soulter
3a3289bf04 perf: QQ平台@优化 2023-04-11 17:39:29 +08:00
Soulter
1711ff3bb5 fix: bugfixes 2023-04-11 10:53:48 +08:00
Soulter
b945913f88 fix: 修复了一些其他问题 2023-04-11 10:50:41 +08:00
Soulter
d31533ed82 fix: 修复QQ平台@时使用不了指令的问题 2023-04-11 10:49:10 +08:00
Soulter
0fb2ec2c76 fix: 修复QQ平台@的一些问题 2023-04-11 10:44:01 +08:00
Soulter
89847cbc83 fix: 修复@机器人的已知问题 2023-04-11 10:36:05 +08:00
Soulter
9d12bb23fd fix: 修复@机器人的已知问题 2023-04-11 10:33:46 +08:00
Soulter
79af4ce381 fix: 修复一些已知问题 2023-04-11 10:28:29 +08:00
Soulter
79f293e248 fix: 修复QQ平台@机器人不回复的问题 2023-04-11 10:26:52 +08:00
Soulter
e75a0fec01 Merge branch 'master' of https://github.com/Soulter/QQChannelChatGPT 2023-04-11 10:23:29 +08:00
Soulter
a935b085d4 perf: QQ平台支持@机器人时回复 2023-04-11 10:23:25 +08:00
Soulter
4ef0a14420 Update README.md 2023-04-11 09:40:38 +08:00
Soulter
8273154904 Update README.md 2023-04-11 09:40:09 +08:00
Soulter
71d6ef3b52 Update README.md 2023-04-11 09:37:14 +08:00
Soulter
119b3a090a Update README.md 2023-04-11 09:34:45 +08:00
Soulter
496df3347b Update README.md 2023-04-11 09:33:58 +08:00
Soulter
2b70eef35b fix: 修复启动时的依赖库更新流程;优化update latest指令 2023-04-10 22:40:26 +08:00
Soulter
c4071eedf8 perf: 优化update指令显示 2023-04-10 22:37:08 +08:00
Soulter
b6cc866113 Merge branch 'master' of https://github.com/Soulter/QQChannelChatGPT 2023-04-10 22:32:31 +08:00
Soulter
6aabcdeac7 perf: bing模型不再显示正忙,会自动逐个回复。 2023-04-10 22:32:24 +08:00
Soulter
72bccee9e2 Update README.md 2023-04-10 21:54:42 +08:00
Soulter
b54f934fcd Update README.md 2023-04-10 01:07:04 +08:00
Soulter
43dc0f96ff Update README.md 2023-04-10 00:46:16 +08:00
Soulter
e02a82fa72 Update README.md 2023-04-10 00:45:53 +08:00
Soulter
9f91b0c92b feat: 1.接入QQ,可以同时在QQ和频道上使用 (beta)
2. 支持临时使用其他语言模型回复(如 /bing hello) #79
perf: 😊1. 优化代码结构,降低耦合度
2. 启动前检查依赖库安装情况
fix: 🤔修复bing模型死锁(正忙)的问题
2023-04-10 00:43:30 +08:00
Soulter
d14d6364a3 fix: 修复切换模型造成的数组超限问题 2023-04-09 00:28:03 +08:00
Soulter
15c8f0b6f7 feat: update和切换模型指令以及keyword指令现在仅可管理员使用;
fix: 修复keyword指令在使用官方模型的时候会被识别为“赞助key”的指令的问题 #80
2023-04-08 23:58:46 +08:00
Soulter
9bca158174 feat: 新增自定义指令/keyword 2023-04-08 20:21:49 +08:00
Soulter
45bb30692d fix: 修复bing自动重置会话的一些问题 2023-04-08 19:21:45 +08:00
Soulter
5bf73caba7 perf: 更新后自动更新第三方库 2023-04-08 11:05:12 +08:00
Soulter
e5389f620a Merge branch 'master' of https://github.com/Soulter/QQChannelChatGPT 2023-04-06 23:42:20 +08:00
Soulter
61fd52ff61 fix: 修复无法重置后无法回信的问题 2023-04-06 23:42:14 +08:00
Soulter
73c46bd812 Update README.md 2023-04-06 22:52:59 +08:00
Soulter
d8173122e0 fix: 修复bing text_chat报too many values to unpack的问题 2023-04-06 22:39:37 +08:00
Soulter
1af6e77dd1 perf: bing不想继续话题后自动重置并重试 2023-04-06 22:28:31 +08:00
Soulter
ce476ca163 fix: 修复update指令commit数量显示的问题 2023-04-05 20:32:15 +08:00
Soulter
0a1df90a83 fix: 修复重启程序后设置的语言模型选择偏好重置的问题 2023-04-05 20:26:35 +08:00
Soulter
762f5ea30f fix: 修复官方语言模型切换key时可能发生的死循环问题;修复切换语言模型时前缀不更新的问题 2023-04-05 19:10:31 +08:00
Soulter
06e7753797 Merge pull request #75 from Soulter/55-multi-models-switch
fix: 修复官方api模型画画指令和更新指令接反的问题
2023-04-05 12:11:46 +08:00
Soulter
c5e1f8d3e9 fix: 修复官方api模型画画指令和更新指令接反的问题 2023-04-05 12:11:21 +08:00
Soulter
221433725b Update README.md 2023-04-05 12:08:05 +08:00
Soulter
28864cd066 Update README.md 2023-04-05 10:56:25 +08:00
Soulter
854f70dc8b Merge pull request #73 from Soulter/55-multi-models-switch
55 multi models switch
2023-04-05 10:51:50 +08:00
Soulter
435b988223 feat: 支持切换语言模型 #55 2023-04-05 10:50:27 +08:00
Soulter
ecc119b296 Merge pull request #72 from Soulter/master
fix: launcher add hints
2023-04-05 09:43:08 +08:00
Soulter
209f3aa136 fix: launcher add hints 2023-04-05 09:41:39 +08:00
Soulter
8935859934 perf: main.py install requierment.txt 2023-04-04 12:26:07 +08:00
Soulter
6c77ec3534 fix: remove main.py request module 2023-04-04 12:24:41 +08:00
Soulter
291d3ebae8 perf: 支持使用/开头的指令 2023-04-03 21:44:46 +08:00
Soulter
5b97fd2e6f fix: 增加baidu_aip_judge文件 2023-04-03 21:33:54 +08:00
Soulter
4de8c5ed7d fix: 修复某些语言模型下update指令无法发送的问题 2023-04-03 21:13:42 +08:00
Soulter
09333d1604 delete: delete some files 2023-04-03 20:05:44 +08:00
Soulter
60240ca9a1 update redeme.md 2023-04-03 18:46:28 +08:00
Soulter
3e45ec0a08 fix: 修复热更新的一些问题 2023-04-03 18:44:47 +08:00
Soulter
4aad04b31a fix: 修复安装器的一些问题 2023-04-03 18:35:20 +08:00
Soulter
99ff3f8d42 fix: 修复热更新的一些问题 2023-04-03 18:02:52 +08:00
Soulter
f9a7a723aa fix: 修复update的一些bug 2023-04-02 21:12:16 +08:00
Soulter
7bb4ad648a Update README.md 2023-04-02 21:07:19 +08:00
Soulter
7c3cb98cf8 fix: 修复update一些问题 2023-04-02 21:02:31 +08:00
Soulter
0cc6bc0f1d fix: 修复update一些问题 2023-04-02 21:02:06 +08:00
Soulter
4181d62b5c Merge pull request #65 from Soulter/61-enhancement-重构代码增强稳定性
61 enhancement 重构代码增强稳定性
2023-04-02 20:42:54 +08:00
Soulter
7a1c0b0821 add: launcher.py 2023-04-02 20:37:55 +08:00
Soulter
b74d32c2c8 fix: 修复command的bug 2023-04-02 20:31:29 +08:00
Soulter
e320bb5ab8 refactor: command: update 2023-04-02 20:23:31 +08:00
Soulter
076cfd3e97 Update README.md 2023-04-02 19:51:28 +08:00
Soulter
515a937c07 refactor: 重构部分代码 2023-04-02 17:29:51 +08:00
Soulter
2c5451120e refactor: 重构部分代码 2023-04-01 09:24:14 +08:00
Soulter
e6f6bee7ee refactor: 重构部分代码 #61 2023-04-01 01:02:16 +08:00
Soulter
1a137a8639 refactor: 重构部分代码-officialapi 2023-03-31 20:05:23 +08:00
Soulter
c8f6d090cc fix: 修复官方api回复不显示前缀的问题 2023-03-31 09:06:08 +08:00
Soulter
b7b7877dfc feat: 支持在配置文件自定义回复前缀 #57 2023-03-30 04:53:26 +00:00
Soulter
608bd0398e Merge pull request #60 from slippersheepig/patch-2
fix: add dependency for image function
2023-03-29 20:14:13 +08:00
Soulter
2541663b77 Merge pull request #54 from RockChinQ/master
[Chore] 添加.gitignore
2023-03-29 20:13:11 +08:00
sheepgreen
5d774f3d7b add dependency for image function 2023-03-29 20:05:44 +08:00
Soulter
6ea1366e73 Merge pull request #59 from O2022/patch-3
feat: 新增画图指令:画
2023-03-29 17:30:21 +08:00
Soulter
134a8e233a Merge pull request #58 from O2022/patch-2
feat: 实现OpenAI画图
2023-03-29 17:21:49 +08:00
O2022
6ccfc674a5 Update core.py
添加了画图功能,内容第一个字输入画字即可触发该功能
2023-03-29 16:26:25 +08:00
O2022
37e9373561 Update core.py
添加画图功能,需调用OpenAI api启用
2023-03-29 16:17:11 +08:00
Soulter
bfa8f137de Update requirements.txt 2023-03-29 00:31:37 +08:00
Rock Chin
66a85cddf5 chore: 添加.gitignore 2023-03-28 23:07:52 +08:00
Rock Chin
bf84e74490 chore: 清理不应提交的文件 2023-03-28 23:07:37 +08:00
Soulter
3d8f96ef8a Merge pull request #51 from Soulter/42-openai_api_domain_customization
fix: 删除测试数据
2023-03-27 13:26:13 +08:00
Soulter
84c57a47ad fix: 删除测试数据 2023-03-27 13:25:37 +08:00
Soulter
5e101bb3c0 Merge pull request #50 from Soulter/42-openai_api_domain_customization
perf: 信息过长分条发送 #40
2023-03-27 12:39:55 +08:00
Soulter
246fbd6337 perf: 信息过长分条发送 2023-03-27 12:39:07 +08:00
Soulter
9938e4392a Merge pull request #49 from RockChinQ/master
doc: 更新NewBing可用性说明
2023-03-27 09:13:04 +08:00
Rock Chin
10cda13213 doc: 更新NewBing可用性 2023-03-26 23:22:24 +08:00
87 changed files with 7014 additions and 1597 deletions

.github/workflows/docker-image.yml (vendored, new file, +25 lines)

@@ -0,0 +1,25 @@
name: Docker Image CI/CD

on:
  push:
    branches:
      - master
      - dev_dashboard
    paths-ignore:
      - '**/*.md'
  workflow_dispatch:

jobs:
  publish-latest-docker-image:
    runs-on: ubuntu-latest
    name: Build and publish docker image
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Build image
        run: |
          docker build -t ${{ secrets.DOCKER_HUB_USERNAME }}/astrbot:v1 .
      - name: Publish image
        run: |
          docker login -u ${{ secrets.DOCKER_HUB_USERNAME }} -p ${{ secrets.DOCKER_HUB_PASSWORD }}
          docker push ${{ secrets.DOCKER_HUB_USERNAME }}/astrbot:v1
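
Since the workflow above enables `workflow_dispatch`, it can also be kicked off by hand rather than waiting for a push to `master` or `dev_dashboard`. A minimal sketch using the GitHub CLI, assuming `gh` is installed and authenticated against this repository and that the workflow file name matches the one added in this commit:

```shell
# Queue the image build/push manually via workflow_dispatch
gh workflow run docker-image.yml --ref master

# Optionally follow the run that was just queued
gh run watch
```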

.gitignore (vendored, new file, +8 lines)

@@ -0,0 +1,8 @@
__pycache__
botpy.log
.vscode
data.db
configs/session
configs/config.yaml
**/.DS_Store
temp


@@ -1,3 +0,0 @@
{
"python.analysis.typeCheckingMode": "basic"
}

CODE_OF_CONDUCT.md (new file, +128 lines)

@@ -0,0 +1,128 @@
# Contributor Covenant Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity
and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for our
community include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
overall community
Examples of unacceptable behavior include:
* The use of sexualized language or imagery, and sexual attention or
advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email
address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
SoulterL@outlook.com.
All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series
of actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or
permanent ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within
the community.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.0, available at
https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
Community Impact Guidelines were inspired by [Mozilla's code of conduct
enforcement ladder](https://github.com/mozilla/diversity).
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see the FAQ at
https://www.contributor-covenant.org/faq. Translations are available at
https://www.contributor-covenant.org/translations.

Dockerfile (new file, +8 lines)

@@ -0,0 +1,8 @@
FROM python:3.10.13-bullseye
WORKDIR /AstrBot
COPY . /AstrBot/
RUN python -m pip install -r requirements.txt
CMD [ "python", "main.py" ]
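
For anyone wanting to try the image locally before (or instead of) the CI pipeline above, here is a quick sketch. The tag and the mounted `configs/` path are illustrative assumptions; the Dockerfile itself only fixes the working directory to `/AstrBot` and the entry command to `python main.py`:

```shell
# Build from the repository root
docker build -t astrbot:local .

# Run it; mounting configs/ keeps bot configuration outside the container (assumed layout)
docker run -it --rm \
  -v "$(pwd)/configs:/AstrBot/configs" \
  astrbot:local
```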

README.md (144 lines changed)

@@ -1,22 +1,70 @@
## ⭐体验
<p align="center">
使用手机QQ扫码加入QQ频道(频道名: GPT机器人 | 频道号: x42d56aki2)
<img src="https://github.com/Soulter/AstrBot/assets/37870767/b1686114-f3aa-4963-b07f-28bf83dc0a10" alt="QQChannelChatGPT" width="200" />
</p>
<div align="center">
<img src="https://user-images.githubusercontent.com/37870767/227197121-4f1e02a4-92fd-4497-8768-9d6977a291b7.jpg" width="200"></img>
# AstrBot
**Windows用户推荐Windows一键安装请前往Release下载最新版本**
*✨ 2024 - 希望成为一个跨平台、极易上手、稳定安全的机器人项目。✨*
详细部署教程链接https://soulter.top/posts/qpdg.html
[![GitHub release (latest by date)](https://img.shields.io/github/v/release/Soulter/AstrBot)](https://github.com/Soulter/AstrBot/releases/latest)
<img src="https://wakatime.com/badge/user/915e5316-99c6-4563-a483-ef186cf000c9/project/34412545-2e37-400f-bedc-42348713ac1f.svg" alt="wakatime">
<img src="https://img.shields.io/badge/python-3.9+-blue.svg" alt="python">
<a href="https://qm.qq.com/cgi-bin/qm/qr?k=EYGsuUTfe00_iOu9JTXS7_TEpMkXOvwv&jump_from=webapi&authKey=uUEMKCROfsseS+8IzqPjzV3y1tzy4AkykwTib2jNkOFdzezF9s9XknqnIaf3CDft">
<img alt="Static Badge" src="https://img.shields.io/badge/QQ群-322154837-purple">
</a>
<img alt="Static Badge" src="https://img.shields.io/badge/频道-x42d56aki2-purple">
有网络问题报错的请先看issue解决不了再加频道反馈
<a href="https://astrbot.soulter.top/center">项目主页(开发中)</a>
<a href="https://github.com/Soulter/QQChannelChatGPT/wiki">部署文档</a>
<a href="https://github.com/Soulter/QQChannelChatGPT/issues">问题提交</a>
<a href="https://astrbot.soulter.top/center/docs/%E5%BC%80%E5%8F%91/%E6%8F%92%E4%BB%B6%E5%BC%80%E5%8F%91">插件开发(最少只需 25 行,真不难!)</a>
## ⭐功能:
</div>
- 逆向ChatGPT库
- 官方ChatGPT AI
- 文心一言即将支持链接https://github.com/Soulter/ERNIEBot 欢迎Star
- NewBing即将支持
## 🤔您可能想了解的
- **如何部署?** [帮助文档](https://github.com/Soulter/QQChannelChatGPT/wiki) (部署不成功欢迎进群捞人解决<3)
- **go-cqhttp启动不成功报登录失败** [在这里搜索解决方法](https://github.com/Mrs4s/go-cqhttp/issues)
- **程序闪退/机器人启动不成功** [提交issue或加群反馈](https://github.com/Soulter/QQChannelChatGPT/issues)
- **如何开启ChatGPTBardClaude等语言模型** [查看帮助](https://github.com/Soulter/QQChannelChatGPT/wiki/%E8%A1%A5%E5%85%85%EF%BC%9A%E5%A6%82%E4%BD%95%E5%BC%80%E5%90%AFChatGPT%E3%80%81Bard%E3%80%81Claude%E7%AD%89%E8%AF%AD%E8%A8%80%E6%A8%A1%E5%9E%8B%EF%BC%9F)
## 🧩功能:
最近功能
1. 支持切换代码分支输入`/update checkout <分支名>`即可切换代码分支
2. 正在测试可视化面板输入`/update checkout dev_dashboard`后根据提示即可体验
🌍支持的AI语言模型一览
**文字模型/图片理解**
- OpenAI GPT-3原生支持
- OpenAI GPT-3.5原生支持
- OpenAI GPT-4原生支持
- Claude免费[LLMs插件](https://github.com/Soulter/llms)支持
- HuggingChat免费[LLMs插件](https://github.com/Soulter/llms)支持
**图片生成**
- NovelAI/Naifu (免费[AIDraw插件](https://github.com/Soulter/aidraw)支持)
🌍机器人支持的能力一览
- 可视化面板beta
- 同时部署机器人到 QQ QQ 频道
- 大模型对话
- 大模型网页搜索能力 **(目前仅支持OpenAI系模型最新版本下使用 web on 指令打开)**
- 插件在QQ或QQ频道聊天框内输入 `plugin` 了解详情
- 回复文字图片渲染以图片markdown格式回复**大幅度降低被风控概率**需手动在`cmd_config.json`内开启qq_pic_mode
- 人格设置
- 关键词回复
- 热更新更新本项目时**仅需**在QQ或QQ频道聊天框内输入`update latest r`
- Windows一键部署 https://github.com/Soulter/QQChatGPTLauncher/releases/latest
<!--
### 基本功能
<details>
<summary>✅ 回复符合上下文</summary>
@@ -71,11 +119,39 @@
- QQ频道机器人框架为QQ官方开源的框架稳定。
</details>
</details> -->
> 关于tokentoken就相当于是AI中的单词数但是不等于单词数`text-davinci-003`模型中最大可以支持`4097`个token。在发送信息时这个机器人会将用户的历史聊天记录打包发送给ChatGPT因此`token`也会相应的累加为了保证聊天的上下文的逻辑性就有了缓存token。
### 指令功能
需要先`@`机器人之后再输入指令
<!-- > 关于tokentoken就相当于是AI中的单词数但是不等于单词数`text-davinci-003`模型中最大可以支持`4097`个token。在发送信息时这个机器人会将用户的历史聊天记录打包发送给ChatGPT因此`token`也会相应的累加为了保证聊天的上下文的逻辑性就有了缓存token。 -->
### 🛠️ 插件支持
本项目支持接入插件。
> 使用`plugin i 插件GitHub链接`即可安装。
插件开发教程https://github.com/Soulter/QQChannelChatGPT/wiki/%E5%9B%9B%E3%80%81%E5%BC%80%E5%8F%91%E6%8F%92%E4%BB%B6
部分插件:
- `LLMS`: https://github.com/Soulter/llms | Claude, HuggingChat 大语言模型接入。
- `GoodPlugins`: https://github.com/Soulter/goodplugins | 随机动漫图片、搜番、喜报生成器等等
- `sysstat`: https://github.com/Soulter/sysstatqcbot | 查看系统状态
- `BiliMonitor`: https://github.com/Soulter/BiliMonitor | 订阅B站动态
- `liferestart`: https://github.com/Soulter/liferestart | 人生重开模拟器
<img width="900" alt="image" src="https://github.com/Soulter/AstrBot/assets/37870767/824d1ff3-7b85-481c-b795-8e62dedb9fd7">
<!--
### 指令
#### OpenAI官方API
在频道内需要先`@`机器人之后再输入指令在QQ中暂时需要在消息前加上`ai `,不需要@
- `/reset`重置prompt
- `/his`查看历史记录(每个用户都有独立的会话)
- `/his [页码数]`查看不同页码的历史记录。例如`/his 2`查看第2页
@@ -85,33 +161,25 @@
- `/help` 查看帮助
- `/key` 动态添加key
- `/set` 人格设置面板
- `/keyword nihao 你好` 设置关键词回复。nihao->你好
- `/bing` 切换为bing
- `/revgpt` 切换为ChatGPT逆向库
- `/画` 画画
## 📰使用方法:
#### 逆向ChatGPT库语言模型
- `/gpt` 切换为OpenAI官方API
- `/bing` 切换为bing
**详细部署教程链接**https://soulter.top/posts/qpdg.html
* 切换模型指令支持临时回复。如`/bing 你好`将会临时使用一次bing模型 -->
<!--
## 🙇‍感谢
### 安装第三方库
本项目使用了一下项目:
使用Python的pip工具安装
- `qq-botpy` QQ频道官方Python SDK
- `openai` (OpenAI Python SDK)
[ChatGPT by acheong08](https://github.com/acheong08/ChatGPT)
```shell
pip install -r requirements.txt
```
> ⚠注意由于qq-botpy库需要运行在`Python 3.8+`的版本上,因此本项目也需要在此之上运行
[EdgeGPT by acheong08](https://github.com/acheong08/EdgeGPT)
### 配置
[go-cqhttp by Mrs4s](https://github.com/Mrs4s/go-cqhttp)
- 获得 OpenAI的key [OpenAI](https://beta.openai.com/)
- 获得 QQ开放平台下QQ频道机器人的token和appid [QQ开放平台](https://q.qq.com/)一个QQ频道机器人很容易创建~
- 在configs/config.yaml下进行配置
### 启动
- 启动main.py
## DEMO
![1.jpg](screenshots/1.jpg)
![3.jpg](screenshots/3.jpg)
![2.jpg](screenshots/2.jpg)
[nakuru-project by Lxns-Network](https://github.com/Lxns-Network/nakuru-project) -->

addons/dashboard/dist/_redirects (vendored, new file, +1 line)

@@ -0,0 +1 @@
/* /index.html 200


@@ -0,0 +1 @@
.page-breadcrumb .v-toolbar{background:transparent}


@@ -0,0 +1 @@
import{x as i,o as l,c as _,w as s,a as e,f as a,J as m,V as r,b as t,t as u,ab as p,B as n,ac as o,j as f}from"./index-7c8bc001.js";const b={class:"text-h3"},h={class:"d-flex align-center"},g={class:"d-flex align-center"},V=i({__name:"BaseBreadcrumb",props:{title:String,breadcrumbs:Array,icon:String},setup(d){const c=d;return(x,B)=>(l(),_(r,{class:"page-breadcrumb mb-1 mt-1"},{default:s(()=>[e(a,{cols:"12",md:"12"},{default:s(()=>[e(m,{variant:"outlined",elevation:"0",class:"px-4 py-3 withbg"},{default:s(()=>[e(r,{"no-gutters":"",class:"align-center"},{default:s(()=>[e(a,{md:"5"},{default:s(()=>[t("h3",b,u(c.title),1)]),_:1}),e(a,{md:"7",sm:"12",cols:"12"},{default:s(()=>[e(p,{items:c.breadcrumbs,class:"text-h5 justify-md-end pa-1"},{divider:s(()=>[t("div",h,[e(n(o),{size:"17"})])]),prepend:s(()=>[e(f,{size:"small",icon:"mdi-home",class:"text-secondary mr-2"}),t("div",g,[e(n(o),{size:"17"})])]),_:1},8,["items"])]),_:1})]),_:1})]),_:1})]),_:1})]),_:1}))}});export{V as _};


@@ -0,0 +1 @@
import{x as e,o as a,c as t,w as o,a as s,B as n,X as r,T as c}from"./index-7c8bc001.js";const f=e({__name:"BlankLayout",setup(p){return(u,_)=>(a(),t(c,null,{default:o(()=>[s(n(r))]),_:1}))}});export{f as default};


@@ -0,0 +1 @@
import{_ as m}from"./BaseBreadcrumb.vue_vue_type_style_index_0_lang-89ca5198.js";import{_}from"./UiParentCard.vue_vue_type_script_setup_true_lang-03a5c441.js";import{x as p,D as a,o as r,s,a as e,w as t,f as o,V as i,F as n,u as g,c as h,_ as b,e as x,t as y}from"./index-7c8bc001.js";const P=p({__name:"ColorPage",setup(C){const c=a({title:"Colors Page"}),d=a([{title:"Utilities",disabled:!1,href:"#"},{title:"Colors",disabled:!0,href:"#"}]),u=a(["primary","lightprimary","secondary","lightsecondary","info","success","accent","warning","error","darkText","lightText","borderLight","inputBorder","containerBg"]);return(V,k)=>(r(),s(n,null,[e(m,{title:c.value.title,breadcrumbs:d.value},null,8,["title","breadcrumbs"]),e(i,null,{default:t(()=>[e(o,{cols:"12",md:"12"},{default:t(()=>[e(_,{title:"Color Palette"},{default:t(()=>[e(i,null,{default:t(()=>[(r(!0),s(n,null,g(u.value,(l,f)=>(r(),h(o,{md:"3",cols:"12",key:f},{default:t(()=>[e(b,{rounded:"md",class:"align-center justify-center d-flex",height:"100",width:"100%",color:l},{default:t(()=>[x("class: "+y(l),1)]),_:2},1032,["color"])]),_:2},1024))),128))]),_:1})]),_:1})]),_:1})]),_:1})],64))}});export{P as default};


@@ -0,0 +1 @@
import{_ as h}from"./UiParentCard.vue_vue_type_script_setup_true_lang-03a5c441.js";import{o as a,s as t,a as n,w as i,f as b,F as d,u as g,V as C,d as U,e as x,t as c,a8 as B,R as _,c as r,a9 as w,O as v,b as V,aa as N,i as F,q as P,k as f,A as S}from"./index-7c8bc001.js";const D={name:"ConfigPage",components:{UiParentCard:h},data(){return{config_data:{data:[]},save_message_snack:!1,save_message:"",save_message_success:""}},mounted(){this.getConfig()},methods:{getConfig(){_.get("/api/configs").then(o=>{this.config_data=o.data.data,console.log(this.config_data)})},updateConfig(){_.post("/api/configs",this.config_data).then(o=>{console.log(this.config_data),o.data.status==="success"?(this.save_message=o.data.message,this.save_message_snack=!0,this.save_message_success="success"):(this.save_message=o.data.message,this.save_message_snack=!0,this.save_message_success="error")})}}},$=Object.assign(D,{setup(o){return(s,m)=>(a(),t(d,null,[n(C,null,{default:i(()=>[n(b,{cols:"12",md:"12"},{default:i(()=>[(a(!0),t(d,null,g(s.config_data.data,u=>(a(),r(h,{key:u.name,title:u.name,style:{"margin-bottom":"16px"}},{default:i(()=>[(a(!0),t(d,null,g(u.body,e=>(a(),t(d,null,[e.config_type==="item"?(a(),t(d,{key:0},[e.val_type==="bool"?(a(),r(w,{key:0,modelValue:e.value,"onUpdate:modelValue":l=>e.value=l,label:e.name,hint:e.description,color:"primary",inset:""},null,8,["modelValue","onUpdate:modelValue","label","hint"])):e.val_type==="string"?(a(),r(v,{key:1,modelValue:e.value,"onUpdate:modelValue":l=>e.value=l,label:e.name,hint:e.description,style:{"margin-bottom":"8px"},variant:"outlined"},null,8,["modelValue","onUpdate:modelValue","label","hint"])):e.val_type==="int"?(a(),r(v,{key:2,modelValue:e.value,"onUpdate:modelValue":l=>e.value=l,label:e.name,hint:e.description,style:{"margin-bottom":"8px"},variant:"outlined"},null,8,["modelValue","onUpdate:modelValue","label","hint"])):e.val_type==="list"?(a(),t(d,{key:3},[V("span",null,c(e.name),1),n(N,{modelValue:e.value,"onUpdate:modelValue":l=>e.value=l,chips:"",clearable:"",label:"请添加",multiple:"","prepend-icon":"mdi-tag-multiple-outline"},{selection:i(({attrs:l,item:p,select:k,selected:y})=>[n(F,P(l,{"model-value":y,closable:"",onClick:k,"onClick:close":O=>s.remove(p)}),{default:i(()=>[V("strong",null,c(p),1)]),_:2},1040,["model-value","onClick","onClick:close"])]),_:2},1032,["modelValue","onUpdate:modelValue"])],64)):f("",!0)],64)):e.config_type==="divider"?(a(),r(S,{key:1,style:{"margin-top":"8px","margin-bottom":"8px"}})):f("",!0)],64))),256))]),_:2},1032,["title"]))),128))]),_:1})]),_:1}),n(U,{icon:"mdi-content-save",size:"x-large",style:{position:"fixed",right:"52px",bottom:"52px"},color:"darkprimary",onClick:s.updateConfig},null,8,["onClick"]),n(B,{timeout:2e3,elevation:"24",color:s.save_message_success,modelValue:s.save_message_snack,"onUpdate:modelValue":m[0]||(m[0]=u=>s.save_message_snack=u)},{default:i(()=>[x(c(s.save_message),1)]),_:1},8,["color","modelValue"])],64))}});export{$ as default};
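
The minified ConfigPage bundle above drives the dashboard through a small JSON API: `GET /api/configs` returns the configuration tree and `POST /api/configs` writes it back. A rough sketch of exercising those endpoints by hand; the port is an assumption, and a dashboard login/token may be required depending on deployment (neither is visible in this diff):

```shell
PORT=6185  # assumption: replace with the port your dashboard actually listens on

# Read the current configuration tree
curl -s "http://localhost:${PORT}/api/configs"

# Write an edited configuration back (payload shape mirrors what GET returns)
curl -s -X POST "http://localhost:${PORT}/api/configs" \
  -H "Content-Type: application/json" \
  -d @configs.json
```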

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@@ -0,0 +1 @@
.CardMediaWrapper{max-width:720px;margin:0 auto;position:relative}.CardMediaBuild{position:absolute;top:0;left:0;width:100%;animation:5s bounce ease-in-out infinite}.CardMediaParts{position:absolute;top:0;left:0;width:100%;animation:10s blink ease-in-out infinite}


@@ -0,0 +1 @@
import{_ as t}from"./_plugin-vue_export-helper-c27b6911.js";import{o,c,w as s,V as i,a as r,b as e,d as l,e as a,f as d}from"./index-7c8bc001.js";const n="/assets/img-error-bg-ab6474a0.svg",_="/assets/img-error-blue-2675a7a9.svg",m="/assets/img-error-text-a6aebfa0.svg",g="/assets/img-error-purple-edee3fbc.svg";const p={},u={class:"text-center"},f=e("div",{class:"CardMediaWrapper"},[e("img",{src:n,alt:"grid",class:"w-100"}),e("img",{src:_,alt:"grid",class:"CardMediaParts"}),e("img",{src:m,alt:"build",class:"CardMediaBuild"}),e("img",{src:g,alt:"build",class:"CardMediaBuild"})],-1),h=e("h1",{class:"text-h1"},"Something is wrong",-1),v=e("p",null,[e("small",null,[a("The page you are looking was moved, removed, "),e("br"),a("renamed, or might never exist! ")])],-1);function x(b,V){return o(),c(i,{"no-gutters":"",class:"h-100vh"},{default:s(()=>[r(d,{class:"d-flex align-center justify-center"},{default:s(()=>[e("div",u,[f,h,v,r(l,{variant:"flat",color:"primary",class:"mt-4",to:"/","prepend-icon":"mdi-home"},{default:s(()=>[a(" Home")]),_:1})])]),_:1})]),_:1})}const C=t(p,[["render",x]]);export{C as default};


@@ -0,0 +1 @@
import{x as b,o as d,c as h,w as e,a,a6 as C,b as i,K as x,e as o,t as u,G as m,d as r,A as E,L as V,a7 as y,J as w,s as p,f as c,F as f,u as $,V as k,q as S,N as B,O as N,P as T,H as j,a8 as D,R as g,j as F}from"./index-7c8bc001.js";const G={class:"d-sm-flex align-center justify-space-between"},v=b({__name:"ExtensionCard",props:{title:String,link:String},setup(n){const s=n,l=t=>{window.open(t,"_blank")};return(t,_)=>(d(),h(w,{variant:"outlined",elevation:"0",class:"withbg"},{default:e(()=>[a(C,{style:{padding:"10px 20px"}},{default:e(()=>[i("div",G,[a(x,null,{default:e(()=>[o(u(s.title),1)]),_:1}),a(m),a(r,{icon:"mdi-link",variant:"plain",onClick:_[0]||(_[0]=z=>l(s.link))})])]),_:1}),a(E),a(V,null,{default:e(()=>[y(t.$slots,"default")]),_:3})]),_:3}))}}),P=i("div",{style:{"background-color":"white",width:"100%",padding:"16px","border-radius":"10px"}},[i("h3",null,"🧩 已安装的插件")],-1),U={style:{"min-height":"180px","max-height":"180px",overflow:"hidden"}},q={class:"d-flex align-center gap-3"},A=i("div",{style:{"background-color":"white",width:"100%",padding:"16px","border-radius":"10px"}},[i("h3",null,"🧩 插件市场 [待开发]")],-1),I=i("span",{class:"text-h5"},"从 Git 仓库链接安装插件",-1),L=i("small",null,"github, gitee, gitlab 等公开的仓库都行。",-1),O=i("br",null,null,-1),R={name:"ExtensionPage",components:{ExtensionCard:v},data(){return{extension_data:{data:[]},save_message_snack:!1,save_message:"",save_message_success:"",extension_url:"",status:"",dialog:!1,snack_message:"",snack_show:!1,snack_success:"success",install_loading:!1,uninstall_loading:!1}},mounted(){this.getExtensions()},methods:{getExtensions(){g.get("/api/extensions").then(n=>{this.extension_data.data=n.data.data,console.log(this.extension_data)})},newExtension(){this.install_loading=!0,console.log(this.install_loading),g.post("/api/extensions/install",{url:this.extension_url}).then(n=>{if(this.install_loading=!1,n.data.status==="error"){this.snack_message=n.data.message,this.snack_show=!0,this.snack_success="error";return}this.extension_data.data=n.data.data,console.log(this.extension_data),this.extension_url="",this.snack_message=n.data.message,this.snack_show=!0,this.snack_success="success",this.dialog=!1,this.getExtensions()}).catch(n=>{this.install_loading=!1,this.snack_message=n,this.snack_show=!0,this.snack_success="error"})},uninstallExtension(n){this.uninstall_loading=!0,g.post("/api/extensions/uninstall",{name:n}).then(s=>{if(this.uninstall_loading=!1,s.data.status==="error"){this.snack_message=s.data.message,this.snack_show=!0,this.snack_success="error";return}this.extension_data.data=s.data.data,console.log(this.extension_data),this.snack_message=s.data.message,this.snack_show=!0,this.snack_success="success",this.dialog=!1,this.getExtensions()}).catch(s=>{this.uninstall_loading=!1,this.snack_message=s,this.snack_show=!0,this.snack_success="error"})}}},J=Object.assign(R,{setup(n){return(s,l)=>(d(),p(f,null,[a(k,null,{default:e(()=>[a(c,{cols:"12",md:"12"},{default:e(()=>[P]),_:1}),(d(!0),p(f,null,$(s.extension_data.data,t=>(d(),h(c,{cols:"12",md:"6",lg:"4"},{default:e(()=>[(d(),h(v,{key:t.name,title:t.name,link:t.repo,style:{"margin-bottom":"16px"}},{default:e(()=>[i("p",U,u(t.desc),1),i("div",q,[a(F,null,{default:e(()=>[o("mdi-account")]),_:1}),i("span",null,u(t.author),1),a(m),a(r,{variant:"plain",onClick:_=>s.uninstallExtension(t.name),loading:s.uninstall_loading},{default:e(()=>[o("卸 
载")]),_:2},1032,["onClick","loading"])])]),_:2},1032,["title","link"]))]),_:2},1024))),256)),a(c,{cols:"12",md:"12"},{default:e(()=>[A]),_:1})]),_:1}),a(j,{modelValue:s.dialog,"onUpdate:modelValue":l[3]||(l[3]=t=>s.dialog=t),persistent:"",width:"700"},{activator:e(({props:t})=>[a(r,S(t,{icon:"mdi-plus",size:"x-large",style:{position:"fixed",right:"52px",bottom:"52px"},color:"darkprimary"}),null,16)]),default:e(()=>[a(w,null,{default:e(()=>[a(x,null,{default:e(()=>[I]),_:1}),a(V,null,{default:e(()=>[a(B,null,{default:e(()=>[a(k,null,{default:e(()=>[a(c,{cols:"12"},{default:e(()=>[a(N,{label:"Git 库链接",modelValue:s.extension_url,"onUpdate:modelValue":l[0]||(l[0]=t=>s.extension_url=t),required:""},null,8,["modelValue"])]),_:1})]),_:1})]),_:1}),L,O,i("small",null,u(s.status),1)]),_:1}),a(T,null,{default:e(()=>[a(m),a(r,{color:"blue-darken-1",variant:"text",onClick:l[1]||(l[1]=t=>s.dialog=!1)},{default:e(()=>[o(" 关闭 ")]),_:1}),a(r,{color:"blue-darken-1",variant:"text",loading:s.install_loading,onClick:l[2]||(l[2]=t=>s.newExtension(s.extension_url))},{default:e(()=>[o(" 安装 ")]),_:1},8,["loading"])]),_:1})]),_:1})]),_:1},8,["modelValue"]),a(D,{timeout:2e3,elevation:"24",color:s.snack_success,modelValue:s.snack_show,"onUpdate:modelValue":l[4]||(l[4]=t=>s.snack_show=t)},{default:e(()=>[o(u(s.snack_message),1)]),_:1},8,["color","modelValue"])],64))}});export{J as default};

File diff suppressed because one or more lines are too long


@@ -0,0 +1 @@
.custom-devider{border-color:#00000014!important}.googleBtn{border-color:#00000014;margin:30px 0 20px}.outlinedInput .v-field{border:1px solid rgba(0,0,0,.08);box-shadow:none}.orbtn{padding:2px 40px;border-color:#00000014;margin:20px 15px}.pwdInput{position:relative}.pwdInput .v-input__append{position:absolute;right:10px;top:50%;transform:translateY(-50%)}.loginForm .v-text-field .v-field--active input{font-weight:500}.loginBox{max-width:475px;margin:0 auto}

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1 @@
import{at as _,x as d,D as n,o as c,s as m,a as f,w as p,au as r,b as a,av as o,B as t,aw as h}from"./index-7c8bc001.js";const s={Sidebar_drawer:!0,Customizer_drawer:!1,mini_sidebar:!1,fontTheme:"Roboto",inputBg:!1},l=_({id:"customizer",state:()=>({Sidebar_drawer:s.Sidebar_drawer,Customizer_drawer:s.Customizer_drawer,mini_sidebar:s.mini_sidebar,fontTheme:"Poppins",inputBg:s.inputBg}),getters:{},actions:{SET_SIDEBAR_DRAWER(){this.Sidebar_drawer=!this.Sidebar_drawer},SET_MINI_SIDEBAR(e){this.mini_sidebar=e},SET_FONT(e){this.fontTheme=e}}}),u={class:"logo",style:{display:"flex","align-items":"center"}},b={style:{"font-size":"24px","font-weight":"1000"}},w={style:{"font-size":"20px","font-weight":"1000"}},S={style:{"font-size":"20px"}},z=d({__name:"LogoDark",setup(e){n("rgb(var(--v-theme-primary))"),n("rgb(var(--v-theme-secondary))");const i=l();return(g,B)=>(c(),m("div",u,[f(t(h),{to:"/",style:{"text-decoration":"none",color:"black"}},{default:p(()=>[r(a("span",b,"AstrBot 仪表盘",512),[[o,!t(i).mini_sidebar]]),r(a("span",w,"Astr",512),[[o,t(i).mini_sidebar]]),r(a("span",S,"Bot",512),[[o,t(i).mini_sidebar]])]),_:1})]))}});export{z as _,l as u};

View File

@@ -0,0 +1 @@
import{_ as o}from"./BaseBreadcrumb.vue_vue_type_style_index_0_lang-89ca5198.js";import{_ as i}from"./UiParentCard.vue_vue_type_script_setup_true_lang-03a5c441.js";import{x as n,D as a,o as c,s as m,a as e,w as t,f as d,b as f,V as _,F as u}from"./index-7c8bc001.js";const p=["innerHTML"],v=n({__name:"MaterialIcons",setup(b){const s=a({title:"Material Icons"}),r=a('<iframe src="https://materialdesignicons.com/" frameborder="0" width="100%" height="1000"></iframe>'),l=a([{title:"Icons",disabled:!1,href:"#"},{title:"Material Icons",disabled:!0,href:"#"}]);return(h,M)=>(c(),m(u,null,[e(o,{title:s.value.title,breadcrumbs:l.value},null,8,["title","breadcrumbs"]),e(_,null,{default:t(()=>[e(d,{cols:"12",md:"12"},{default:t(()=>[e(i,{title:"Material Icons"},{default:t(()=>[f("div",{innerHTML:r.value},null,8,p)]),_:1})]),_:1})]),_:1})],64))}});export{v as default};

View File

@@ -0,0 +1 @@
import{_ as B}from"./LogoDark.vue_vue_type_script_setup_true_lang-4faa128a.js";import{x as y,D as o,o as b,s as U,a as e,w as a,b as n,B as $,d as u,f as d,A as _,e as f,V as r,O as m,an as A,as as E,F,c as T,N as q,J as V,L as P}from"./index-7c8bc001.js";const z="/assets/social-google-a359a253.svg",N=["src"],S=n("span",{class:"ml-2"},"Sign up with Google",-1),D=n("h5",{class:"text-h5 text-center my-4 mb-8"},"Sign up with Email address",-1),G={class:"d-sm-inline-flex align-center mt-2 mb-7 mb-sm-0 font-weight-bold"},L=n("a",{href:"#",class:"ml-1 text-lightText"},"Terms and Condition",-1),O={class:"mt-5 text-right"},j=y({__name:"AuthRegister",setup(w){const c=o(!1),i=o(!1),p=o(""),v=o(""),g=o(),h=o(""),x=o(""),k=o([s=>!!s||"Password is required",s=>s&&s.length<=10||"Password must be less than 10 characters"]),C=o([s=>!!s||"E-mail is required",s=>/.+@.+\..+/.test(s)||"E-mail must be valid"]);function R(){g.value.validate()}return(s,l)=>(b(),U(F,null,[e(u,{block:"",color:"primary",variant:"outlined",class:"text-lightText googleBtn"},{default:a(()=>[n("img",{src:$(z),alt:"google"},null,8,N),S]),_:1}),e(r,null,{default:a(()=>[e(d,{class:"d-flex align-center"},{default:a(()=>[e(_,{class:"custom-devider"}),e(u,{variant:"outlined",class:"orbtn",rounded:"md",size:"small"},{default:a(()=>[f("OR")]),_:1}),e(_,{class:"custom-devider"})]),_:1})]),_:1}),D,e(E,{ref_key:"Regform",ref:g,"lazy-validation":"",action:"/dashboards/analytical",class:"mt-7 loginForm"},{default:a(()=>[e(r,null,{default:a(()=>[e(d,{cols:"12",sm:"6"},{default:a(()=>[e(m,{modelValue:h.value,"onUpdate:modelValue":l[0]||(l[0]=t=>h.value=t),density:"comfortable","hide-details":"auto",variant:"outlined",color:"primary",label:"Firstname"},null,8,["modelValue"])]),_:1}),e(d,{cols:"12",sm:"6"},{default:a(()=>[e(m,{modelValue:x.value,"onUpdate:modelValue":l[1]||(l[1]=t=>x.value=t),density:"comfortable","hide-details":"auto",variant:"outlined",color:"primary",label:"Lastname"},null,8,["modelValue"])]),_:1})]),_:1}),e(m,{modelValue:v.value,"onUpdate:modelValue":l[2]||(l[2]=t=>v.value=t),rules:C.value,label:"Email Address / Username",class:"mt-4 mb-4",required:"",density:"comfortable","hide-details":"auto",variant:"outlined",color:"primary"},null,8,["modelValue","rules"]),e(m,{modelValue:p.value,"onUpdate:modelValue":l[3]||(l[3]=t=>p.value=t),rules:k.value,label:"Password",required:"",density:"comfortable",variant:"outlined",color:"primary","hide-details":"auto","append-icon":i.value?"mdi-eye":"mdi-eye-off",type:i.value?"text":"password","onClick:append":l[4]||(l[4]=t=>i.value=!i.value),class:"pwdInput"},null,8,["modelValue","rules","append-icon","type"]),n("div",G,[e(A,{modelValue:c.value,"onUpdate:modelValue":l[5]||(l[5]=t=>c.value=t),rules:[t=>!!t||"You must agree to continue!"],label:"Agree with?",required:"",color:"primary",class:"ms-n2","hide-details":""},null,8,["modelValue","rules"]),L]),e(u,{color:"secondary",block:"",class:"mt-2",variant:"flat",size:"large",onClick:l[6]||(l[6]=t=>R())},{default:a(()=>[f("Sign Up")]),_:1})]),_:1},512),n("div",O,[e(_),e(u,{variant:"plain",to:"/auth/login",class:"mt-2 text-capitalize mr-n2"},{default:a(()=>[f("Already have an account?")]),_:1})])],64))}});const I={class:"pa-7 pa-sm-12"},J=n("h2",{class:"text-secondary text-h2 mt-8"},"Sign up",-1),Y=n("h4",{class:"text-disabled text-h4 mt-3"},"Enter credentials to continue",-1),M=y({__name:"RegisterPage",setup(w){return(c,i)=>(b(),T(r,{class:"h-100vh","no-gutters":""},{default:a(()=>[e(d,{cols:"12",class:"d-flex align-center 
bg-lightprimary"},{default:a(()=>[e(q,null,{default:a(()=>[n("div",I,[e(r,{justify:"center"},{default:a(()=>[e(d,{cols:"12",lg:"10",xl:"6",md:"7"},{default:a(()=>[e(V,{elevation:"0",class:"loginBox"},{default:a(()=>[e(V,{variant:"outlined"},{default:a(()=>[e(P,{class:"pa-9"},{default:a(()=>[e(r,null,{default:a(()=>[e(d,{cols:"12",class:"text-center"},{default:a(()=>[e(B),J,Y]),_:1})]),_:1}),e(j)]),_:1})]),_:1})]),_:1})]),_:1})]),_:1})])]),_:1})]),_:1})]),_:1}))}});export{M as default};

View File

@@ -0,0 +1 @@
.custom-devider{border-color:#00000014!important}.googleBtn{border-color:#00000014;margin:30px 0 20px}.outlinedInput .v-field{border:1px solid rgba(0,0,0,.08);box-shadow:none}.orbtn{padding:2px 40px;border-color:#00000014;margin:20px 15px}.pwdInput{position:relative}.pwdInput .v-input__append{position:absolute;right:10px;top:50%;transform:translateY(-50%)}.loginBox{max-width:475px;margin:0 auto}

View File

@@ -0,0 +1 @@
import{_ as c}from"./BaseBreadcrumb.vue_vue_type_style_index_0_lang-89ca5198.js";import{_ as f}from"./UiParentCard.vue_vue_type_script_setup_true_lang-03a5c441.js";import{x as m,D as s,o as l,s as r,a as e,w as a,f as i,V as o,F as d,u as _,J as p,U as b,b as h,t as g}from"./index-7c8bc001.js";const v=m({__name:"ShadowPage",setup(w){const n=s({title:"Shadow Page"}),u=s([{title:"Utilities",disabled:!1,href:"#"},{title:"Shadow",disabled:!0,href:"#"}]);return(V,x)=>(l(),r(d,null,[e(c,{title:n.value.title,breadcrumbs:u.value},null,8,["title","breadcrumbs"]),e(o,null,{default:a(()=>[e(i,{cols:"12",md:"12"},{default:a(()=>[e(f,{title:"Basic Shadow"},{default:a(()=>[e(o,{justify:"center"},{default:a(()=>[(l(),r(d,null,_(25,t=>e(i,{key:t,cols:"auto"},{default:a(()=>[e(p,{height:"100",width:"100",class:b(["mb-5",["d-flex justify-center align-center bg-primary",`elevation-${t}`]])},{default:a(()=>[h("div",null,g(t-1),1)]),_:2},1032,["class"])]),_:2},1024)),64))]),_:1})]),_:1})]),_:1})]),_:1})],64))}});export{v as default};

View File

@@ -0,0 +1 @@
import{_ as o}from"./BaseBreadcrumb.vue_vue_type_style_index_0_lang-89ca5198.js";import{_ as n}from"./UiParentCard.vue_vue_type_script_setup_true_lang-03a5c441.js";import{x as c,D as a,o as i,s as m,a as e,w as t,f as d,b as f,V as _,F as u}from"./index-7c8bc001.js";const b=["innerHTML"],w=c({__name:"TablerIcons",setup(p){const s=a({title:"Tabler Icons"}),r=a('<iframe src="https://tablericons.com/" frameborder="0" width="100%" height="600"></iframe>'),l=a([{title:"Icons",disabled:!1,href:"#"},{title:"Tabler Icons",disabled:!0,href:"#"}]);return(h,T)=>(i(),m(u,null,[e(o,{title:s.value.title,breadcrumbs:l.value},null,8,["title","breadcrumbs"]),e(_,null,{default:t(()=>[e(d,{cols:"12",md:"12"},{default:t(()=>[e(n,{title:"Tabler Icons"},{default:t(()=>[f("div",{innerHTML:r.value},null,8,b)]),_:1})]),_:1})]),_:1})],64))}});export{w as default};

View File

@@ -0,0 +1 @@
import{_ as m}from"./BaseBreadcrumb.vue_vue_type_style_index_0_lang-89ca5198.js";import{_ as v}from"./UiParentCard.vue_vue_type_script_setup_true_lang-03a5c441.js";import{x as f,o as i,c as g,w as e,a,a6 as y,K as b,e as w,t as d,A as C,L as V,a7 as L,J as _,D as o,s as h,f as k,b as t,F as x,u as B,U as H,V as T}from"./index-7c8bc001.js";const s=f({__name:"UiChildCard",props:{title:String},setup(r){const l=r;return(n,c)=>(i(),g(_,{variant:"outlined"},{default:e(()=>[a(y,{class:"py-3"},{default:e(()=>[a(b,{class:"text-h5"},{default:e(()=>[w(d(l.title),1)]),_:1})]),_:1}),a(C),a(V,null,{default:e(()=>[L(n.$slots,"default")]),_:3})]),_:3}))}}),D={class:"d-flex flex-column gap-1"},S={class:"text-caption pa-2 bg-lightprimary"},z=t("div",{class:"text-grey"},"Class",-1),N={class:"font-weight-medium"},U=t("div",null,[t("p",{class:"text-left"},"Left aligned on all viewport sizes."),t("p",{class:"text-center"},"Center aligned on all viewport sizes."),t("p",{class:"text-right"},"Right aligned on all viewport sizes."),t("p",{class:"text-sm-left"},"Left aligned on viewports SM (small) or wider."),t("p",{class:"text-right text-md-left"},"Left aligned on viewports MD (medium) or wider."),t("p",{class:"text-right text-lg-left"},"Left aligned on viewports LG (large) or wider."),t("p",{class:"text-right text-xl-left"},"Left aligned on viewports XL (extra-large) or wider.")],-1),$=t("div",{class:"d-flex justify-space-between flex-row"},[t("a",{href:"#",class:"text-decoration-none"},"Non-underlined link"),t("div",{class:"text-decoration-line-through"},"Line-through text"),t("div",{class:"text-decoration-overline"},"Overline text"),t("div",{class:"text-decoration-underline"},"Underline text")],-1),M=t("div",null,[t("p",{class:"text-high-emphasis"},"High-emphasis has an opacity of 87% in light theme and 100% in dark."),t("p",{class:"text-medium-emphasis"},"Medium-emphasis text and hint text have opacities of 60% in light theme and 70% in dark."),t("p",{class:"text-disabled"},"Disabled text has an opacity of 38% in light theme and 50% in dark.")],-1),A=f({__name:"TypographyPage",setup(r){const l=o({title:"Typography Page"}),n=o([["Heading 1","text-h1"],["Heading 2","text-h2"],["Heading 3","text-h3"],["Heading 4","text-h4"],["Heading 5","text-h5"],["Heading 6","text-h6"],["Subtitle 1","text-subtitle-1"],["Subtitle 2","text-subtitle-2"],["Body 1","text-body-1"],["Body 2","text-body-2"],["Button","text-button"],["Caption","text-caption"],["Overline","text-overline"]]),c=o([{title:"Utilities",disabled:!1,href:"#"},{title:"Typography",disabled:!0,href:"#"}]);return(O,F)=>(i(),h(x,null,[a(m,{title:l.value.title,breadcrumbs:c.value},null,8,["title","breadcrumbs"]),a(T,null,{default:e(()=>[a(k,{cols:"12",md:"12"},{default:e(()=>[a(v,{title:"Basic Typography"},{default:e(()=>[a(s,{title:"Heading"},{default:e(()=>[t("div",D,[(i(!0),h(x,null,B(n.value,([p,u])=>(i(),g(_,{variant:"outlined",key:p,class:"my-4"},{default:e(()=>[t("div",{class:H([u,"pa-2"])},d(p),3),t("div",S,[z,t("div",N,d(u),1)])]),_:2},1024))),128))])]),_:1}),a(s,{title:"Text-alignment",class:"mt-8"},{default:e(()=>[U]),_:1}),a(s,{title:"Decoration",class:"mt-8"},{default:e(()=>[$]),_:1}),a(s,{title:"Opacity",class:"mt-8"},{default:e(()=>[M]),_:1})]),_:1})]),_:1})]),_:1})],64))}});export{A as default};

View File

@@ -0,0 +1 @@
import{x as n,o,c as i,w as e,a,a6 as d,b as c,K as u,e as p,t as _,a7 as s,A as f,L as V,J as m}from"./index-7c8bc001.js";const C={class:"d-sm-flex align-center justify-space-between"},h=n({__name:"UiParentCard",props:{title:String},setup(l){const r=l;return(t,x)=>(o(),i(m,{variant:"outlined",elevation:"0",class:"withbg"},{default:e(()=>[a(d,null,{default:e(()=>[c("div",C,[a(u,null,{default:e(()=>[p(_(r.title),1)]),_:1}),s(t.$slots,"action")])]),_:3}),a(f),a(V,null,{default:e(()=>[s(t.$slots,"default")]),_:3})]),_:3}))}});export{h as _};

View File

@@ -0,0 +1 @@
const s=(t,r)=>{const o=t.__vccOpts||t;for(const[c,e]of r)o[c]=e;return o};export{s as _};

View File

@@ -0,0 +1,34 @@
<svg width="676" height="391" viewBox="0 0 676 391" fill="none" xmlns="http://www.w3.org/2000/svg">
<g opacity="0.09">
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(0.866041 -0.499972 -0.866041 -0.499972 4.49127 197.53)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(-0.866041 -0.499972 -0.866041 0.499972 342.315 387.578)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(0.866041 -0.499972 -0.866041 -0.499972 28.0057 211.105)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(-0.866041 -0.499972 -0.866041 0.499972 365.829 374.002)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(0.866041 -0.499972 -0.866041 -0.499972 51.52 224.68)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(-0.866041 -0.499972 -0.866041 0.499972 389.344 360.428)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(0.866041 -0.499972 -0.866041 -0.499972 75.0345 238.255)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(-0.866041 -0.499972 -0.866041 0.499972 412.858 346.852)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(0.866041 -0.499972 -0.866041 -0.499972 98.5488 251.83)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(-0.866041 -0.499972 -0.866041 0.499972 436.372 333.277)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(0.866041 -0.499972 -0.866041 -0.499972 122.063 265.405)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(-0.866041 -0.499972 -0.866041 0.499972 459.887 319.703)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(0.866041 -0.499972 -0.866041 -0.499972 145.578 278.979)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(-0.866041 -0.499972 -0.866041 0.499972 483.401 306.127)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(0.866041 -0.499972 -0.866041 -0.499972 169.092 292.556)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(-0.866041 -0.499972 -0.866041 0.499972 506.916 292.551)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(0.866041 -0.499972 -0.866041 -0.499972 192.597 306.127)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(-0.866041 -0.499972 -0.866041 0.499972 530.43 278.977)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(0.866041 -0.499972 -0.866041 -0.499972 216.111 319.703)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(-0.866041 -0.499972 -0.866041 0.499972 553.944 265.402)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(0.866041 -0.499972 -0.866041 -0.499972 239.626 333.277)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(-0.866041 -0.499972 -0.866041 0.499972 577.459 251.827)" stroke="black"/>
<path d="M263.231 346.905L601.064 151.871" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(-0.866041 -0.499972 -0.866041 0.499972 600.973 238.252)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(0.866041 -0.499972 -0.866041 -0.499972 286.654 360.428)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(-0.866041 -0.499972 -0.866041 0.499972 624.487 224.677)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(0.866041 -0.499972 -0.866041 -0.499972 310.169 374.002)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(-0.866041 -0.499972 -0.866041 0.499972 648.002 211.102)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(0.866041 -0.499972 -0.866041 -0.499972 333.683 387.578)" stroke="black"/>
<line y1="-0.5" x2="390.089" y2="-0.5" transform="matrix(-0.866041 -0.499972 -0.866041 0.499972 671.516 197.527)" stroke="black"/>
</g>
</svg>


View File

@@ -0,0 +1,43 @@
<svg width="676" height="395" viewBox="0 0 676 395" fill="none" xmlns="http://www.w3.org/2000/svg">
<rect width="26.998" height="26.8293" transform="matrix(0.866041 -0.499972 0.866041 0.499972 361.873 290.126)" fill="#E3F2FD"/>
<rect width="24.2748" height="24.1231" transform="matrix(0.866041 -0.499972 0.866041 0.499972 364.249 291.115)" fill="#90CAF9"/>
<rect width="26.998" height="26.8293" transform="matrix(0.866041 -0.499972 0.866041 0.499972 291.67 86.4912)" fill="#E3F2FD"/>
<rect width="24.2748" height="24.1231" transform="matrix(0.866041 -0.499972 0.866041 0.499972 294.046 87.48)" fill="#90CAF9"/>
<g filter="url(#filter0_d)">
<path d="M370.694 211.828L365.394 208.768V215.835L365.404 215.829C365.459 216.281 365.785 216.724 366.383 217.069L417.03 246.308C418.347 247.068 420.481 247.068 421.798 246.308L468.671 219.248C469.374 218.842 469.702 218.301 469.654 217.77V210.861L464.282 213.962L418.024 187.257C416.708 186.497 414.573 186.497 413.257 187.257L370.694 211.828Z" fill="url(#paint0_linear)"/>
</g>
<rect width="59.6284" height="63.9858" rx="5" transform="matrix(0.866041 -0.499972 0.866041 0.499972 364 208.812)" fill="#90CAF9"/>
<rect width="59.6284" height="63.9858" rx="5" transform="matrix(0.866041 -0.499972 0.866041 0.499972 364 208.812)" fill="url(#paint1_linear)"/>
<rect width="56.6816" height="60.8238" rx="5" transform="matrix(0.866041 -0.499972 0.866041 0.499972 366.645 208.761)" fill="url(#paint2_linear)"/>
<path d="M421.238 206.161C421.238 206.434 421.62 206.655 422.092 206.655L432.159 206.656C435.164 206.656 437.6 208.063 437.601 209.798C437.602 211.533 435.166 212.939 432.162 212.938L422.09 212.937C421.62 212.937 421.24 213.157 421.24 213.428L421.241 215.814C421.241 216.087 421.624 216.308 422.096 216.308L432.689 216.309C438.917 216.31 443.967 213.395 443.965 209.799C443.964 206.202 438.914 203.286 432.684 203.286L422.086 203.284C421.617 203.284 421.236 203.504 421.237 203.775L421.238 206.161Z" fill="#1E88E5"/>
<path d="M413.422 213.43C413.422 213.157 413.039 212.936 412.567 212.936L402.896 212.935C399.891 212.935 397.455 211.528 397.454 209.793C397.453 208.059 399.889 206.652 402.894 206.653L412.57 206.654C413.039 206.654 413.419 206.435 413.419 206.164L413.418 203.777C413.418 203.504 413.035 203.283 412.563 203.283L402.366 203.282C396.138 203.281 391.089 206.197 391.09 209.793C391.091 213.389 396.141 216.305 402.371 216.306L412.573 216.307C413.042 216.307 413.423 216.088 413.423 215.817L413.422 213.43Z" fill="#1E88E5"/>
<path d="M407.999 198.145L411.211 201.235C411.266 201.288 411.332 201.336 411.405 201.379C411.813 201.614 412.461 201.669 412.979 201.49C413.59 201.278 413.787 200.821 413.421 200.469L410.209 197.379C409.843 197.027 409.051 196.913 408.441 197.124C407.831 197.335 407.633 197.793 407.999 198.145Z" fill="#1E88E5"/>
<path d="M416.235 200.853C416.235 201.058 416.38 201.244 416.613 201.379C416.846 201.513 417.168 201.597 417.524 201.597C418.236 201.596 418.813 201.263 418.813 200.852L418.812 197.021C418.811 196.61 418.234 196.277 417.522 196.277C416.811 196.278 416.234 196.611 416.234 197.022L416.235 200.853Z" fill="#1E88E5"/>
<path d="M421.627 200.47C421.317 200.769 421.412 201.143 421.82 201.379C421.893 201.421 421.977 201.459 422.069 201.491C422.68 201.703 423.472 201.588 423.838 201.236L427.047 198.147C427.413 197.794 427.215 197.337 426.605 197.126C425.994 196.915 425.203 197.029 424.836 197.381L421.627 200.47Z" fill="#1E88E5"/>
<path d="M427.056 221.447L423.844 218.357C423.478 218.005 422.686 217.891 422.076 218.102C421.466 218.314 421.268 218.771 421.634 219.123L424.846 222.213C424.901 222.266 424.967 222.314 425.04 222.357C425.448 222.592 426.097 222.647 426.614 222.468C427.225 222.257 427.423 221.799 427.056 221.447Z" fill="#1E88E5"/>
<path d="M418.82 218.739C418.82 218.328 418.243 217.995 417.531 217.995C416.819 217.995 416.242 218.329 416.242 218.74L416.243 222.57C416.244 222.776 416.388 222.962 416.621 223.096C416.854 223.231 417.177 223.314 417.533 223.314C418.245 223.314 418.822 222.981 418.821 222.57L418.82 218.739Z" fill="#1E88E5"/>
<path d="M413.428 219.122C413.794 218.77 413.596 218.312 412.986 218.101C412.375 217.89 411.584 218.004 411.217 218.356L408.008 221.445C407.698 221.744 407.793 222.118 408.201 222.354C408.274 222.396 408.358 222.434 408.45 222.466C409.061 222.678 409.853 222.563 410.219 222.211L413.428 219.122Z" fill="#1E88E5"/>
<defs>
<filter id="filter0_d" x="301.394" y="186.687" width="232.264" height="208.191" filterUnits="userSpaceOnUse" color-interpolation-filters="sRGB">
<feFlood flood-opacity="0" result="BackgroundImageFix"/>
<feColorMatrix in="SourceAlpha" type="matrix" values="0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 127 0"/>
<feOffset dy="84"/>
<feGaussianBlur stdDeviation="32"/>
<feColorMatrix type="matrix" values="0 0 0 0 0.129412 0 0 0 0 0.588235 0 0 0 0 0.952941 0 0 0 0.2 0"/>
<feBlend mode="normal" in2="BackgroundImageFix" result="effect1_dropShadow"/>
<feBlend mode="normal" in="SourceGraphic" in2="effect1_dropShadow" result="shape"/>
</filter>
<linearGradient id="paint0_linear" x1="417.526" y1="205.789" x2="365.394" y2="216.782" gradientUnits="userSpaceOnUse">
<stop stop-color="#2196F3"/>
<stop offset="1" stop-color="#B1DCFF"/>
</linearGradient>
<linearGradient id="paint1_linear" x1="0.503035" y1="2.68177" x2="20.3032" y2="42.2842" gradientUnits="userSpaceOnUse">
<stop stop-color="#FAFAFA" stop-opacity="0.74"/>
<stop offset="1" stop-color="#91CBFA"/>
</linearGradient>
<linearGradient id="paint2_linear" x1="-18.5494" y1="-44.8799" x2="14.7845" y2="40.5766" gradientUnits="userSpaceOnUse">
<stop stop-color="#FAFAFA" stop-opacity="0.74"/>
<stop offset="1" stop-color="#91CBFA"/>
</linearGradient>
</defs>
</svg>


View File

@@ -0,0 +1,42 @@
<svg width="710" height="391" viewBox="0 0 710 391" fill="none" xmlns="http://www.w3.org/2000/svg">
<rect width="26.9258" height="26.7576" transform="matrix(0.866041 -0.499972 0.866041 0.499972 161.088 154.333)" fill="#EDE7F6"/>
<rect width="24.9267" height="24.7709" transform="matrix(0.866041 -0.499972 0.866041 0.499972 162.809 155.327)" fill="#B39DDB"/>
<rect width="26.9258" height="26.7576" transform="matrix(0.866041 -0.499972 0.866041 0.499972 536.744 181.299)" fill="#EDE7F6"/>
<rect width="24.9267" height="24.7709" transform="matrix(0.866041 -0.499972 0.866041 0.499972 538.465 182.292)" fill="#B39DDB"/>
<g filter="url(#filter0_d)">
<path d="M67.7237 137.573V134.673H64.009V140.824L64.0177 140.829C64.0367 141.477 64.4743 142.121 65.3305 142.615L103.641 164.733C105.393 165.744 108.232 165.744 109.983 164.733L204.044 110.431C204.879 109.949 205.316 109.324 205.355 108.693L205.355 108.692V108.68C205.358 108.628 205.358 108.576 205.355 108.523L205.362 102.335L200.065 104.472L165.733 84.6523C163.982 83.6413 161.142 83.6413 159.391 84.6523L67.7237 137.573Z" fill="url(#paint0_linear)"/>
</g>
<rect width="115.933" height="51.5596" rx="5" transform="matrix(0.866041 -0.499972 0.866041 0.499972 62.1588 134.683)" fill="#673AB7"/>
<rect width="115.933" height="51.5596" rx="5" transform="matrix(0.866041 -0.499972 0.866041 0.499972 62.1588 134.683)" fill="url(#paint1_linear)" fill-opacity="0.3"/>
<mask id="mask0" mask-type="alpha" maskUnits="userSpaceOnUse" x="64" y="78" width="141" height="81">
<rect width="115.933" height="51.5596" rx="5" transform="matrix(0.866041 -0.499972 0.866041 0.499972 62.1588 134.683)" fill="#673AB7"/>
</mask>
<g mask="url(#mask0)">
</g>
<mask id="mask1" mask-type="alpha" maskUnits="userSpaceOnUse" x="64" y="78" width="141" height="81">
<rect width="115.933" height="51.5596" rx="5" transform="matrix(0.866041 -0.499972 0.866041 0.499972 62.1588 134.683)" fill="#673AB7"/>
</mask>
<g mask="url(#mask1)">
<rect width="64.3732" height="64.3732" rx="5" transform="matrix(0.866041 -0.499972 0.866041 0.499972 111.303 81.6006)" fill="#5E35B1"/>
<rect opacity="0.7" x="0.866041" width="63.3732" height="63.3732" rx="4.5" transform="matrix(0.866041 -0.499972 0.866041 0.499972 79.1848 87.8305)" stroke="#5E35B1"/>
</g>
<defs>
<filter id="filter0_d" x="0.0090332" y="83.894" width="269.353" height="229.597" filterUnits="userSpaceOnUse" color-interpolation-filters="sRGB">
<feFlood flood-opacity="0" result="BackgroundImageFix"/>
<feColorMatrix in="SourceAlpha" type="matrix" values="0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 127 0"/>
<feOffset dy="84"/>
<feGaussianBlur stdDeviation="32"/>
<feColorMatrix type="matrix" values="0 0 0 0 0.403922 0 0 0 0 0.227451 0 0 0 0 0.717647 0 0 0 0.2 0"/>
<feBlend mode="normal" in2="BackgroundImageFix" result="effect1_dropShadow"/>
<feBlend mode="normal" in="SourceGraphic" in2="effect1_dropShadow" result="shape"/>
</filter>
<linearGradient id="paint0_linear" x1="200.346" y1="102.359" x2="71.0293" y2="158.071" gradientUnits="userSpaceOnUse">
<stop stop-color="#A491C8"/>
<stop offset="1" stop-color="#D7C5F8"/>
</linearGradient>
<linearGradient id="paint1_linear" x1="8.1531" y1="-0.145767" x2="57.1962" y2="72.3003" gradientUnits="userSpaceOnUse">
<stop stop-color="white"/>
<stop offset="1" stop-color="white" stop-opacity="0"/>
</linearGradient>
</defs>
</svg>


View File

@@ -0,0 +1,27 @@
<svg width="676" height="391" viewBox="0 0 676 391" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M267.744 237.142L279.699 230.24L300.636 242.329L288.682 249.231L313.566 263.598L286.344 279.314L261.46 264.947L215.984 291.203L197.779 282.558L169.334 211.758L169.092 211.618L196.313 195.902L267.744 237.142ZM219.359 265.077L240.523 252.859L204.445 232.029L205.487 234.589L219.359 265.077Z" fill="#FFAB91"/>
<path d="M469.959 120.206L481.913 113.304L502.851 125.392L490.897 132.294L515.78 146.661L488.559 162.377L463.675 148.011L418.199 174.266L399.994 165.621L371.548 94.8211L371.307 94.6816L398.528 78.9654L469.959 120.206ZM421.574 148.141L442.737 135.922L406.66 115.093L407.701 117.653L421.574 148.141Z" fill="#FFAB91"/>
<path d="M204.523 235.027V232.237L219.401 265.014L240.555 252.926V255.018L218.936 267.339L204.523 235.027Z" fill="#D84315"/>
<path d="M406.738 118.09V115.301L421.616 148.078L442.77 135.99V138.082L421.151 150.402L406.738 118.09Z" fill="#D84315"/>
<rect width="109.114" height="136.405" transform="matrix(0.866025 -0.5 0.866025 0.5 220.507 181.925)" fill="url(#paint0_linear)"/>
<rect width="40.2357" height="70.0545" transform="matrix(0.866025 -0.5 0.866025 0.5 280.437 201.886)" fill="url(#paint1_linear)"/>
<rect x="25.1147" width="80.1144" height="107.405" transform="matrix(0.866025 -0.5 0.866025 0.5 223.872 194.482)" stroke="#1565C0" stroke-width="29"/>
<rect x="25.1147" width="80.1144" height="107.405" transform="matrix(0.866025 -0.5 0.866025 0.5 223.872 194.482)" stroke="url(#paint2_linear)" stroke-width="29"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M279.517 230.177L267.662 237.15L196.064 195.772L168.866 211.58L169.331 212.097L170.096 214.002L196.436 198.795L267.866 240.035L279.821 233.133L298.211 243.751L300.787 242.265L279.517 230.177ZM291.278 250.695L288.804 252.124L311.1 264.996L313.805 263.418L291.278 250.695Z" fill="#D84315"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M481.732 113.24L469.877 120.214L398.279 78.8359L371.081 94.6433L371.546 95.1603L372.311 97.0652L398.651 81.8581L470.081 123.099L482.036 116.196L500.426 126.814L503.002 125.328L481.732 113.24ZM493.493 133.759L491.019 135.187L513.315 148.06L516.02 146.482L493.493 133.759Z" fill="#D84315"/>
<path d="M288.674 252.229V249.207L291.929 251.067L288.674 252.229Z" fill="#D84315"/>
<defs>
<linearGradient id="paint0_linear" x1="77.7511" y1="139.902" x2="-10.8629" y2="8.75671" gradientUnits="userSpaceOnUse">
<stop stop-color="#3076C8"/>
<stop offset="0.992076" stop-color="#91CBFA"/>
</linearGradient>
<linearGradient id="paint1_linear" x1="25.8162" y1="51.0447" x2="68.7073" y2="-5.41524" gradientUnits="userSpaceOnUse">
<stop stop-color="#2E75C7"/>
<stop offset="1" stop-color="#4283CC"/>
</linearGradient>
<linearGradient id="paint2_linear" x1="-16.1224" y1="-47.972" x2="123.494" y2="290.853" gradientUnits="userSpaceOnUse">
<stop stop-color="white"/>
<stop offset="1" stop-color="white" stop-opacity="0"/>
</linearGradient>
</defs>
</svg>


File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1,6 @@
<svg width="22" height="22" viewBox="0 0 22 22" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M5.06129 13.2253L4.31871 15.9975L1.60458 16.0549C0.793457 14.5504 0.333374 12.8292 0.333374 11C0.333374 9.23119 0.763541 7.56319 1.52604 6.09448H1.52662L3.94296 6.53748L5.00146 8.93932C4.77992 9.58519 4.65917 10.2785 4.65917 11C4.65925 11.783 4.80108 12.5332 5.06129 13.2253Z" fill="#FBBB00"/>
<path d="M21.4804 9.00732C21.6029 9.65257 21.6668 10.3189 21.6668 11C21.6668 11.7637 21.5865 12.5086 21.4335 13.2271C20.9143 15.6722 19.5575 17.8073 17.678 19.3182L17.6774 19.3177L14.6339 19.1624L14.2031 16.4734C15.4503 15.742 16.425 14.5974 16.9384 13.2271H11.2346V9.00732H17.0216H21.4804Z" fill="#518EF8"/>
<path d="M17.6772 19.3176L17.6777 19.3182C15.8498 20.7875 13.5277 21.6666 11 21.6666C6.93783 21.6666 3.40612 19.3962 1.60449 16.0549L5.0612 13.2253C5.96199 15.6294 8.28112 17.3408 11 17.3408C12.1686 17.3408 13.2634 17.0249 14.2029 16.4734L17.6772 19.3176Z" fill="#28B446"/>
<path d="M17.8085 2.78892L14.353 5.61792C13.3807 5.01017 12.2313 4.65908 11 4.65908C8.21963 4.65908 5.85713 6.44896 5.00146 8.93925L1.52658 6.09442H1.526C3.30125 2.67171 6.8775 0.333252 11 0.333252C13.5881 0.333252 15.9612 1.25517 17.8085 2.78892Z" fill="#F14336"/>
</svg>


1
addons/dashboard/dist/favicon.svg vendored Normal file
View File

@@ -0,0 +1 @@
<svg t="1702013028016" class="icon" viewBox="0 0 1024 1024" version="1.1" xmlns="http://www.w3.org/2000/svg" p-id="1541" width="200" height="200"><path d="M0 0m204.8 0l614.4 0q204.8 0 204.8 204.8l0 614.4q0 204.8-204.8 204.8l-614.4 0q-204.8 0-204.8-204.8l0-614.4q0-204.8 204.8-204.8Z" fill="#FFEC9C" p-id="1542"></path><path d="M819.2 0H534.272A756.48 756.48 0 0 0 0 483.584V819.2a204.8 204.8 0 0 0 204.8 204.8h614.4a204.8 204.8 0 0 0 204.8-204.8V204.8a204.8 204.8 0 0 0-204.8-204.8z" fill="#FFE98A" p-id="1543"></path><path d="M819.2 0h-3.84a755.2 755.2 0 0 0-539.392 1024H819.2a204.8 204.8 0 0 0 204.8-204.8V204.8a204.8 204.8 0 0 0-204.8-204.8z" fill="#FFE471" p-id="1544"></path><path d="M497.152 721.152A752.384 752.384 0 0 0 560.384 1024H819.2a204.8 204.8 0 0 0 204.8-204.8V204.8a204.8 204.8 0 0 0-89.088-168.96 755.2 755.2 0 0 0-437.76 685.312z" fill="#FFE161" p-id="1545"></path><path d="M526.08 140.032l98.304 199.168L844.8 371.2a15.616 15.616 0 0 1 8.704 25.6l-159.744 156.16 37.632 219.136a15.616 15.616 0 0 1-22.528 16.384l-196.608-102.4-196.608 102.4a15.616 15.616 0 0 1-22.528-16.384l37.12-219.136-159.232-155.136a15.616 15.616 0 0 1 8.704-25.6l219.904-32 98.304-199.168a15.616 15.616 0 0 1 28.16-1.024z" fill="#FFF5CC" p-id="1546"></path><path d="M665.6 409.6a444.16 444.16 0 0 0 25.6-61.44l-65.536-9.472-99.584-198.656a15.616 15.616 0 0 0-27.904 0l-98.304 199.168L179.2 371.2a15.616 15.616 0 0 0-8.704 25.6l159.744 156.16-15.104 87.04A407.808 407.808 0 0 0 665.6 409.6z" fill="#FFFFFF" p-id="1547"></path></svg>


21
addons/dashboard/dist/index.html vendored Normal file
View File

@@ -0,0 +1,21 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<link rel="icon" href="/favicon.svg" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<meta name="keywords" content="AstrBot Soulter" />
<meta name="description" content="AstrBot Dashboard" />
<link
rel="stylesheet"
href="https://fonts.googleapis.com/css2?family=Inter:wght@400;500;600;700&family=Poppins:wght@400;500;600;700&family=Roboto:wght@400;500;700&display=swap"
/>
<title>AstrBot - 仪表盘</title>
<script type="module" crossorigin src="/assets/index-7c8bc001.js"></script>
<link rel="stylesheet" href="/assets/index-0f1523f3.css">
</head>
<body>
<div id="app"></div>
</body>
</html>

526
addons/dashboard/helper.py Normal file
View File

@@ -0,0 +1,526 @@
from addons.dashboard.server import AstrBotDashBoard, DashBoardData
from pydantic import BaseModel
from typing import Union, Optional
import uuid
from util import general_utils as gu
from util.cmd_config import CmdConfig
from dataclasses import dataclass
import sys
import os
import threading
import time
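# shutdown_bot() effectively restarts the bot rather than stopping it: after sleeping
# delay_s seconds it calls os.execl(), replacing the current process with a fresh
# interpreter running the same argv, so configuration saved just before the call is
# reloaded on startup.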
def shutdown_bot(delay_s: int):
time.sleep(delay_s)
py = sys.executable
os.execl(py, py, *sys.argv)
@dataclass
class DashBoardConfig():
config_type: str
name: Optional[str] = None
description: Optional[str] = None
path: Optional[str] = None # 仅 item 才需要
body: Optional[list['DashBoardConfig']] = None # 仅 group 才需要
value: Optional[Union[list, dict, str, int, bool]] = None # 仅 item 才需要
val_type: Optional[str] = None # 仅 item 才需要
class DashBoardHelper():
def __init__(self, dashboard_data: DashBoardData, config: dict):
dashboard_data.configs = {
"data": []
}
self.parse_default_config(dashboard_data, config)
self.dashboard_data: DashBoardData = dashboard_data
self.dashboard = AstrBotDashBoard(self.dashboard_data)
self.key_map = {} # key: uuid, value: config key name
self.cc = CmdConfig()
@self.dashboard.register("post_configs")
def on_post_configs(post_configs: dict):
try:
gu.log(f"收到配置更新请求", gu.LEVEL_INFO, tag="可视化面板")
self.save_config(post_configs)
self.parse_default_config(self.dashboard_data, self.cc.get_all())
# 重启
threading.Thread(target=shutdown_bot, args=(2,), daemon=True).start()
except Exception as e:
gu.log(f"在保存配置时发生错误:{e}", gu.LEVEL_ERROR, tag="可视化面板")
raise e
# 将 config.yaml、 中的配置解析到 dashboard_data.configs 中
def parse_default_config(self, dashboard_data: DashBoardData, config: dict):
try:
bot_platform_group = DashBoardConfig(
config_type="group",
name="机器人平台配置",
description="机器人平台配置描述",
body=[
DashBoardConfig(
config_type="item",
val_type="bool",
name="启用 QQ 频道平台",
description="就是你想到的那个 QQ 频道平台。详见 q.qq.com",
value=config['qqbot']['enable'],
path="qqbot.enable",
),
DashBoardConfig(
config_type="item",
val_type="string",
name="QQ机器人APPID",
description="详见 q.qq.com",
value=config['qqbot']['appid'],
path="qqbot.appid",
),
DashBoardConfig(
config_type="item",
val_type="string",
name="QQ机器人令牌",
description="详见 q.qq.com",
value=config['qqbot']['token'],
path="qqbot.token",
),
DashBoardConfig(
config_type="item",
val_type="string",
name="QQ机器人 Secret",
description="详见 q.qq.com",
value=config['qqbot_secret'],
path="qqbot_secret",
),
DashBoardConfig(
config_type="divider"
),
DashBoardConfig(
config_type="item",
val_type="bool",
name="启用 GO-CQHTTP 平台",
description="gocq 是一个基于 HTTP 协议的 CQHTTP 协议的实现。详见 github.com/Mrs4s/go-cqhttp",
value=config['gocqbot']['enable'],
path="gocqbot.enable",
)
]
)
proxy_group = DashBoardConfig(
config_type="group",
name="代理配置",
description="代理配置描述",
body=[
DashBoardConfig(
config_type="item",
val_type="string",
name="HTTP 代理地址",
description="建议上下一致",
value=config['http_proxy'],
path="proxy",
),
DashBoardConfig(
config_type="item",
val_type="string",
name="HTTPS 代理地址",
description="建议上下一致",
value=config['https_proxy'],
path="proxy",
)
]
)
general_platform_detail_group = DashBoardConfig(
config_type="group",
name="通用平台配置",
description="",
body=[
DashBoardConfig(
config_type="item",
val_type="bool",
name="启动消息文字转图片",
description="启动后,机器人会将消息转换为图片发送,以降低风控风险。",
value=config['qq_pic_mode'],
path="qq_pic_mode",
),
DashBoardConfig(
config_type="item",
val_type="int",
name="消息限制时间",
description="在此时间内,机器人不会回复同一个用户的消息。单位:秒",
value=config['limit']['time'],
path="limit.time",
),
DashBoardConfig(
config_type="item",
val_type="int",
name="消息限制次数",
description="在上面的时间内,如果用户发送消息超过此次数,则机器人不会回复。单位:次",
value=config['limit']['count'],
path="limit.count",
),
DashBoardConfig(
config_type="item",
val_type="string",
name="回复前缀",
description="[xxxx] 你好! 其中xxxx是你可以填写的前缀。如果为空则不显示。",
value=config['reply_prefix'],
path="reply_prefix",
),
DashBoardConfig(
config_type="item",
val_type="string",
name="管理员用户 ID",
description="对机器人 !myid 即可获得。如果此功能不可用,请加群 322154837",
value=config['gocq_qqchan_admin'],
path="gocq_qqchan_admin",
),
DashBoardConfig(
config_type="item",
val_type="list",
name="通用管理员用户 ID同上此项支持多个管理员",
description="",
value=config['other_admins'],
path="other_admins",
),
DashBoardConfig(
config_type="item",
val_type="bool",
name="独立会话",
description="是否启用独立会话模式,即 1 个用户自然账号 1 个会话。",
value=config['uniqueSessionMode'],
path="uniqueSessionMode",
),
DashBoardConfig(
config_type="item",
val_type="bool",
name="是否允许 QQ 频道私聊",
description="仅针对 QQ 频道 SDK而非 GO-CQHTTP。如果启用那么机器人会响应私聊消息。",
value=config['direct_message_mode'],
path="direct_message_mode",
),
]
)
gocq_platform_detail_group = DashBoardConfig(
config_type="group",
name="GO-CQHTTP 平台配置",
description="",
body=[
DashBoardConfig(
config_type="item",
val_type="string",
name="HTTP 服务器地址",
description="",
value=config['gocq_host'],
path="gocq_host",
),
DashBoardConfig(
config_type="item",
val_type="int",
name="HTTP 服务器端口",
description="",
value=config['gocq_http_port'],
path="gocq_http_port",
),
DashBoardConfig(
config_type="item",
val_type="int",
name="WebSocket 服务器端口",
description="",
value=config['gocq_websocket_port'],
path="gocq_websocket_port",
),
DashBoardConfig(
config_type="item",
val_type="bool",
name="是否响应群消息",
description="",
value=config['gocq_react_group'],
path="gocq_react_group",
),
DashBoardConfig(
config_type="item",
val_type="bool",
name="是否响应私聊消息",
description="",
value=config['gocq_react_friend'],
path="gocq_react_friend",
),
DashBoardConfig(
config_type="item",
val_type="bool",
name="是否响应群成员增加消息",
description="",
value=config['gocq_react_group_increase'],
path="gocq_react_group_increase",
),
DashBoardConfig(
config_type="item",
val_type="bool",
name="是否响应频道消息",
description="",
value=config['gocq_react_guild'],
path="gocq_react_guild",
),
DashBoardConfig(
config_type="item",
val_type="int",
name="转发阈值(字符数)",
description="机器人回复的消息长度超出这个值后,会被折叠成转发卡片发出以减少刷屏。",
value=config['qq_forward_threshold'],
path="qq_forward_threshold",
),
]
)
llm_group = DashBoardConfig(
config_type="group",
name="LLM 配置",
description="",
body=[
DashBoardConfig(
config_type="item",
val_type="list",
name="OpenAI API KEY",
description="OpenAI API 的 KEY。支持使用非官方但是兼容的 API。",
value=config['openai']['key'],
path="openai.key",
),
DashBoardConfig(
config_type="item",
val_type="string",
name="OpenAI API 节点地址",
description="OpenAI API 的节点地址,配合非官方 API 使用。如果不想填写,那么请填写 none",
value=config['openai']['api_base'],
path="openai.api_base",
),
DashBoardConfig(
config_type="item",
val_type="string",
name="OpenAI 模型",
description="OpenAI 模型。详见 https://platform.openai.com/docs/api-reference/chat",
value=config['openai']['chatGPTConfigs']['model'],
path="openai.chatGPTConfigs.model",
),
DashBoardConfig(
config_type="item",
val_type="int",
name="OpenAI 最大生成长度",
description="OpenAI 最大生成长度。详见 https://platform.openai.com/docs/api-reference/chat",
value=config['openai']['chatGPTConfigs']['max_tokens'],
path="openai.chatGPTConfigs.max_tokens",
),
DashBoardConfig(
config_type="item",
val_type="float",
name="OpenAI 温度",
description="OpenAI 温度。详见 https://platform.openai.com/docs/api-reference/chat",
value=config['openai']['chatGPTConfigs']['temperature'],
path="openai.chatGPTConfigs.temperature",
),
DashBoardConfig(
config_type="item",
val_type="float",
name="OpenAI top_p",
description="OpenAI top_p。详见 https://platform.openai.com/docs/api-reference/chat",
value=config['openai']['chatGPTConfigs']['top_p'],
path="openai.chatGPTConfigs.top_p",
),
DashBoardConfig(
config_type="item",
val_type="float",
name="OpenAI frequency_penalty",
description="OpenAI frequency_penalty。详见 https://platform.openai.com/docs/api-reference/chat",
value=config['openai']['chatGPTConfigs']['frequency_penalty'],
path="openai.chatGPTConfigs.frequency_penalty",
),
DashBoardConfig(
config_type="item",
val_type="float",
name="OpenAI presence_penalty",
description="OpenAI presence_penalty。详见 https://platform.openai.com/docs/api-reference/chat",
value=config['openai']['chatGPTConfigs']['presence_penalty'],
path="openai.chatGPTConfigs.presence_penalty",
),
DashBoardConfig(
config_type="item",
val_type="int",
name="OpenAI 总生成长度限制",
description="OpenAI 总生成长度限制。详见 https://platform.openai.com/docs/api-reference/chat",
value=config['openai']['total_tokens_limit'],
path="openai.total_tokens_limit",
),
DashBoardConfig(
config_type="item",
val_type="string",
name="OpenAI 图像生成模型",
description="OpenAI 图像生成模型。",
value=config['openai_image_generate']['model'],
path="openai_image_generate.model",
),
DashBoardConfig(
config_type="item",
val_type="string",
name="OpenAI 图像生成大小",
description="OpenAI 图像生成大小。",
value=config['openai_image_generate']['size'],
path="openai_image_generate.size",
),
DashBoardConfig(
config_type="item",
val_type="string",
name="OpenAI 图像生成风格",
description="OpenAI 图像生成风格。修改前请参考 OpenAI 官方文档",
value=config['openai_image_generate']['style'],
path="openai_image_generate.style",
),
DashBoardConfig(
config_type="item",
val_type="string",
name="OpenAI 图像生成质量",
description="OpenAI 图像生成质量。修改前请参考 OpenAI 官方文档",
value=config['openai_image_generate']['quality'],
path="openai_image_generate.quality",
),
DashBoardConfig(
config_type="item",
val_type="string",
name="大语言模型问题题首提示词",
description="如果填写了此项,在每个对大语言模型的请求中,都会在问题前加上此提示词。",
value=config['llm_env_prompt'],
path="llm_env_prompt",
),
]
)
baidu_aip_group = DashBoardConfig(
config_type="group",
name="百度内容审核",
description="需要去申请",
body=[
DashBoardConfig(
config_type="item",
val_type="bool",
name="启动百度内容审核服务",
description="",
value=config['baidu_aip']['enable'],
path="baidu_aip.enable"
),
# "app_id": null,
# "api_key": null,
# "secret_key": null
DashBoardConfig(
config_type="item",
val_type="string",
name="APP ID",
description="",
value=config['baidu_aip']['app_id'],
path="baidu_aip.app_id"
),
DashBoardConfig(
config_type="item",
val_type="string",
name="API KEY",
description="",
value=config['baidu_aip']['api_key'],
path="baidu_aip.api_key"
),
DashBoardConfig(
config_type="item",
val_type="string",
name="SECRET KEY",
description="",
value=config['baidu_aip']['secret_key'],
path="baidu_aip.secret_key"
)
]
)
other_group = DashBoardConfig(
config_type="group",
name="其他配置",
description="其他配置描述",
body=[
# 人格
DashBoardConfig(
config_type="item",
val_type="string",
name="默认人格文本",
description="默认人格文本",
value=config['default_personality_str'],
path="default_personality_str",
),
DashBoardConfig(
config_type="item",
val_type="string",
name="面板用户名",
description="是的,就是你理解的这个面板的用户名",
value=config['dashboard_username'],
path="dashboard_username",
),
]
)
dashboard_data.configs['data'] = [
bot_platform_group,
general_platform_detail_group,
gocq_platform_detail_group,
proxy_group,
llm_group,
other_group,
baidu_aip_group
]
except Exception as e:
gu.log(f"配置文件解析错误:{e}", gu.LEVEL_ERROR)
raise e
def save_config(self, post_config: dict):
'''
根据 path 解析并保存配置
'''
queue = []
for config in post_config['data']:
queue.append(config)
while len(queue) > 0:
config = queue.pop(0)
if config['config_type'] == "group":
for item in config['body']:
queue.append(item)
elif config['config_type'] == "item":
if config['path'] is None or config['path'] == "":
continue
path = config['path'].split('.')
if len(path) == 0:
continue
if config['val_type'] == "bool":
self.cc.put_by_dot_str(config['path'], config['value'])
elif config['val_type'] == "string":
self.cc.put_by_dot_str(config['path'], config['value'])
elif config['val_type'] == "int":
try:
self.cc.put_by_dot_str(config['path'], int(config['value']))
except (TypeError, ValueError):
raise ValueError(f"配置项 {config['name']} 的值必须是整数")
elif config['val_type'] == "float":
try:
self.cc.put_by_dot_str(config['path'], float(config['value']))
except (TypeError, ValueError):
raise ValueError(f"配置项 {config['name']} 的值必须是浮点数")
elif config['val_type'] == "list":
value = config['value'] if config['value'] is not None else []
if not isinstance(value, list):
raise ValueError(f"配置项 {config['name']} 的值必须是列表")
self.cc.put_by_dot_str(config['path'], value)
else:
raise NotImplementedError(f"未知或者未实现的配置项类型:{config['val_type']}")
def run(self):
self.dashboard.run()
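
A minimal sketch of the round trip handled above: the dashboard POSTs a group/item tree to /api/configs, and save_config() writes each item back by its dot-separated path. The payload values here are illustrative, and the in-memory put_by_dot_str() only stands in for CmdConfig.put_by_dot_str, which is not part of this diff.

# Illustrative stand-in for CmdConfig.put_by_dot_str: writes a value into a nested
# dict by a dot-separated path such as "qqbot.enable".
def put_by_dot_str(store: dict, path: str, value) -> None:
    keys = path.split(".")
    for key in keys[:-1]:
        store = store.setdefault(key, {})
    store[keys[-1]] = value

# Example payload in the shape parse_default_config() builds and the frontend posts back.
payload = {
    "data": [
        {
            "config_type": "group",
            "name": "机器人平台配置",
            "body": [
                {"config_type": "item", "val_type": "bool", "name": "启用 QQ 频道平台",
                 "path": "qqbot.enable", "value": True},
                {"config_type": "item", "val_type": "int", "name": "消息限制时间",
                 "path": "limit.time", "value": "60"},
            ],
        }
    ]
}

# Same breadth-first walk as save_config(): groups are expanded, items are written by path.
config: dict = {}
queue = list(payload["data"])
while queue:
    node = queue.pop(0)
    if node["config_type"] == "group":
        queue.extend(node["body"])
    elif node["config_type"] == "item":
        value = int(node["value"]) if node["val_type"] == "int" else node["value"]
        put_by_dot_str(config, node["path"], value)

print(config)  # {'qqbot': {'enable': True}, 'limit': {'time': 60}}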

233
addons/dashboard/server.py Normal file
View File

@@ -0,0 +1,233 @@
from flask import Flask, request
from flask.logging import default_handler
from werkzeug.serving import make_server
import datetime
from util import general_utils as gu
from dataclasses import dataclass
import logging
from cores.database.conn import dbConn
from util.cmd_config import CmdConfig
import util.plugin_util as putil
@dataclass
class DashBoardData():
stats: dict
configs: dict
logs: dict
plugins: list[dict]
@dataclass
class Response():
status: str
message: str
data: dict
class AstrBotDashBoard():
def __init__(self, dashboard_data: DashBoardData):
self.dashboard_data = dashboard_data
self.dashboard_be = Flask(__name__, static_folder="dist", static_url_path="/")
log = logging.getLogger('werkzeug')
log.setLevel(logging.ERROR)
self.funcs = {}
self.cc = CmdConfig()
@self.dashboard_be.get("/")
def index():
# 返回页面
return self.dashboard_be.send_static_file("index.html")
@self.dashboard_be.post("/api/authenticate")
def authenticate():
username = self.cc.get("dashboard_username", "")
password = self.cc.get("dashboard_password", "")
# 获得请求体
post_data = request.json
if post_data["username"] == username and post_data["password"] == password:
return Response(
status="success",
message="登录成功。",
data={
"token": "astrbot-test-token",
"username": username
}
).__dict__
else:
return Response(
status="error",
message="用户名或密码错误。",
data=None
).__dict__
@self.dashboard_be.post("/api/change_password")
def change_password():
password = self.cc.get("dashboard_password", "")
# 获得请求体
post_data = request.json
if post_data["password"] == password:
self.cc.put("dashboard_password", post_data["new_password"])
return Response(
status="success",
message="修改成功。",
data=None
).__dict__
else:
return Response(
status="error",
message="原密码错误。",
data=None
).__dict__
@self.dashboard_be.get("/api/stats")
def get_stats():
db_inst = dbConn()
all_session = db_inst.get_all_stat_session()
last_24_message = db_inst.get_last_24h_stat_message()
# last_24_platform = db_inst.get_last_24h_stat_platform()
platforms = db_inst.get_platform_cnt_total()
self.dashboard_data.stats["session"] = []
self.dashboard_data.stats["session_total"] = db_inst.get_session_cnt_total()
self.dashboard_data.stats["message"] = last_24_message
self.dashboard_data.stats["message_total"] = db_inst.get_message_cnt_total()
self.dashboard_data.stats["platform"] = platforms
return Response(
status="success",
message="",
data=self.dashboard_data.stats
).__dict__
@self.dashboard_be.get("/api/configs")
def get_configs():
return Response(
status="success",
message="",
data=self.dashboard_data.configs
).__dict__
@self.dashboard_be.post("/api/configs")
def post_configs():
post_configs = request.json
try:
self.funcs["post_configs"](post_configs)
return Response(
status="success",
message="保存成功~ 机器人将在 2 秒内重启以应用新的配置。",
data=None
).__dict__
except Exception as e:
return Response(
status="error",
message=e.__str__(),
data=self.dashboard_data.configs
).__dict__
@self.dashboard_be.get("/api/logs")
def get_logs():
return Response(
status="success",
message="",
data=self.dashboard_data.logs
).__dict__
@self.dashboard_be.get("/api/extensions")
def get_plugins():
"""
{
"name": "GoodPlugins",
"repo": "https://gitee.com/soulter/goodplugins",
"author": "soulter",
"desc": "一些好用的插件",
"version": "1.0"
}
"""
_plugin_resp = []
for plugin in self.dashboard_data.plugins:
_p = self.dashboard_data.plugins[plugin]
_t = {
"name": _p["info"]["name"],
"repo": '' if "repo" not in _p["info"] else _p["info"]["repo"],
"author": _p["info"]["author"],
"desc": _p["info"]["desc"],
"version": _p["info"]["version"]
}
_plugin_resp.append(_t)
return Response(
status="success",
message="",
data=_plugin_resp
).__dict__
@self.dashboard_be.post("/api/extensions/install")
def install_plugin():
post_data = request.json
repo_url = post_data["url"]
try:
gu.log(f"正在安装插件 {repo_url}", tag="可视化面板")
putil.install_plugin(repo_url, self.dashboard_data.plugins)
gu.log(f"安装插件 {repo_url} 成功", tag="可视化面板")
return Response(
status="success",
message="安装成功~",
data=None
).__dict__
except Exception as e:
return Response(
status="error",
message=e.__str__(),
data=None
).__dict__
@self.dashboard_be.post("/api/extensions/uninstall")
def uninstall_plugin():
post_data = request.json
plugin_name = post_data["name"]
try:
gu.log(f"正在卸载插件 {plugin_name}", tag="可视化面板")
putil.uninstall_plugin(plugin_name, self.dashboard_data.plugins)
gu.log(f"卸载插件 {plugin_name} 成功", tag="可视化面板")
return Response(
status="success",
message="卸载成功~",
data=None
).__dict__
except Exception as e:
return Response(
status="error",
message=e.__str__(),
data=None
).__dict__
@self.dashboard_be.post("/api/extensions/update")
def update_plugin():
post_data = request.json
plugin_name = post_data["name"]
try:
gu.log(f"正在更新插件 {plugin_name}", tag="可视化面板")
putil.update_plugin(plugin_name, self.dashboard_data.plugins)
gu.log(f"更新插件 {plugin_name} 成功", tag="可视化面板")
return Response(
status="success",
message="更新成功~",
data=None
).__dict__
except Exception as e:
return Response(
status="error",
message=e.__str__(),
data=None
).__dict__
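# register() keeps a small name -> callback registry in self.funcs; DashBoardHelper
# registers "post_configs" through it, and the /api/configs POST handler above
# dispatches the incoming payload to that callback.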
def register(self, name: str):
def decorator(func):
self.funcs[name] = func
return func
return decorator
def run(self):
ip_address = gu.get_local_ip_addresses()
ip_str = f"http://{ip_address}:6185\n\thttp://localhost:6185"
gu.log(f"\n\n==================\n您可以访问:\n\n\t{ip_str}\n\n来登录可视化面板。\n注意: 所有配置项现已全量迁移至 cmd_config.json 文件下。您可以登录可视化面板在线修改配置。\n==================\n\n", tag="可视化面板")
# self.dashboard_be.run(host="0.0.0.0", port=6185)
http_server = make_server('0.0.0.0', 6185, self.dashboard_be)
http_server.serve_forever()
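
A quick way to exercise these endpoints once the bot is running, as a sketch: localhost:6185 matches make_server() above, and the username/password placeholders must be replaced with whatever dashboard_username / dashboard_password hold in cmd_config.json.

import requests

BASE = "http://localhost:6185"

# Log in; /api/authenticate checks the credentials stored via CmdConfig.
login = requests.post(f"{BASE}/api/authenticate",
                      json={"username": "your-username", "password": "your-password"}).json()
print(login["status"], login["message"])

# List installed plugins, the same endpoint the ExtensionPage component calls.
plugins = requests.get(f"{BASE}/api/extensions").json()
for p in plugins["data"]:
    print(p["name"], p["version"], p["author"], p["repo"])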

View File

@@ -0,0 +1,5 @@
# helloworld
QQChannelChatGPT项目的测试插件
A test plugin for QQChannelChatGPT plugin feature

View File

@@ -0,0 +1,66 @@
from nakuru.entities.components import *
from nakuru import (
GroupMessage,
FriendMessage
)
from botpy.message import Message, DirectMessage
from model.platform.qq import QQ
from cores.qqbot.global_object import (
AstrMessageEvent,
CommandResult
)
'''
注意改插件名噢,格式:XXXPlugin 或 Main
小提示:把此模板仓库 fork 之后 clone 到机器人文件夹下的 addons/plugins/ 目录下,然后用 Pycharm/VSC 等工具打开可获更棒的编程体验(自动补全等)
'''
class HelloWorldPlugin:
"""
初始化函数, 可以选择直接pass
"""
def __init__(self) -> None:
print("hello, world!")
"""
机器人程序会调用此函数。
返回规范: bool: 插件是否响应该消息 (所有的消息均会调用每一个载入的插件, 如果不响应, 则应返回 False)
Tuple: None 或者长度为 3 的元组。如果不响应, 返回 None;如果响应, 第 1 个参数为指令是否调用成功, 第 2 个参数为返回的消息链列表, 第 3 个参数为指令名称
例子:一个名为"yuanshen"的插件;当接收到消息为“原神 可莉”时, 如果不想要处理此消息, 则返回 False, None;如果想要处理但是执行失败了, 返回 True, tuple([False, "请求失败。", "yuanshen"]);执行成功了, 返回 True, tuple([True, "结果文本", "yuanshen"])
"""
def run(self, ame: AstrMessageEvent):
if ame.message_str == "helloworld":
# return True, tuple([True, "Hello World!!", "helloworld"])
return CommandResult(
hit=True,
success=True,
message_chain=[Plain("Hello World!!")],
command_name="helloworld"
)
else:
return CommandResult(
hit=False,
success=False,
message_chain=None,
command_name=None
)
"""
插件元信息。
当用户输入 plugin v 插件名称 时,会调用此函数,返回帮助信息。
返回参数要求(必填)dict{
"name": str, # 插件名称
"desc": str, # 插件简短描述
"help": str, # 插件帮助信息
"version": str, # 插件版本
"author": str, # 插件作者
"repo": str, # 插件仓库地址 [ 可选 ]
"homepage": str, # 插件主页 [ 可选 ]
}
"""
def info(self):
return {
"name": "helloworld",
"desc": "测试插件",
"help": "测试插件, 回复 helloworld 即可触发",
"version": "v1.2",
"author": "Soulter"
}
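
To make the return convention above concrete, a small driver sketch. It assumes it runs inside the bot environment (so the imports at the top of this template resolve) and that CommandResult exposes its constructor fields as attributes; FakeEvent is a test stand-in, since the plugin only reads message_str from the event.

class FakeEvent:
    # Test stand-in for AstrMessageEvent; HelloWorldPlugin.run() only reads message_str.
    def __init__(self, message_str: str):
        self.message_str = message_str

plugin = HelloWorldPlugin()
result = plugin.run(FakeEvent("helloworld"))
if result.hit and result.success:
    # Expected: "helloworld" and a message chain like [Plain("Hello World!!")]
    print(result.command_name, result.message_chain)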

View File

@@ -1,45 +0,0 @@
from revChatGPT.V1 import Chatbot
class revChatGPT:
def __init__(self, config):
if 'password' in config:
config['password'] = str(config['password'])
self.chatbot = Chatbot(config=config)
def chat(self, prompt):
resp = ''
"""
Base class for exceptions in this module.
Error codes:
-1: User error
0: Unknown error
1: Server error
2: Rate limit error
3: Invalid request error
4: Expired access token error
5: Invalid access token error
6: Prohibited concurrent query error
"""
err_count = 0
retry_count = 5
while err_count < retry_count:
try:
for data in self.chatbot.ask(prompt):
resp = data["message"]
break
except BaseException as e:
try:
print("[RevChatGPT] 请求出现了一些问题, 正在重试。次数"+str(err_count))
err_count += 1
if err_count >= retry_count:
raise e
except BaseException:
err_count += 1
print("[RevChatGPT] "+str(resp))
return resp

View File

@@ -1,47 +0,0 @@
import asyncio
from EdgeGPT import Chatbot, ConversationStyle
import json
class revEdgeGPT:
def __init__(self):
self.busy = False
self.wait_stack = []
with open('./cookies.json', 'r') as f:
cookies = json.load(f)
self.bot = Chatbot(cookies=cookies)
def is_busy(self):
return self.busy
async def reset(self):
try:
await self.bot.reset()
return False
except BaseException:
return True
async def chat(self, prompt):
if self.busy:
return
self.busy = True
resp = 'err'
err_count = 0
retry_count = 5
while err_count < retry_count:
try:
resp = await self.bot.ask(prompt=prompt, conversation_style=ConversationStyle.creative)
resp = resp['item']['messages'][len(resp['item']['messages'])-1]['text']
if resp == prompt:
resp += '\n\n如果你没有让我复述你的话,那代表我可能不想和你继续这个话题了,请输入/reset重置会话😶'
break
except BaseException as e:
print(e.with_traceback)
err_count += 1
if err_count >= retry_count:
raise e
print("[RevEdgeGPT] 请求出现了一些问题, 正在重试。次数"+str(err_count))
self.busy = False
print("[RevEdgeGPT] "+str(resp))
return resp

323
botpy.log
View File

@@ -1,323 +0,0 @@
2022-12-08 14:29:09,486 [INFO] (client.py:159)_bot_login [botpy] 登录机器人账号中...
2022-12-08 14:29:10,173 [INFO] (client.py:178)_bot_init [botpy] 程序启动...
2022-12-08 14:29:10,174 [INFO] (connection.py:59)multi_run [botpy] 最大并发连接数: 1, 启动会话数: 1
2022-12-08 14:29:10,175 [INFO] (client.py:236)bot_connect [botpy] 会话启动中...
2022-12-08 14:29:10,175 [INFO] (gateway.py:110)ws_connect [botpy] 启动中...
2022-12-08 14:29:10,335 [INFO] (gateway.py:136)ws_identify [botpy] 鉴权中...
2022-12-08 14:29:10,460 [INFO] (gateway.py:80)on_message [botpy] 机器人「SoGPT-测试中」启动成功!
2022-12-08 14:29:10,461 [INFO] (gateway.py:217)_send_heart [botpy] 心跳维持启动...
2022-12-08 16:17:18,117 [INFO] (client.py:159)_bot_login [botpy] 登录机器人账号中...
2022-12-08 16:17:18,355 [ERROR] (http.py:73)_handle_response [botpy] 接口请求异常,请求连接: https://api.sgroup.qq.com/users/@me, 错误代码: 401, 返回内容: {'message': 'wrong bot token', 'code': 11242}, trace_id:829d8c60d296a3edcfac3d776dbd8b47
2022-12-08 16:18:59,759 [INFO] (client.py:159)_bot_login [botpy] 登录机器人账号中...
2022-12-08 16:19:00,266 [INFO] (client.py:178)_bot_init [botpy] 程序启动...
2022-12-08 16:19:00,267 [INFO] (connection.py:59)multi_run [botpy] 最大并发连接数: 1, 启动会话数: 1
2022-12-08 16:19:00,268 [INFO] (client.py:236)bot_connect [botpy] 会话启动中...
2022-12-08 16:19:00,268 [INFO] (gateway.py:110)ws_connect [botpy] 启动中...
2022-12-08 16:19:00,412 [INFO] (gateway.py:136)ws_identify [botpy] 鉴权中...
2022-12-08 16:19:00,522 [INFO] (gateway.py:80)on_message [botpy] 机器人「SoGPT-测试中」启动成功!
2022-12-08 16:19:00,522 [INFO] (gateway.py:217)_send_heart [botpy] 心跳维持启动...
2022-12-08 16:20:14,446 [INFO] (client.py:159)_bot_login [botpy] 登录机器人账号中...
2022-12-08 16:20:15,035 [INFO] (client.py:178)_bot_init [botpy] 程序启动...
2022-12-08 16:20:15,036 [INFO] (connection.py:59)multi_run [botpy] 最大并发连接数: 1, 启动会话数: 1
2022-12-08 16:20:15,037 [INFO] (client.py:236)bot_connect [botpy] 会话启动中...
2022-12-08 16:20:15,037 [INFO] (gateway.py:110)ws_connect [botpy] 启动中...
2022-12-08 16:20:15,232 [INFO] (gateway.py:136)ws_identify [botpy] 鉴权中...
2022-12-08 16:20:15,320 [INFO] (gateway.py:80)on_message [botpy] 机器人「SoGPT-测试中」启动成功!
2022-12-08 16:20:15,321 [INFO] (gateway.py:217)_send_heart [botpy] 心跳维持启动...
2022-12-08 16:42:06,957 [INFO] (client.py:159)_bot_login [botpy] 登录机器人账号中...
2022-12-08 16:42:07,468 [INFO] (client.py:178)_bot_init [botpy] 程序启动...
2022-12-08 16:42:07,469 [INFO] (connection.py:59)multi_run [botpy] 最大并发连接数: 1, 启动会话数: 1
2022-12-08 16:42:07,469 [INFO] (client.py:236)bot_connect [botpy] 会话启动中...
2022-12-08 16:42:07,470 [INFO] (gateway.py:110)ws_connect [botpy] 启动中...
2022-12-08 16:42:07,672 [INFO] (gateway.py:136)ws_identify [botpy] 鉴权中...
2022-12-08 16:42:07,862 [INFO] (gateway.py:80)on_message [botpy] 机器人「SoGPT-测试中」启动成功!
2022-12-08 16:42:07,863 [INFO] (gateway.py:217)_send_heart [botpy] 心跳维持启动...
2022-12-08 16:45:44,758 [INFO] (client.py:159)_bot_login [botpy] 登录机器人账号中...
2022-12-08 16:45:45,439 [INFO] (client.py:178)_bot_init [botpy] 程序启动...
2022-12-08 16:45:45,440 [INFO] (connection.py:59)multi_run [botpy] 最大并发连接数: 1, 启动会话数: 1
2022-12-08 16:45:45,441 [INFO] (client.py:236)bot_connect [botpy] 会话启动中...
2022-12-08 16:45:45,441 [INFO] (gateway.py:110)ws_connect [botpy] 启动中...
2022-12-08 16:45:45,751 [INFO] (gateway.py:136)ws_identify [botpy] 鉴权中...
2022-12-08 16:45:45,858 [INFO] (gateway.py:80)on_message [botpy] 机器人「SoGPT-测试中」启动成功!
2022-12-08 16:45:45,859 [INFO] (gateway.py:217)_send_heart [botpy] 心跳维持启动...
2022-12-08 16:47:16,567 [INFO] (client.py:159)_bot_login [botpy] 登录机器人账号中...
2022-12-08 16:47:17,008 [INFO] (client.py:178)_bot_init [botpy] 程序启动...
2022-12-08 16:47:17,009 [INFO] (connection.py:59)multi_run [botpy] 最大并发连接数: 1, 启动会话数: 1
2022-12-08 16:47:17,009 [INFO] (client.py:236)bot_connect [botpy] 会话启动中...
2022-12-08 16:47:17,010 [INFO] (gateway.py:110)ws_connect [botpy] 启动中...
2022-12-08 16:47:17,187 [INFO] (gateway.py:136)ws_identify [botpy] 鉴权中...
2022-12-08 16:47:17,284 [INFO] (gateway.py:80)on_message [botpy] 机器人「SoGPT-测试中」启动成功!
2022-12-08 16:47:17,285 [INFO] (gateway.py:217)_send_heart [botpy] 心跳维持启动...
2022-12-08 16:48:39,358 [INFO] (client.py:159)_bot_login [botpy] 登录机器人账号中...
2022-12-08 16:48:45,366 [WARNING] (http.py:188)request 请求超时,请求连接: https://api.sgroup.qq.com/gateway/bot
2022-12-08 16:48:53,301 [INFO] (client.py:159)_bot_login [botpy] 登录机器人账号中...
2022-12-08 16:48:53,789 [INFO] (client.py:178)_bot_init [botpy] 程序启动...
2022-12-08 16:48:53,790 [INFO] (connection.py:59)multi_run [botpy] 最大并发连接数: 1, 启动会话数: 1
2022-12-08 16:48:53,791 [INFO] (client.py:236)bot_connect [botpy] 会话启动中...
2022-12-08 16:48:53,791 [INFO] (gateway.py:110)ws_connect [botpy] 启动中...
2022-12-08 16:48:53,969 [INFO] (gateway.py:136)ws_identify [botpy] 鉴权中...
2022-12-08 16:48:54,062 [INFO] (gateway.py:80)on_message [botpy] 机器人「SoGPT-测试中」启动成功!
2022-12-08 16:48:54,063 [INFO] (gateway.py:217)_send_heart [botpy] 心跳维持启动...
2022-12-08 16:49:26,466 [ERROR] (http.py:73)_handle_response [botpy] 接口请求异常,请求连接: https://api.sgroup.qq.com/channels/7150805/messages, 错误代码: 403, 返回内容: {'code': 304003, 'message': 'url not allowed'}, trace_id:b201dd7b37649dcf47b82d54c58bc5dd
2022-12-08 16:51:59,155 [INFO] (client.py:159)_bot_login [botpy] 登录机器人账号中...
2022-12-08 16:51:59,728 [INFO] (client.py:178)_bot_init [botpy] 程序启动...
2022-12-08 16:51:59,729 [INFO] (connection.py:59)multi_run [botpy] 最大并发连接数: 1, 启动会话数: 1
2022-12-08 16:51:59,730 [INFO] (client.py:236)bot_connect [botpy] 会话启动中...
2022-12-08 16:51:59,730 [INFO] (gateway.py:110)ws_connect [botpy] 启动中...
2022-12-08 16:51:59,887 [INFO] (gateway.py:136)ws_identify [botpy] 鉴权中...
2022-12-08 16:52:00,022 [INFO] (gateway.py:80)on_message [botpy] 机器人「SoGPT-测试中」启动成功!
2022-12-08 16:52:00,023 [INFO] (gateway.py:217)_send_heart [botpy] 心跳维持启动...
2022-12-08 16:52:28,760 [ERROR] (http.py:73)_handle_response [botpy] 接口请求异常,请求连接: https://api.sgroup.qq.com/channels/7150805/messages, 错误代码: 403, 返回内容: {'code': 304003, 'message': 'url not allowed'}, trace_id:e62b072c9f0184f6bf7dd03a00fc0ef3
2022-12-08 16:53:53,370 [INFO] (client.py:159)_bot_login [botpy] 登录机器人账号中...
2022-12-08 16:53:53,890 [INFO] (client.py:178)_bot_init [botpy] 程序启动...
2022-12-08 16:53:53,891 [INFO] (connection.py:59)multi_run [botpy] 最大并发连接数: 1, 启动会话数: 1
2022-12-08 16:53:53,892 [INFO] (client.py:236)bot_connect [botpy] 会话启动中...
2022-12-08 16:53:53,892 [INFO] (gateway.py:110)ws_connect [botpy] 启动中...
2022-12-08 16:53:54,101 [INFO] (gateway.py:136)ws_identify [botpy] 鉴权中...
2022-12-08 16:53:54,194 [INFO] (gateway.py:80)on_message [botpy] 机器人「SoGPT-测试中」启动成功!
2022-12-08 16:53:54,195 [INFO] (gateway.py:217)_send_heart [botpy] 心跳维持启动...
2022-12-08 16:54:26,287 [ERROR] (http.py:73)_handle_response [botpy] 接口请求异常,请求连接: https://api.sgroup.qq.com/channels/7150805/messages, 错误代码: 501, 返回内容: None, trace_id:None
2022-12-08 16:55:01,062 [INFO] (client.py:159)_bot_login [botpy] 登录机器人账号中...
2022-12-08 16:55:01,601 [INFO] (client.py:178)_bot_init [botpy] 程序启动...
2022-12-08 16:55:01,601 [INFO] (connection.py:59)multi_run [botpy] 最大并发连接数: 1, 启动会话数: 1
2022-12-08 16:55:01,602 [INFO] (client.py:236)bot_connect [botpy] 会话启动中...
2022-12-08 16:55:01,603 [INFO] (gateway.py:110)ws_connect [botpy] 启动中...
2022-12-08 16:55:01,742 [INFO] (gateway.py:136)ws_identify [botpy] 鉴权中...
2022-12-08 16:55:01,816 [INFO] (gateway.py:80)on_message [botpy] 机器人「SoGPT-测试中」启动成功!
2022-12-08 16:55:01,817 [INFO] (gateway.py:217)_send_heart [botpy] 心跳维持启动...
2022-12-08 17:00:29,939 [INFO] (client.py:159)_bot_login [botpy] 登录机器人账号中...
2022-12-08 17:00:30,583 [INFO] (client.py:178)_bot_init [botpy] 程序启动...
2022-12-08 17:00:30,584 [INFO] (connection.py:59)multi_run [botpy] 最大并发连接数: 1, 启动会话数: 1
2022-12-08 17:00:30,585 [INFO] (client.py:236)bot_connect [botpy] 会话启动中...
2022-12-08 17:00:30,585 [INFO] (gateway.py:110)ws_connect [botpy] 启动中...
2022-12-08 17:00:30,839 [INFO] (gateway.py:136)ws_identify [botpy] 鉴权中...
2022-12-08 17:00:30,943 [INFO] (gateway.py:80)on_message [botpy] 机器人「SoGPT-测试中」启动成功!
2022-12-08 17:00:30,944 [INFO] (gateway.py:217)_send_heart [botpy] 心跳维持启动...
2022-12-08 17:00:53,548 [ERROR] (http.py:73)_handle_response [botpy] 接口请求异常,请求连接: https://api.sgroup.qq.com/channels/7150658/messages, 错误代码: 403, 返回内容: {'code': 304003, 'message': 'url not allowed'}, trace_id:2ea547d78364cea60583ed450e589a17
2022-12-08 17:10:31,405 [INFO] (client.py:159)_bot_login [botpy] 登录机器人账号中...
2022-12-08 17:10:32,081 [INFO] (client.py:178)_bot_init [botpy] 程序启动...
2022-12-08 17:10:32,082 [INFO] (connection.py:59)multi_run [botpy] 最大并发连接数: 1, 启动会话数: 1
2022-12-08 17:10:32,083 [INFO] (client.py:236)bot_connect [botpy] 会话启动中...
2022-12-08 17:10:32,083 [INFO] (gateway.py:110)ws_connect [botpy] 启动中...
2022-12-08 17:10:32,342 [INFO] (gateway.py:136)ws_identify [botpy] 鉴权中...
2022-12-08 17:10:32,449 [INFO] (gateway.py:80)on_message [botpy] 机器人「SoGPT-测试中」启动成功!
2022-12-08 17:10:32,450 [INFO] (gateway.py:217)_send_heart [botpy] 心跳维持启动...
2022-12-08 17:13:47,495 [INFO] (client.py:159)_bot_login [botpy] 登录机器人账号中...
2022-12-08 17:13:47,993 [INFO] (client.py:178)_bot_init [botpy] 程序启动...
2022-12-08 17:13:47,994 [INFO] (connection.py:59)multi_run [botpy] 最大并发连接数: 1, 启动会话数: 1
2022-12-08 17:13:47,995 [INFO] (client.py:236)bot_connect [botpy] 会话启动中...
2022-12-08 17:13:47,995 [INFO] (gateway.py:110)ws_connect [botpy] 启动中...
2022-12-08 17:13:48,198 [INFO] (gateway.py:136)ws_identify [botpy] 鉴权中...
2022-12-08 17:13:48,300 [INFO] (gateway.py:80)on_message [botpy] 机器人「SoGPT-测试中」启动成功!
2022-12-08 17:13:48,302 [INFO] (gateway.py:217)_send_heart [botpy] 心跳维持启动...
2022-12-08 17:16:31,326 [ERROR] (http.py:73)_handle_response [botpy] 接口请求异常,请求连接: https://api.sgroup.qq.com/channels/7150658/messages, 错误代码: 501, 返回内容: None, trace_id:None
2022-12-08 17:22:03,924 [INFO] (client.py:159)_bot_login [botpy] 登录机器人账号中...
2022-12-08 17:22:04,671 [INFO] (client.py:178)_bot_init [botpy] 程序启动...
2022-12-08 17:22:04,672 [INFO] (connection.py:59)multi_run [botpy] 最大并发连接数: 1, 启动会话数: 1
2022-12-08 17:22:04,673 [INFO] (client.py:236)bot_connect [botpy] 会话启动中...
2022-12-08 17:22:04,673 [INFO] (gateway.py:110)ws_connect [botpy] 启动中...
2022-12-08 17:22:04,856 [INFO] (gateway.py:136)ws_identify [botpy] 鉴权中...
2022-12-08 17:22:04,958 [INFO] (gateway.py:80)on_message [botpy] 机器人「SoGPT-测试中」启动成功!
2022-12-08 17:22:04,959 [INFO] (gateway.py:217)_send_heart [botpy] 心跳维持启动...
2022-12-08 17:52:04,868 [INFO] (gateway.py:54)on_closed [botpy] 关闭, 返回码: 4009, 返回信息: Session timed out
2022-12-08 17:52:09,874 [INFO] (connection.py:59)multi_run [botpy] 最大并发连接数: 1, 启动会话数: 1
2022-12-08 17:52:09,876 [INFO] (client.py:236)bot_connect [botpy] 会话启动中...
2022-12-08 17:52:09,877 [INFO] (gateway.py:110)ws_connect [botpy] 启动中...
2022-12-08 17:52:10,104 [INFO] (gateway.py:169)ws_resume [botpy] 重连启动...
2022-12-08 17:52:10,170 [INFO] (gateway.py:85)on_message [botpy] 机器人重连成功!
2022-12-08 17:52:10,171 [INFO] (gateway.py:217)_send_heart [botpy] 心跳维持启动...
2022-12-08 18:00:00,757 [INFO] (client.py:159)_bot_login [botpy] 登录机器人账号中...
2022-12-08 18:00:01,443 [INFO] (client.py:178)_bot_init [botpy] 程序启动...
2022-12-08 18:00:01,443 [INFO] (connection.py:59)multi_run [botpy] 最大并发连接数: 1, 启动会话数: 1
2022-12-08 18:00:01,444 [INFO] (client.py:236)bot_connect [botpy] 会话启动中...
2022-12-08 18:00:01,445 [INFO] (gateway.py:110)ws_connect [botpy] 启动中...
2022-12-08 18:00:01,602 [INFO] (gateway.py:136)ws_identify [botpy] 鉴权中...
2022-12-08 18:00:01,823 [INFO] (gateway.py:80)on_message [botpy] 机器人「SoGPT-测试中」启动成功!
2022-12-08 18:00:01,824 [INFO] (gateway.py:217)_send_heart [botpy] 心跳维持启动...
2022-12-08 18:03:13,211 [INFO] (client.py:159)_bot_login [botpy] 登录机器人账号中...
2022-12-08 18:03:14,082 [INFO] (client.py:178)_bot_init [botpy] 程序启动...
2022-12-08 18:03:14,084 [INFO] (connection.py:59)multi_run [botpy] 最大并发连接数: 1, 启动会话数: 1
2022-12-08 18:03:14,084 [INFO] (client.py:236)bot_connect [botpy] 会话启动中...
2022-12-08 18:03:14,085 [INFO] (gateway.py:110)ws_connect [botpy] 启动中...
2022-12-08 18:03:14,289 [INFO] (gateway.py:136)ws_identify [botpy] 鉴权中...
2022-12-08 18:03:14,498 [INFO] (gateway.py:80)on_message [botpy] 机器人「SoGPT-测试中」启动成功!
2022-12-08 18:03:14,499 [INFO] (gateway.py:217)_send_heart [botpy] 心跳维持启动...
2022-12-08 18:16:29,246 [INFO] (client.py:159)_bot_login [botpy] 登录机器人账号中...
2022-12-08 18:16:30,054 [INFO] (client.py:178)_bot_init [botpy] 程序启动...
2022-12-08 18:16:30,055 [INFO] (connection.py:59)multi_run [botpy] 最大并发连接数: 1, 启动会话数: 1
2022-12-08 18:16:30,056 [INFO] (client.py:236)bot_connect [botpy] 会话启动中...
2022-12-08 18:16:30,056 [INFO] (gateway.py:110)ws_connect [botpy] 启动中...
2022-12-08 18:16:30,293 [INFO] (gateway.py:136)ws_identify [botpy] 鉴权中...
2022-12-08 18:16:30,424 [INFO] (gateway.py:80)on_message [botpy] 机器人「SoGPT-测试中」启动成功!
2022-12-08 18:16:30,424 [INFO] (gateway.py:217)_send_heart [botpy] 心跳维持启动...
2022-12-08 18:16:41,898 [ERROR] (http.py:73)_handle_response [botpy] 接口请求异常,请求连接: https://api.sgroup.qq.com/channels/7150658/messages, 错误代码: 403, 返回内容: {'code': 304003, 'message': 'url not allowed'}, trace_id:3cc9fc95ac27cbbc1f1667f1aa11bf8d
2022-12-08 18:18:16,615 [INFO] (client.py:159)_bot_login [botpy] 登录机器人账号中...
2022-12-08 18:18:17,399 [INFO] (client.py:178)_bot_init [botpy] 程序启动...
2022-12-08 18:18:17,400 [INFO] (connection.py:59)multi_run [botpy] 最大并发连接数: 1, 启动会话数: 1
2022-12-08 18:18:17,400 [INFO] (client.py:236)bot_connect [botpy] 会话启动中...
2022-12-08 18:18:17,401 [INFO] (gateway.py:110)ws_connect [botpy] 启动中...
2022-12-08 18:18:17,606 [INFO] (gateway.py:136)ws_identify [botpy] 鉴权中...
2022-12-08 18:18:17,731 [INFO] (gateway.py:80)on_message [botpy] 机器人「SoGPT-测试中」启动成功!
2022-12-08 18:18:17,732 [INFO] (gateway.py:217)_send_heart [botpy] 心跳维持启动...
2022-12-08 18:19:07,357 [INFO] (client.py:159)_bot_login [botpy] 登录机器人账号中...
2022-12-08 18:19:07,941 [INFO] (client.py:178)_bot_init [botpy] 程序启动...
2022-12-08 18:19:07,942 [INFO] (connection.py:59)multi_run [botpy] 最大并发连接数: 1, 启动会话数: 1
2022-12-08 18:19:07,942 [INFO] (client.py:236)bot_connect [botpy] 会话启动中...
2022-12-08 18:19:07,943 [INFO] (gateway.py:110)ws_connect [botpy] 启动中...
2022-12-08 18:19:08,247 [INFO] (gateway.py:136)ws_identify [botpy] 鉴权中...
2022-12-08 18:19:08,335 [INFO] (gateway.py:80)on_message [botpy] 机器人「SoGPT-测试中」启动成功!
2022-12-08 18:19:08,336 [INFO] (gateway.py:217)_send_heart [botpy] 心跳维持启动...
2022-12-08 18:20:46,416 [INFO] (client.py:159)_bot_login [botpy] 登录机器人账号中...
2022-12-08 18:20:47,023 [INFO] (client.py:178)_bot_init [botpy] 程序启动...
2022-12-08 18:20:47,023 [INFO] (connection.py:59)multi_run [botpy] 最大并发连接数: 1, 启动会话数: 1
2022-12-08 18:20:47,024 [INFO] (client.py:236)bot_connect [botpy] 会话启动中...
2022-12-08 18:20:47,025 [INFO] (gateway.py:110)ws_connect [botpy] 启动中...
2022-12-08 18:20:47,475 [INFO] (gateway.py:136)ws_identify [botpy] 鉴权中...
2022-12-08 18:20:47,640 [INFO] (gateway.py:80)on_message [botpy] 机器人「SoGPT-测试中」启动成功!
2022-12-08 18:20:47,641 [INFO] (gateway.py:217)_send_heart [botpy] 心跳维持启动...
2022-12-08 18:21:24,437 [INFO] (client.py:159)_bot_login [botpy] 登录机器人账号中...
2022-12-08 18:21:24,991 [INFO] (client.py:178)_bot_init [botpy] 程序启动...
2022-12-08 18:21:24,992 [INFO] (connection.py:59)multi_run [botpy] 最大并发连接数: 1, 启动会话数: 1
2022-12-08 18:21:24,993 [INFO] (client.py:236)bot_connect [botpy] 会话启动中...
2022-12-08 18:21:24,993 [INFO] (gateway.py:110)ws_connect [botpy] 启动中...
2022-12-08 18:21:25,165 [INFO] (gateway.py:136)ws_identify [botpy] 鉴权中...
2022-12-08 18:21:25,265 [INFO] (gateway.py:80)on_message [botpy] 机器人「SoGPT-测试中」启动成功!
2022-12-08 18:21:25,266 [INFO] (gateway.py:217)_send_heart [botpy] 心跳维持启动...
2022-12-08 18:24:18,469 [INFO] (client.py:159)_bot_login [botpy] 登录机器人账号中...
2022-12-08 18:24:19,076 [INFO] (client.py:178)_bot_init [botpy] 程序启动...
2022-12-08 18:24:19,077 [INFO] (connection.py:59)multi_run [botpy] 最大并发连接数: 1, 启动会话数: 1
2022-12-08 18:24:19,077 [INFO] (client.py:236)bot_connect [botpy] 会话启动中...
2022-12-08 18:24:19,078 [INFO] (gateway.py:110)ws_connect [botpy] 启动中...
2022-12-08 18:24:19,252 [INFO] (gateway.py:136)ws_identify [botpy] 鉴权中...
2022-12-08 18:24:19,339 [INFO] (gateway.py:80)on_message [botpy] 机器人「SoGPT-测试中」启动成功!
2022-12-08 18:24:19,341 [INFO] (gateway.py:217)_send_heart [botpy] 心跳维持启动...
2022-12-08 18:27:13,898 [INFO] (client.py:159)_bot_login [botpy] 登录机器人账号中...
2022-12-08 18:27:14,553 [INFO] (client.py:178)_bot_init [botpy] 程序启动...
2022-12-08 18:27:14,554 [INFO] (connection.py:59)multi_run [botpy] 最大并发连接数: 1, 启动会话数: 1
2022-12-08 18:27:14,554 [INFO] (client.py:236)bot_connect [botpy] 会话启动中...
2022-12-08 18:27:14,555 [INFO] (gateway.py:110)ws_connect [botpy] 启动中...
2022-12-08 18:27:14,747 [INFO] (gateway.py:136)ws_identify [botpy] 鉴权中...
2022-12-08 18:27:14,850 [INFO] (gateway.py:80)on_message [botpy] 机器人「SoGPT-测试中」启动成功!
2022-12-08 18:27:14,851 [INFO] (gateway.py:217)_send_heart [botpy] 心跳维持启动...
2022-12-08 18:30:47,647 [INFO] (client.py:159)_bot_login [botpy] 登录机器人账号中...
2022-12-08 18:30:48,327 [INFO] (client.py:178)_bot_init [botpy] 程序启动...
2022-12-08 18:30:48,328 [INFO] (connection.py:59)multi_run [botpy] 最大并发连接数: 1, 启动会话数: 1
2022-12-08 18:30:48,329 [INFO] (client.py:236)bot_connect [botpy] 会话启动中...
2022-12-08 18:30:48,330 [INFO] (gateway.py:110)ws_connect [botpy] 启动中...
2022-12-08 18:30:48,736 [INFO] (gateway.py:136)ws_identify [botpy] 鉴权中...
2022-12-08 18:30:48,873 [INFO] (gateway.py:80)on_message [botpy] 机器人「SoGPT-测试中」启动成功!
2022-12-08 18:30:48,874 [INFO] (gateway.py:217)_send_heart [botpy] 心跳维持启动...
2022-12-08 18:31:57,250 [INFO] (client.py:159)_bot_login [botpy] 登录机器人账号中...
2022-12-08 18:31:57,955 [INFO] (client.py:178)_bot_init [botpy] 程序启动...
2022-12-08 18:31:57,956 [INFO] (connection.py:59)multi_run [botpy] 最大并发连接数: 1, 启动会话数: 1
2022-12-08 18:31:57,957 [INFO] (client.py:236)bot_connect [botpy] 会话启动中...
2022-12-08 18:31:57,957 [INFO] (gateway.py:110)ws_connect [botpy] 启动中...
2022-12-08 18:31:58,165 [INFO] (gateway.py:136)ws_identify [botpy] 鉴权中...
2022-12-08 18:31:58,269 [INFO] (gateway.py:80)on_message [botpy] 机器人「SoGPT-测试中」启动成功!
2022-12-08 18:31:58,270 [INFO] (gateway.py:217)_send_heart [botpy] 心跳维持启动...
2022-12-08 19:01:58,099 [INFO] (gateway.py:54)on_closed [botpy] 关闭, 返回码: 4009, 返回信息: Session timed out
2022-12-08 19:02:03,110 [INFO] (connection.py:59)multi_run [botpy] 最大并发连接数: 1, 启动会话数: 1
2022-12-08 19:02:03,111 [INFO] (client.py:236)bot_connect [botpy] 会话启动中...
2022-12-08 19:02:03,111 [INFO] (gateway.py:110)ws_connect [botpy] 启动中...
2022-12-08 19:02:03,314 [INFO] (gateway.py:169)ws_resume [botpy] 重连启动...
2022-12-08 19:02:03,381 [INFO] (gateway.py:85)on_message [botpy] 机器人重连成功!
2022-12-08 19:02:03,382 [INFO] (gateway.py:217)_send_heart [botpy] 心跳维持启动...
2022-12-08 19:32:03,274 [INFO] (gateway.py:54)on_closed [botpy] 关闭, 返回码: 4009, 返回信息: Session timed out
2022-12-08 19:32:08,271 [INFO] (connection.py:59)multi_run [botpy] 最大并发连接数: 1, 启动会话数: 1
2022-12-08 19:32:08,272 [INFO] (client.py:236)bot_connect [botpy] 会话启动中...
2022-12-08 19:32:08,272 [INFO] (gateway.py:110)ws_connect [botpy] 启动中...
2022-12-08 19:32:08,534 [INFO] (gateway.py:169)ws_resume [botpy] 重连启动...
2022-12-08 19:32:08,604 [INFO] (gateway.py:85)on_message [botpy] 机器人重连成功!
2022-12-08 19:32:08,605 [INFO] (gateway.py:217)_send_heart [botpy] 心跳维持启动...
2022-12-08 20:02:08,495 [INFO] (gateway.py:54)on_closed [botpy] 关闭, 返回码: 4009, 返回信息: Session timed out
2022-12-08 20:02:13,497 [INFO] (connection.py:59)multi_run [botpy] 最大并发连接数: 1, 启动会话数: 1
2022-12-08 20:02:13,497 [INFO] (client.py:236)bot_connect [botpy] 会话启动中...
2022-12-08 20:02:13,498 [INFO] (gateway.py:110)ws_connect [botpy] 启动中...
2022-12-08 20:02:13,797 [INFO] (gateway.py:169)ws_resume [botpy] 重连启动...
2022-12-08 20:02:13,914 [INFO] (gateway.py:85)on_message [botpy] 机器人重连成功!
2022-12-08 20:02:13,915 [INFO] (gateway.py:217)_send_heart [botpy] 心跳维持启动...
2022-12-08 20:31:53,159 [INFO] (client.py:159)_bot_login [botpy] 登录机器人账号中...
2022-12-08 20:31:58,306 [INFO] (client.py:178)_bot_init [botpy] 程序启动...
2022-12-08 20:31:58,307 [INFO] (connection.py:59)multi_run [botpy] 最大并发连接数: 1, 启动会话数: 1
2022-12-08 20:31:58,308 [INFO] (client.py:236)bot_connect [botpy] 会话启动中...
2022-12-08 20:31:58,308 [INFO] (gateway.py:110)ws_connect [botpy] 启动中...
2022-12-08 20:31:58,748 [INFO] (gateway.py:136)ws_identify [botpy] 鉴权中...
2022-12-08 20:31:58,924 [INFO] (gateway.py:80)on_message [botpy] 机器人「SoGPT-测试中」启动成功!
2022-12-08 20:31:58,925 [INFO] (gateway.py:217)_send_heart [botpy] 心跳维持启动...
2022-12-08 20:34:26,596 [INFO] (client.py:159)_bot_login [botpy] 登录机器人账号中...
2022-12-08 20:34:27,610 [INFO] (client.py:178)_bot_init [botpy] 程序启动...
2022-12-08 20:34:27,611 [INFO] (connection.py:59)multi_run [botpy] 最大并发连接数: 1, 启动会话数: 1
2022-12-08 20:34:27,613 [INFO] (client.py:236)bot_connect [botpy] 会话启动中...
2022-12-08 20:34:27,613 [INFO] (gateway.py:110)ws_connect [botpy] 启动中...
2022-12-08 20:34:27,919 [INFO] (gateway.py:136)ws_identify [botpy] 鉴权中...
2022-12-08 20:34:28,022 [INFO] (gateway.py:80)on_message [botpy] 机器人「SoGPT-测试中」启动成功!
2022-12-08 20:34:28,023 [INFO] (gateway.py:217)_send_heart [botpy] 心跳维持启动...
2022-12-08 20:35:47,952 [INFO] (client.py:159)_bot_login [botpy] 登录机器人账号中...
2022-12-08 20:35:48,499 [INFO] (client.py:178)_bot_init [botpy] 程序启动...
2022-12-08 20:35:48,500 [INFO] (connection.py:59)multi_run [botpy] 最大并发连接数: 1, 启动会话数: 1
2022-12-08 20:35:48,500 [INFO] (client.py:236)bot_connect [botpy] 会话启动中...
2022-12-08 20:35:48,501 [INFO] (gateway.py:110)ws_connect [botpy] 启动中...
2022-12-08 20:35:48,723 [INFO] (gateway.py:136)ws_identify [botpy] 鉴权中...
2022-12-08 20:35:48,793 [INFO] (gateway.py:80)on_message [botpy] 机器人「SoGPT-测试中」启动成功!
2022-12-08 20:35:48,794 [INFO] (gateway.py:217)_send_heart [botpy] 心跳维持启动...
2022-12-08 20:55:03,450 [ERROR] (http.py:73)_handle_response [botpy] 接口请求异常,请求连接: https://api.sgroup.qq.com/channels/7150658/messages, 错误代码: 403, 返回内容: {'code': 304003, 'message': 'url not allowed'}, trace_id:c4c4d23f9b2829d03ae9a7dca0184fe9
2022-12-08 21:05:48,678 [INFO] (gateway.py:54)on_closed [botpy] 关闭, 返回码: 4009, 返回信息: Session timed out
2022-12-08 21:05:53,700 [INFO] (connection.py:59)multi_run [botpy] 最大并发连接数: 1, 启动会话数: 1
2022-12-08 21:05:53,701 [INFO] (client.py:236)bot_connect [botpy] 会话启动中...
2022-12-08 21:05:53,702 [INFO] (gateway.py:110)ws_connect [botpy] 启动中...
2022-12-08 21:05:54,145 [INFO] (gateway.py:169)ws_resume [botpy] 重连启动...
2022-12-08 21:05:54,232 [INFO] (gateway.py:85)on_message [botpy] 机器人重连成功!
2022-12-08 21:05:54,233 [INFO] (gateway.py:217)_send_heart [botpy] 心跳维持启动...
2022-12-08 21:07:39,468 [INFO] (client.py:159)_bot_login [botpy] 登录机器人账号中...
2022-12-08 21:07:40,244 [INFO] (client.py:178)_bot_init [botpy] 程序启动...
2022-12-08 21:07:40,245 [INFO] (connection.py:59)multi_run [botpy] 最大并发连接数: 1, 启动会话数: 1
2022-12-08 21:07:40,246 [INFO] (client.py:236)bot_connect [botpy] 会话启动中...
2022-12-08 21:07:40,247 [INFO] (gateway.py:110)ws_connect [botpy] 启动中...
2022-12-08 21:07:40,553 [INFO] (gateway.py:136)ws_identify [botpy] 鉴权中...
2022-12-08 21:07:40,659 [INFO] (gateway.py:80)on_message [botpy] 机器人「SoGPT-测试中」启动成功!
2022-12-08 21:07:40,660 [INFO] (gateway.py:217)_send_heart [botpy] 心跳维持启动...
2022-12-08 21:30:11,249 [INFO] (client.py:159)_bot_login [botpy] 登录机器人账号中...
2022-12-08 21:30:12,217 [INFO] (client.py:178)_bot_init [botpy] 程序启动...
2022-12-08 21:30:12,218 [INFO] (connection.py:59)multi_run [botpy] 最大并发连接数: 1, 启动会话数: 1
2022-12-08 21:30:12,219 [INFO] (client.py:236)bot_connect [botpy] 会话启动中...
2022-12-08 21:30:12,219 [INFO] (gateway.py:110)ws_connect [botpy] 启动中...
2022-12-08 21:30:12,627 [INFO] (gateway.py:136)ws_identify [botpy] 鉴权中...
2022-12-08 21:30:12,730 [INFO] (gateway.py:80)on_message [botpy] 机器人「SoGPT-测试中」启动成功!
2022-12-08 21:30:12,731 [INFO] (gateway.py:217)_send_heart [botpy] 心跳维持启动...
2022-12-08 21:36:34,660 [ERROR] (http.py:73)_handle_response [botpy] 接口请求异常,请求连接: https://api.sgroup.qq.com/channels/7150658/messages, 错误代码: 501, 返回内容: None, trace_id:None
2022-12-08 21:38:39,294 [ERROR] (http.py:73)_handle_response [botpy] 接口请求异常,请求连接: https://api.sgroup.qq.com/channels/7150658/messages, 错误代码: 501, 返回内容: None, trace_id:None
2022-12-08 21:39:43,233 [INFO] (client.py:159)_bot_login [botpy] 登录机器人账号中...
2022-12-08 21:39:43,878 [INFO] (client.py:178)_bot_init [botpy] 程序启动...
2022-12-08 21:39:43,879 [INFO] (connection.py:59)multi_run [botpy] 最大并发连接数: 1, 启动会话数: 1
2022-12-08 21:39:43,880 [INFO] (client.py:236)bot_connect [botpy] 会话启动中...
2022-12-08 21:39:43,880 [INFO] (gateway.py:110)ws_connect [botpy] 启动中...
2022-12-08 21:39:44,099 [INFO] (gateway.py:136)ws_identify [botpy] 鉴权中...
2022-12-08 21:39:44,279 [INFO] (gateway.py:80)on_message [botpy] 机器人「SoGPT-测试中」启动成功!
2022-12-08 21:39:44,281 [INFO] (gateway.py:217)_send_heart [botpy] 心跳维持启动...
2022-12-08 21:40:38,103 [INFO] (client.py:159)_bot_login [botpy] 登录机器人账号中...
2022-12-08 21:40:39,100 [INFO] (client.py:178)_bot_init [botpy] 程序启动...
2022-12-08 21:40:39,101 [INFO] (connection.py:59)multi_run [botpy] 最大并发连接数: 1, 启动会话数: 1
2022-12-08 21:40:39,102 [INFO] (client.py:236)bot_connect [botpy] 会话启动中...
2022-12-08 21:40:39,102 [INFO] (gateway.py:110)ws_connect [botpy] 启动中...
2022-12-08 21:40:39,402 [INFO] (gateway.py:136)ws_identify [botpy] 鉴权中...
2022-12-08 21:40:39,502 [INFO] (gateway.py:80)on_message [botpy] 机器人「SoGPT-测试中」启动成功!
2022-12-08 21:40:39,503 [INFO] (gateway.py:217)_send_heart [botpy] 心跳维持启动...
2022-12-08 21:40:52,805 [ERROR] (http.py:73)_handle_response [botpy] 接口请求异常,请求连接: https://api.sgroup.qq.com/channels/7150658/messages, 错误代码: 501, 返回内容: None, trace_id:None
2022-12-08 21:41:39,703 [INFO] (client.py:159)_bot_login [botpy] 登录机器人账号中...
2022-12-08 21:41:40,399 [INFO] (client.py:178)_bot_init [botpy] 程序启动...
2022-12-08 21:41:40,400 [INFO] (connection.py:59)multi_run [botpy] 最大并发连接数: 1, 启动会话数: 1
2022-12-08 21:41:40,401 [INFO] (client.py:236)bot_connect [botpy] 会话启动中...
2022-12-08 21:41:40,401 [INFO] (gateway.py:110)ws_connect [botpy] 启动中...
2022-12-08 21:41:40,644 [INFO] (gateway.py:136)ws_identify [botpy] 鉴权中...
2022-12-08 21:41:40,740 [INFO] (gateway.py:80)on_message [botpy] 机器人「SoGPT-测试中」启动成功!
2022-12-08 21:41:40,741 [INFO] (gateway.py:217)_send_heart [botpy] 心跳维持启动...
2022-12-08 21:41:56,705 [ERROR] (http.py:73)_handle_response [botpy] 接口请求异常,请求连接: https://api.sgroup.qq.com/channels/7150658/messages, 错误代码: 501, 返回内容: None, trace_id:None
2022-12-08 21:42:55,625 [INFO] (client.py:159)_bot_login [botpy] 登录机器人账号中...
2022-12-08 21:42:56,278 [INFO] (client.py:178)_bot_init [botpy] 程序启动...
2022-12-08 21:42:56,279 [INFO] (connection.py:59)multi_run [botpy] 最大并发连接数: 1, 启动会话数: 1
2022-12-08 21:42:56,280 [INFO] (client.py:236)bot_connect [botpy] 会话启动中...
2022-12-08 21:42:56,281 [INFO] (gateway.py:110)ws_connect [botpy] 启动中...
2022-12-08 21:42:56,718 [INFO] (gateway.py:136)ws_identify [botpy] 鉴权中...
2022-12-08 21:42:56,819 [INFO] (gateway.py:80)on_message [botpy] 机器人「SoGPT-测试中」启动成功!
2022-12-08 21:42:56,820 [INFO] (gateway.py:217)_send_heart [botpy] 心跳维持启动...
2022-12-08 21:43:09,794 [ERROR] (http.py:73)_handle_response [botpy] 接口请求异常,请求连接: https://api.sgroup.qq.com/channels/7150658/messages, 错误代码: 501, 返回内容: None, trace_id:None


@@ -1,11 +1,85 @@
# 如果你不知道怎么部署,请务必查看https://soulter.top/posts/qpdg.html
# 如果你不知道怎么部署请查看https://soulter.top/posts/qpdg.html
# 不一定需要key了如果你没有key但有openAI账号或者必应账号可以考虑使用下面的逆向库
###############平台设置#################
# QQ频道机器人
# QQ开放平台的appid和令牌
# q.qq.com
# enable为true则启用false则不启用
qqbot:
enable: true
appid:
token:
# QQ机器人
# enable为true则启用false则不启用
# 需要安装GO-CQHTTP配合使用。
# 文档https://docs.go-cqhttp.org/
# 请将go-cqhttp的配置文件的servers部分粘贴为以下内容,否则无法使用
# 请先启动go-cqhttp再启动本程序
#
# servers:
# - http:
# host: 127.0.0.1
# version: 0
# port: 5700
# timeout: 5
# - ws:
# address: 127.0.0.1:6700
# middlewares:
# <<: *default
gocqbot:
enable: false
# 设置是否一个人一个会话
uniqueSessionMode: false
# QChannelBot 的版本请勿修改此字段否则可能产生一些bug
version: 3.0
# [Beta] 转储历史记录时间间隔(分钟)
dump_history_interval: 10
# 一个用户只能在time秒内发送count条消息
limit:
time: 60
count: 5
# 公告
notice: "此机器人由Github项目QQChannelChatGPT驱动。"
# 是否打开私信功能
# 设置为true则频道成员可以私聊机器人。
# 设置为false则频道成员不能私聊机器人。
direct_message_mode: true
# 系统代理
# http_proxy: http://localhost:7890
# https_proxy: http://localhost:7890
# 自定义回复前缀,如[Rev]或其他务必加引号以防止不必要的bug。
reply_prefix:
openai_official: "[GPT]"
rev_chatgpt: "[Rev]"
rev_edgegpt: "[RevBing]"
# 百度内容审核服务
# 新用户免费5万次调用。https://cloud.baidu.com/doc/ANTIPORN/index.html
baidu_aip:
enable: false
app_id:
api_key:
secret_key:
###############语言模型设置#################
# OpenAI官方API
# 注意已支持多key自动切换方法
# key:
# - sk-xxxxxx
# - sk-xxxxxx
# 在下方非注释的地方使用以上格式
# 关于api_base可以使用一些云函数如腾讯、阿里来避免国内被墙的问题。
# 详见:
# https://github.com/Ice-Hazymoon/openai-scf-proxy
@@ -26,48 +100,13 @@ openai:
total_tokens_limit: 5000
# QQ开放平台的appid和令牌
# q.qq.com
qqbot:
appid:
token:
# 设置是否一个人一个会话
uniqueSessionMode: false
# QChannelBot 的版本请勿修改此字段否则可能产生一些bug
version: 2.8
# [Beta] 转储历史记录时间间隔(分钟)
dump_history_interval: 10
# 一个用户只能在time秒内发送count条消息
limit:
time: 60
count: 5
# 公告
notice: "此机器人由Github项目QQChannelChatGPT驱动。"
# 是否打开私信功能
# 设置为true则频道成员可以私聊机器人。
# 设置为false则频道成员不能私聊机器人。
direct_message_mode: true
# 系统代理
# http_proxy: http://localhost:7890
# https_proxy: http://localhost:7890
################外带程序(插件)################
# 百度内容审核服务
# 新用户免费5万次调用。https://cloud.baidu.com/doc/ANTIPORN/index.html
baidu_aip:
enable: false
app_id:
api_key:
secret_key:
# 逆向文心一言【暂时不可用,请勿使用】
rev_ernie:
enable: false
# 逆向New Bing
# 需要在项目根目录下创建cookies.json并粘贴cookies进去。
# 详见https://soulter.top/posts/qpdg.html
rev_edgegpt:
enable: false
@@ -85,14 +124,14 @@ rev_edgegpt:
# - email: 第2个账户
# password: 第2个账户密码
# - ....
# 支持使用session_token\access_token登录
# 支持使用access_token登录
# 例:
# - session_token: xxxxx
# - access_token: xxxx
# 请严格按照上面这个格式填写。
# 逆向ChatGPT库的email-password登录方式不工作建议使用access_token登录
# 获取access_token的方法详见https://soulter.top/posts/qpdg.html
rev_ChatGPT:
enable: false
account:
- email:
password:
- access_token:


@@ -1,7 +1,7 @@
import sqlite3
import yaml
# TODO: 数据库缓存prompt
import time
from typing import Tuple
class dbConn():
def __init__(self):
@@ -15,7 +15,33 @@ class dbConn():
CREATE TABLE IF NOT EXISTS tb_session(
qq_id VARCHAR(32) PRIMARY KEY,
history TEXT
)
);
'''
)
c.execute(
'''
CREATE TABLE IF NOT EXISTS tb_stat_session(
platform VARCHAR(32),
session_id VARCHAR(32),
cnt INTEGER
);
'''
)
c.execute(
'''
CREATE TABLE IF NOT EXISTS tb_stat_message(
ts INTEGER,
cnt INTEGER
);
'''
)
c.execute(
'''
CREATE TABLE IF NOT EXISTS tb_stat_platform(
ts INTEGER,
platform VARCHAR(32),
cnt INTEGER
);
'''
)
@@ -81,6 +107,188 @@ class dbConn():
)
conn.commit()
def increment_stat_session(self, platform, session_id, cnt):
# if not exist, insert
conn = self.conn
c = conn.cursor()
if self.check_stat_session(platform, session_id):
c.execute(
'''
UPDATE tb_stat_session SET cnt = cnt + ? WHERE platform = ? AND session_id = ?
''', (cnt, platform, session_id)
)
conn.commit()
else:
c.execute(
'''
INSERT INTO tb_stat_session(platform, session_id, cnt) VALUES (?, ?, ?)
''', (platform, session_id, cnt)
)
conn.commit()
def check_stat_session(self, platform, session_id):
conn = self.conn
c = conn.cursor()
c.execute(
'''
SELECT * FROM tb_stat_session WHERE platform = ? AND session_id = ?
''', (platform, session_id)
)
return c.fetchone() is not None
def get_all_stat_session(self):
conn = self.conn
c = conn.cursor()
c.execute(
'''
SELECT * FROM tb_stat_session
'''
)
return c.fetchall()
def get_session_cnt_total(self):
conn = self.conn
c = conn.cursor()
c.execute(
'''
SELECT COUNT(*) FROM tb_stat_session
'''
)
return c.fetchone()[0]
def increment_stat_message(self, ts, cnt):
# 以一个小时为单位。ts的单位是秒。
# 找到最近的一个小时,如果没有,就插入
conn = self.conn
c = conn.cursor()
ok, new_ts = self.check_stat_message(ts)
if ok:
c.execute(
'''
UPDATE tb_stat_message SET cnt = cnt + ? WHERE ts = ?
''', (cnt, new_ts)
)
conn.commit()
else:
c.execute(
'''
INSERT INTO tb_stat_message(ts, cnt) VALUES (?, ?)
''', (new_ts, cnt)
)
conn.commit()
def check_stat_message(self, ts) -> Tuple[bool, int]:
# 换算成当地整点的时间戳
ts = ts - ts % 3600
conn = self.conn
c = conn.cursor()
c.execute(
'''
SELECT * FROM tb_stat_message WHERE ts = ?
''', (ts, )
)
if c.fetchone() is not None:
return True, ts
else:
return False, ts
def get_last_24h_stat_message(self):
# 获取最近24小时的消息统计
conn = self.conn
c = conn.cursor()
c.execute(
'''
SELECT * FROM tb_stat_message WHERE ts > ?
''', (time.time() - 86400, )
)
return c.fetchall()
def get_message_cnt_total(self) -> int:
conn = self.conn
c = conn.cursor()
c.execute(
'''
SELECT SUM(cnt) FROM tb_stat_message
'''
)
return c.fetchone()[0]
def increment_stat_platform(self, ts, platform, cnt):
# 以一个小时为单位。ts的单位是秒。
# 找到最近的一个小时,如果没有,就插入
conn = self.conn
c = conn.cursor()
ok, new_ts = self.check_stat_platform(ts, platform)
if ok:
c.execute(
'''
UPDATE tb_stat_platform SET cnt = cnt + ? WHERE ts = ? AND platform = ?
''', (cnt, new_ts, platform)
)
conn.commit()
else:
c.execute(
'''
INSERT INTO tb_stat_platform(ts, platform, cnt) VALUES (?, ?, ?)
''', (new_ts, platform, cnt)
)
conn.commit()
def check_stat_platform(self, ts, platform):
# 换算成当地整点的时间戳
ts = ts - ts % 3600
conn = self.conn
c = conn.cursor()
c.execute(
'''
SELECT * FROM tb_stat_platform WHERE ts = ? AND platform = ?
''', (ts, platform)
)
if c.fetchone() is not None:
return True, ts
else:
return False, ts
def get_last_24h_stat_platform(self):
# 获取最近24小时的消息统计
conn = self.conn
c = conn.cursor()
c.execute(
'''
SELECT * FROM tb_stat_platform WHERE ts > ?
''', (time.time() - 86400, )
)
return c.fetchall()
def get_platform_cnt_total(self) -> list:
conn = self.conn
c = conn.cursor()
c.execute(
'''
SELECT platform, SUM(cnt) FROM tb_stat_platform GROUP BY platform
'''
)
# return c.fetchall()
platforms = []
ret = c.fetchall()
for i in ret:
# platforms[i[0]] = i[1]
platforms.append({
"name": i[0],
"count": i[1]
})
return platforms
def close(self):
self.conn.close()
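The statistics methods above all bucket counts by the hour via `ts - ts % 3600`. Below is a standalone sketch of that bookkeeping against an in-memory SQLite database; the in-memory connection, the helper function and the fixed timestamp are illustrative assumptions, while the table shape and the bucketing formula come from the class above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tb_stat_message (ts INTEGER, cnt INTEGER)")

def increment_stat_message(ts, cnt):
    bucket = ts - ts % 3600                     # truncate to the start of the hour
    row = conn.execute("SELECT 1 FROM tb_stat_message WHERE ts = ?", (bucket,)).fetchone()
    if row is None:
        conn.execute("INSERT INTO tb_stat_message(ts, cnt) VALUES (?, ?)", (bucket, cnt))
    else:
        conn.execute("UPDATE tb_stat_message SET cnt = cnt + ? WHERE ts = ?", (cnt, bucket))
    conn.commit()

now = 1700000000                                # fixed example timestamp
increment_stat_message(now, 1)
increment_stat_message(now + 60, 2)             # same hour -> same row
print(conn.execute("SELECT ts, cnt FROM tb_stat_message").fetchall())   # one row with cnt = 3
```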

cores/monitor/perf.py Normal file

@@ -0,0 +1,23 @@
'''
监测机器性能
- Bot 内存使用量
- CPU 占用率
'''
import psutil
from cores.qqbot.global_object import GlobalObject
import time
def run_monitor(global_object: GlobalObject):
'''运行监测'''
start_time = time.time()
while True:
stat = global_object.dashboard_data.stats
# 程序占用的内存大小
mem = psutil.Process().memory_info().rss / 1024 / 1024 # MB
stat['sys_perf'] = {
'memory': mem,
'cpu': psutil.cpu_percent()
}
stat['sys_start_time'] = start_time
time.sleep(30)
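Since run_monitor loops forever, it is presumably launched on its own thread by the bot core. A minimal, hypothetical wiring is sketched below: the plain `stats` dict, the bounded iteration count and the daemon thread are assumptions for the demo, while the psutil measurements mirror the file above.

```python
import threading
import time
import psutil

def run_monitor(stats: dict, interval: float = 30.0, iterations: int = 2):
    # Same measurements as cores/monitor/perf.py, bounded so this demo terminates.
    start_time = time.time()
    for _ in range(iterations):
        mem = psutil.Process().memory_info().rss / 1024 / 1024   # resident memory in MB
        stats['sys_perf'] = {'memory': mem, 'cpu': psutil.cpu_percent()}
        stats['sys_start_time'] = start_time
        time.sleep(interval)

stats = {}
t = threading.Thread(target=run_monitor, args=(stats, 0.5, 2), daemon=True)
t.start()
t.join()
print(stats['sys_perf'])
```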


@@ -1,156 +0,0 @@
import openai
import yaml
from util.errors.errors import PromptExceededError
import json
import time
import os
import sys
inst = None
# 适配pyinstaller
abs_path = os.path.dirname(os.path.realpath(sys.argv[0])) + '/'
key_record_path = abs_path+'chatgpt_key_record'
class ChatGPT:
def __init__(self, cfg):
self.key_list = []
if 'api_base' in cfg and cfg['api_base'] != 'none' and cfg['api_base'] != '':
openai.api_base = cfg['api_base']
if cfg['key'] != '' and cfg['key'] != None:
print("[System] 读取ChatGPT Key成功")
self.key_list = cfg['key']
# openai.api_key = cfg['key']
else:
input("[System] 请先去完善ChatGPT的Key。详情请前往https://beta.openai.com/account/api-keys")
# init key record
self.init_key_record()
chatGPT_configs = cfg['chatGPTConfigs']
print(f'[System] 加载ChatGPTConfigs: {chatGPT_configs}')
self.chatGPT_configs = chatGPT_configs
self.openai_configs = cfg
def chat(self, req, image_mode = False):
# ChatGPT API 2023/3/2
# messages = [{"role": "user", "content": prompt}]
try:
response = openai.ChatCompletion.create(
messages=req,
**self.chatGPT_configs
)
except Exception as e:
print(e)
if 'You exceeded' in str(e) or 'Billing hard limit has been reached' in str(e) or 'No API key provided' in str(e) or 'Incorrect API key provided' in str(e):
print("[System] 当前Key已超额或者不正常,正在切换")
self.key_stat[openai.api_key]['exceed'] = True
self.save_key_record()
response, is_switched = self.handle_switch_key(req)
if not is_switched:
# 所有Key都超额或不正常
raise e
else:
response = openai.ChatCompletion.create(
messages=req,
**self.chatGPT_configs
)
self.key_stat[openai.api_key]['used'] += response['usage']['total_tokens']
self.save_key_record()
print("[ChatGPT] "+str(response["choices"][0]["message"]["content"]))
return str(response["choices"][0]["message"]["content"]).strip(), response['usage']['total_tokens']
def handle_switch_key(self, req):
# messages = [{"role": "user", "content": prompt}]
while True:
is_all_exceed = True
for key in self.key_stat:
if not self.key_stat[key]['exceed']:
is_all_exceed = False
openai.api_key = key
print(f"[System] 切换到Key: {key}, 已使用token: {self.key_stat[key]['used']}")
if len(req) > 0:
try:
response = openai.ChatCompletion.create(
messages=req,
**self.chatGPT_configs
)
return response, True
except Exception as e:
print(e)
if 'You exceeded' in str(e):
print("[System] 当前Key已超额,正在切换")
self.key_stat[openai.api_key]['exceed'] = True
self.save_key_record()
time.sleep(1)
continue
else:
return True
if is_all_exceed:
print("[System] 所有Key已超额")
return None, False
def getConfigs(self):
return self.openai_configs
def save_key_record(self):
with open(key_record_path, 'w', encoding='utf-8') as f:
json.dump(self.key_stat, f)
def get_key_stat(self):
return self.key_stat
def get_key_list(self):
return self.key_list
# 添加key
def append_key(self, key, sponsor):
self.key_list.append(key)
self.key_stat[key] = {'exceed': False, 'used': 0, 'sponsor': sponsor}
self.save_key_record()
self.init_key_record()
# 检查key是否可用
def check_key(self, key):
pre_key = openai.api_key
openai.api_key = key
messages = [{"role": "user", "content": "1"}]
try:
response = openai.ChatCompletion.create(
messages=messages,
**self.chatGPT_configs
)
openai.api_key = pre_key
return True
except Exception as e:
pass
openai.api_key = pre_key
return False
#将key_list的key转储到key_record中并记录相关数据
def init_key_record(self):
if not os.path.exists(key_record_path):
with open(key_record_path, 'w', encoding='utf-8') as f:
json.dump({}, f)
with open(key_record_path, 'r', encoding='utf-8') as keyfile:
try:
self.key_stat = json.load(keyfile)
except Exception as e:
print(e)
self.key_stat = {}
finally:
for key in self.key_list:
if key not in self.key_stat:
self.key_stat[key] = {'exceed': False, 'used': 0}
# if openai.api_key is None:
# openai.api_key = key
else:
# if self.key_stat[key]['exceed']:
# print(f"Key: {key} 已超额")
# continue
# else:
# if openai.api_key is None:
# openai.api_key = key
# print(f"使用Key: {key}, 已使用token: {self.key_stat[key]['used']}")
pass
if openai.api_key == None:
self.handle_switch_key("")
self.save_key_record()
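For reference, the chatgpt_key_record file that this removed provider kept next to the executable is a plain JSON dump of key_stat. Its shape, as implied by append_key() and init_key_record() above, is sketched below; the key strings and numbers are placeholders.

```python
import json

# Placeholder contents; 'exceed' and 'used' come from init_key_record(),
# 'sponsor' is only present for keys added via append_key().
key_stat = {
    "sk-aaaaaa": {"exceed": False, "used": 1834},
    "sk-bbbbbb": {"exceed": True, "used": 120000, "sponsor": "some donor"},
}
print(json.dumps(key_stat, indent=2))
```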

File diff suppressed because it is too large


@@ -0,0 +1,113 @@
from model.platform.qqchan import QQChan, NakuruGuildMember, NakuruGuildMessage
from model.platform.qq import QQ
from model.provider.provider import Provider
from addons.dashboard.server import DashBoardData
from nakuru import (
CQHTTP,
GroupMessage,
GroupMemberIncrease,
FriendMessage,
GuildMessage,
Notify
)
from typing import Union
class GlobalObject:
'''
存放一些公用的数据,用于在不同模块(如core与command)之间传递
'''
nick: str # gocq 的昵称
base_config: dict # config.yaml
cached_plugins: dict # 缓存的插件
web_search: bool # 是否开启了网页搜索
reply_prefix: str
admin_qq: str
admin_qqchan: str
uniqueSession: bool
cnt_total: int
platform_qq: QQ
platform_qqchan: QQChan
default_personality: dict
dashboard_data: DashBoardData
stat: dict
def __init__(self):
self.nick = None # gocq 的昵称
self.base_config = None # config.yaml
self.cached_plugins = {} # 缓存的插件
self.web_search = False # 是否开启了网页搜索
self.reply_prefix = None
self.admin_qq = "123456"
self.admin_qqchan = "123456"
self.uniqueSession = False
self.cnt_total = 0
self.platform_qq = None
self.platform_qqchan = None
self.default_personality = None
self.dashboard_data = None
self.stat = {}
'''
{
"config": {},
"session": [
{
"platform": "qq",
"session_id": 123456,
"cnt": 0
},
{...}
],
"message": [
// 以一小时为单位
{
"ts": 1234567,
"cnt": 0
}
]
}
'''
class AstrMessageEvent():
message_str: str # 纯消息字符串
message_obj: Union[GroupMessage, FriendMessage, GuildMessage, NakuruGuildMessage] # 消息对象
gocq_platform: QQ
qq_sdk_platform: QQChan
platform: str # `gocq` 或 `qqchan`
role: str # `admin` 或 `member`
global_object: GlobalObject # 一些公用数据
session_id: int # 会话id (可能是群id也可能是某个user的id。取决于是否开启了 uniqueSession)
def __init__(self, message_str: str,
message_obj: Union[GroupMessage, FriendMessage, GuildMessage, NakuruGuildMessage],
gocq_platform: QQ,
qq_sdk_platform: QQChan,
platform: str,
role: str,
global_object: GlobalObject,
llm_provider: Provider = None,
session_id: int = None):
self.message_str = message_str
self.message_obj = message_obj
self.gocq_platform = gocq_platform
self.qq_sdk_platform = qq_sdk_platform
self.platform = platform
self.role = role
self.global_object = global_object
self.llm_provider = llm_provider
self.session_id = session_id
class CommandResult():
'''
用于在Command中返回多个值
'''
def __init__(self, hit: bool, success: bool, message_chain: list, command_name: str = "unknown_command") -> None:
self.hit = hit
self.success = success
self.message_chain = message_chain
self.command_name = command_name
def _result_tuple(self):
return (self.success, self.message_chain, self.command_name)
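CommandResult and AstrMessageEvent together define the contract that the command layer later exercises through each plugin's run(ame). A minimal, hypothetical plugin that satisfies it is sketched below: `HelloPlugin` and the `FakeEvent` stand-in are invented for the sketch, while CommandResult is repeated from above only so the snippet runs on its own.

```python
class CommandResult:
    # Copied from the class above so this sketch is self-contained.
    def __init__(self, hit: bool, success: bool, message_chain: list, command_name: str = "unknown_command") -> None:
        self.hit = hit
        self.success = success
        self.message_chain = message_chain
        self.command_name = command_name

    def _result_tuple(self):
        return (self.success, self.message_chain, self.command_name)

class HelloPlugin:
    # Hypothetical plugin: only the run(ame) entry point and the CommandResult
    # return type are taken from the loader logic in model/command/command.py.
    def run(self, ame):
        if ame.message_str.startswith("hello"):
            return CommandResult(True, True, ["world"], "hello")
        return CommandResult(False, False, [])

class FakeEvent:
    # Stand-in carrying just the field this plugin reads from AstrMessageEvent.
    def __init__(self, message_str: str):
        self.message_str = message_str

result = HelloPlugin().run(FakeEvent("hello bot"))
if result.hit:
    print(result._result_tuple())   # (True, ['world'], 'hello')
```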

main.py

@@ -1,163 +1,90 @@
import threading
import time
import asyncio
import os, sys
import signal
import requests,json
from pip._internal import main as pipmain
import warnings
import traceback
import threading
import logging
warnings.filterwarnings("ignore")
abs_path = os.path.dirname(os.path.realpath(sys.argv[0])) + '/'
def main():
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
datefmt='%H:%M:%S',
)
# config.yaml 配置文件加载和环境确认
try:
import cores.qqbot.core as qqBot
import yaml
from yaml.scanner import ScannerError
import util.general_utils as gu
ymlfile = open(abs_path+"configs/config.yaml", 'r', encoding='utf-8')
cfg = yaml.safe_load(ymlfile)
except ImportError as import_error:
print(import_error)
input("第三方库未完全安装完毕,请退出程序重试。")
except FileNotFoundError as file_not_found:
print(file_not_found)
input("配置文件不存在,请检查是否已经下载配置文件。")
except ScannerError as e:
print(traceback.format_exc())
input("config.yaml 配置文件格式错误,请遵守 yaml 格式。")
def main(loop, event):
import cores.qqbot.core as qqBot
import yaml
ymlfile = open(abs_path+"configs/config.yaml", 'r', encoding='utf-8')
cfg = yaml.safe_load(ymlfile)
if 'http_proxy' in cfg:
# 设置代理
if 'http_proxy' in cfg and cfg['http_proxy'] != '':
os.environ['HTTP_PROXY'] = cfg['http_proxy']
if 'https_proxy' in cfg:
if 'https_proxy' in cfg and cfg['https_proxy'] != '':
os.environ['HTTPS_PROXY'] = cfg['https_proxy']
os.environ['NO_PROXY'] = 'cn.bing.com,https://api.sgroup.qq.com'
provider = privider_chooser(cfg)
print('[System] 当前语言模型提供商: ' + provider)
# 执行Bot
qqBot.initBot(cfg, provider)
# 检查并创建 temp 文件夹
if not os.path.exists(abs_path + "temp"):
os.mkdir(abs_path+"temp")
# 语言模型提供商选择器
# 目前有OpenAI官方API、逆向库
def privider_chooser(cfg):
if 'rev_ChatGPT' in cfg and cfg['rev_ChatGPT']['enable']:
return 'rev_chatgpt'
elif 'rev_ernie' in cfg and cfg['rev_ernie']['enable']:
return 'rev_ernie'
elif 'rev_edgegpt' in cfg and cfg['rev_edgegpt']['enable']:
return 'rev_edgegpt'
else:
return 'openai_official'
# 启动主程序cores/qqbot/core.py
qqBot.initBot(cfg)
# 仅支持linux
def hot_update():
target = 'target.tar'
time.sleep(5)
while(True):
if os.path.exists('version.txt'):
version_file = open('version.txt', 'r', encoding='utf-8')
vs = version_file.read()
version = float(vs)
else:
version = 0
if not os.path.exists(target):
version = 0
try:
res = requests.get("https://soulter.top/channelbot/update.json")
res_obj = json.loads(res.text)
ol_version = float(res_obj['version'])
if ol_version > version:
print('发现新版本: ' + str(ol_version))
res = requests.get(res_obj['linux-url'], stream=True)
filesize = res.headers["Content-Length"]
print('文件大小: ' + str(int(filesize) / 1024 / 1024) + 'MB')
print('正在更新文件...')
chunk_size = 1024
times = int(filesize) // chunk_size
show = 1 / times
show2 = 1 / times
start = 1
with open(target, "wb") as pyFile:
for chunk in res.iter_content(chunk_size=chunk_size):
if chunk:
pyFile.write(chunk)
if start <= times:
print(f"\r下载进度: {show:.2%}",end="",flush=True)
start += 1
show += show2
else:
sys.stdout.write(f"下载进度: 100%\n")
print('更新完成')
print('解压覆盖')
os.system(f"tar -zxvf {target}")
version = ol_version
version_file = open('version.txt', 'w+', encoding='utf-8')
version_file.write(str(res_obj['version']))
version_file.flush()
version_file.close()
try:
update_version(version)
except BaseException as e:
print(e)
print('自启动')
py = sys.executable
os.execl(py, py, *sys.argv)
time.sleep(60*60*3)
except BaseException as e:
print(e)
print("upd出现异常, 请联系QQ905617992")
time.sleep(60*60*3)
def update_version(ver):
if not os.path.exists('update_record'):
object_id = ''
else:
object_id = open("update_record", 'r', encoding='utf-8').read()
addr = 'unknown'
try:
addr = requests.get('http://myip.ipip.net', timeout=5).text
except BaseException:
pass
try:
ts = str(time.time())
# md = hashlib.md5((ts+'QAZ1rQLY1ZufHrZlpuUiNff7').encode())
headers = {
'X-LC-Id': 'UqfXTWW15nB7iMT0OHvYrDFb-gzGzoHsz',
'X-LC-Key': 'QAZ1rQLY1ZufHrZlpuUiNff7',
'Content-Type': 'application/json'
}
d = {"data": {'version':'win-hot-update'+str(ver), 'addr': addr}}
d = json.dumps(d).encode("utf-8")
res = requests.put(f'https://uqfxtww1.lc-cn-n1-shared.com/1.1/classes/version_record/{object_id}', headers = headers, data = d)
if json.loads(res.text)['code'] == 1:
res = requests.post(f'https://uqfxtww1.lc-cn-n1-shared.com/1.1/classes/version_record', headers = headers, data = d)
object_id = json.loads(res.text)['objectId']
object_id_file = open("update_record", 'w+', encoding='utf-8')
object_id_file.write(str(object_id))
object_id_file.flush()
object_id_file.close()
except BaseException as e:
print(e)
def check_env():
if not (sys.version_info.major == 3 and sys.version_info.minor >= 8):
print("请使用Python3.8运行本项目")
def check_env(ch_mirror=False):
if not (sys.version_info.major == 3 and sys.version_info.minor >= 9):
print("请使用Python3.9+运行本项目")
input("按任意键退出...")
exit()
if os.path.exists('requirements.txt'):
pth = 'requirements.txt'
else:
pth = 'QQChannelChatGPT'+ os.sep +'requirements.txt'
print("正在检查更新第三方库...")
try:
import openai
import botpy
import yaml
except Exception as e:
# print(e)
try:
print("安装依赖库中...")
os.system("pip3 install openai")
os.system("pip3 install qq-botpy")
os.system("pip3 install pyyaml")
print("安装依赖库完毕...")
except BaseException:
print("\n安装第三方库异常.请自行安装或者联系QQ905617992.")
# 检查key
with open(abs_path+"configs/config.yaml", 'r', encoding='utf-8') as ymlfile:
import yaml
cfg = yaml.safe_load(ymlfile)
if cfg['openai']['key'] == '' or cfg['openai']['key'] == None:
print("请先在configs/config.yaml下添加一个可用的OpenAI Key。详情请前往https://beta.openai.com/account/api-keys")
if cfg['qqbot']['appid'] == '' or cfg['qqbot']['token'] == '' or cfg['qqbot']['appid'] == None or cfg['qqbot']['token'] == None:
print("请先在configs/config.yaml下完善appid和token令牌(在https://q.qq.com/上注册一个QQ机器人即可获得)")
if ch_mirror:
print("使用阿里云镜像")
pipmain(['install', '-r', pth, '-i', 'https://mirrors.aliyun.com/pypi/simple/', '--quiet'])
else:
pipmain(['install', '-r', pth, '--quiet'])
except BaseException as e:
print(e)
while True:
res = input("安装失败。\n如报错ValueError: check_hostname requires server_hostname请尝试先关闭代理后重试。\n1.输入y回车重试\n2. 输入c回车使用国内镜像源下载\n3. 输入其他按键回车继续往下执行。")
if res == "y":
try:
pipmain(['install', '-r', pth])
break
except BaseException as e:
print(e)
continue
elif res == "c":
try:
pipmain(['install', '-r', pth, '-i', 'https://mirrors.aliyun.com/pypi/simple/'])
break
except BaseException as e:
print(e)
continue
else:
break
print("第三方库检查完毕。")
def get_platform():
import platform
@@ -172,15 +99,22 @@ def get_platform():
print("other")
if __name__ == "__main__":
global pid
pid = os.getpid()
global ma_type
print("程序PID:"+str(pid))
check_env()
bot_event = threading.Event()
loop = asyncio.get_event_loop()
# ma_type = get_platform()
# if ma_type == 'linux':
# threading.Thread(target=hot_update).start()
main(loop, bot_event)
args = sys.argv
if '-cn' in args:
check_env(True)
else:
check_env()
if '-replit' in args:
print("[System] 启动Replit Web保活服务...")
try:
from util.webapp_replit import keep_alive
keep_alive()
except BaseException as e:
print(e)
print(f"[System-err] Replit Web保活服务启动失败:{str(e)}")
t = threading.Thread(target=main, daemon=False)
t.start()
t.join()

model/command/command.py Normal file

@@ -0,0 +1,431 @@
import json
from util import general_utils as gu
has_git = True
try:
import git.exc
from git.repo import Repo
except BaseException as e:
gu.log("你正运行在无Git环境下暂时将无法使用插件、热更新功能。")
has_git = False
import os
import sys
import requests
from model.provider.provider import Provider
import json
import util.plugin_util as putil
import shutil
import importlib
from util.cmd_config import CmdConfig as cc
from model.platform.qq import QQ
import stat
from nakuru.entities.components import (
Plain,
Image
)
from PIL import Image as PILImage
from cores.qqbot.global_object import GlobalObject, AstrMessageEvent
from pip._internal import main as pipmain
from cores.qqbot.global_object import CommandResult
PLATFORM_QQCHAN = 'qqchan'
PLATFORM_GOCQ = 'gocq'
# 指令功能的基类,通用的(不区分语言模型)的指令就在这实现
class Command:
def __init__(self, provider: Provider, global_object: GlobalObject = None):
self.provider = provider
self.global_object = global_object
def check_command(self,
message,
session_id: str,
role,
platform,
message_obj):
# 插件
cached_plugins = self.global_object.cached_plugins
ame = AstrMessageEvent(
message_str=message,
message_obj=message_obj,
gocq_platform=self.global_object.platform_qq,
qq_sdk_platform=self.global_object.platform_qqchan,
platform=platform,
role=role,
global_object=self.global_object,
session_id = session_id
)
for k, v in cached_plugins.items():
try:
result = v["clsobj"].run(ame)
if isinstance(result, CommandResult):
hit = result.hit
res = result._result_tuple()
print(hit, res)
elif isinstance(result, tuple):
hit = result[0]
res = result[1]
else:
raise TypeError("插件返回值格式错误。")
if hit:
return True, res
except TypeError as e:
# 参数不匹配,尝试使用旧的参数方案
try:
hit, res = v["clsobj"].run(message, role, platform, message_obj, self.global_object.platform_qq)
if hit:
return True, res
except BaseException as e:
gu.log(f"{k}插件异常,原因: {str(e)}\n已安装插件: {cached_plugins.keys}\n如果你没有相关装插件的想法, 请直接忽略此报错, 不影响其他功能的运行。", level=gu.LEVEL_WARNING)
except BaseException as e:
gu.log(f"{k} 插件异常,原因: {str(e)}\n已安装插件: {cached_plugins.keys}\n如果你没有相关装插件的想法, 请直接忽略此报错, 不影响其他功能的运行。", level=gu.LEVEL_WARNING)
if self.command_start_with(message, "nick"):
return True, self.set_nick(message, platform, role)
if self.command_start_with(message, "plugin"):
return True, self.plugin_oper(message, role, cached_plugins, platform)
if self.command_start_with(message, "myid") or self.command_start_with(message, "!myid"):
return True, self.get_my_id(message_obj)
if self.command_start_with(message, "nconf") or self.command_start_with(message, "newconf"):
return True, self.get_new_conf(message, role)
if self.command_start_with(message, "web"): # 网页搜索
return True, self.web_search(message)
if self.command_start_with(message, "keyword"):
return True, self.keyword(message_obj, role)
if self.command_start_with(message, "ip"):
ip = requests.get("https://myip.ipip.net", timeout=5).text
return True, (True, f"机器人 IP 信息:{ip}", "ip")
return False, None
def web_search(self, message):
if message == "web on":
self.global_object.web_search = True
return True, "已开启网页搜索", "web"
elif message == "web off":
self.global_object.web_search = False
return True, "已关闭网页搜索", "web"
return True, f"网页搜索功能当前状态: {self.global_object.web_search}", "web"
def get_my_id(self, message_obj):
return True, f"你的ID{str(message_obj.sender.tiny_id)}", "plugin"
def get_new_conf(self, message, role):
if role != "admin":
return False, f"你的身份组{role}没有权限使用此指令。", "newconf"
l = message.split(" ")
if len(l) <= 1:
obj = cc.get_all()
p = gu.create_text_image("【cmd_config.json】", json.dumps(obj, indent=4, ensure_ascii=False))
return True, [Image.fromFileSystem(p)], "newconf"
'''
插件指令
'''
def plugin_oper(self, message: str, role: str, cached_plugins: dict, platform: str):
if not has_git:
return False, "你正在运行在无Git环境下暂时将无法使用插件、热更新功能。", "plugin"
l = message.split(" ")
if len(l) < 2:
p = gu.create_text_image("【插件指令面板】", "安装插件: \nplugin i 插件Github地址\n卸载插件: \nplugin d 插件名 \n重载插件: \nplugin reload\n查看插件列表:\nplugin l\n更新插件: plugin u 插件名\n")
return True, [Image.fromFileSystem(p)], "plugin"
else:
if l[1] == "i":
if role != "admin":
return False, f"你的身份组{role}没有权限安装插件", "plugin"
try:
putil.install_plugin(l[2], cached_plugins)
return True, "插件拉取并载入成功~", "plugin"
except BaseException as e:
return False, f"拉取插件失败,原因: {str(e)}", "plugin"
elif l[1] == "d":
if role != "admin":
return False, f"你的身份组{role}没有权限删除插件", "plugin"
try:
putil.uninstall_plugin(l[2], cached_plugins)
return True, "插件卸载成功~", "plugin"
except BaseException as e:
return False, f"卸载插件失败,原因: {str(e)}", "plugin"
elif l[1] == "u":
try:
putil.update_plugin(l[2], cached_plugins)
return True, "\n更新插件成功!!", "plugin"
except BaseException as e:
return False, f"更新插件失败,原因: {str(e)}\n建议: 使用 plugin i 指令进行覆盖安装(插件数据可能会丢失)", "plugin"
elif l[1] == "l":
try:
plugin_list_info = "\n".join([f"{k}: \n名称: {v['info']['name']}\n简介: {v['info']['desc']}\n版本: {v['info']['version']}\n作者: {v['info']['author']}\n" for k, v in cached_plugins.items()])
p = gu.create_text_image("【已激活插件列表】", plugin_list_info + "\n使用plugin v 插件名 查看插件帮助\n")
return True, [Image.fromFileSystem(p)], "plugin"
except BaseException as e:
return False, f"获取插件列表失败,原因: {str(e)}", "plugin"
elif l[1] == "v":
try:
if l[2] in cached_plugins:
info = cached_plugins[l[2]]["info"]
p = gu.create_text_image(f"【插件信息】", f"名称: {info['name']}\n{info['desc']}\n版本: {info['version']}\n作者: {info['author']}\n\n帮助:\n{info['help']}")
return True, [Image.fromFileSystem(p)], "plugin"
else:
return False, "未找到该插件", "plugin"
except BaseException as e:
return False, f"获取插件信息失败,原因: {str(e)}", "plugin"
# elif l[1] == "reload":
# if role != "admin":
# return False, f"你的身份组{role}没有权限重载插件", "plugin"
# for plugin in cached_plugins:
# try:
# print(f"更新插件 {plugin} 依赖...")
# plugin_path = os.path.join(ppath, cached_plugins[plugin]["root_dir_name"])
# if os.path.exists(os.path.join(plugin_path, "requirements.txt")):
# mm = pipmain(['install', '-r', os.path.join(plugin_path, "requirements.txt"), "--quiet"])
# if mm != 0:
# return False, "插件依赖安装失败需要您手动pip安装对应插件的依赖。", "plugin"
# except BaseException as e:
# print(f"插件{plugin}依赖安装失败,原因: {str(e)}")
# try:
# ok, err = self.plugin_reload(cached_plugins, all = True)
# if ok:
# return True, "\n重载插件成功~", "plugin"
# else:
# # if os.path.exists(plugin_path):
# # shutil.rmtree(plugin_path)
# return False, f"插件重载失败。\n跟踪: \n{err}", "plugin"
# except BaseException as e:
# return False, f"插件重载失败,原因: {str(e)}", "plugin"
elif l[1] == "dev":
if role != "admin":
return False, f"你的身份组{role}没有权限开发者模式", "plugin"
return True, "cached_plugins: \n" + str(cached_plugins), "plugin"
'''
nick: 存储机器人的昵称
'''
def set_nick(self, message: str, platform: str, role: str = "member"):
if role != "admin":
return True, "你无权使用该指令 :P", "nick"
if platform == PLATFORM_GOCQ:
l = message.split(" ")
if len(l) == 1:
return True, "【设置机器人昵称】示例:\n支持多昵称\nnick 昵称1 昵称2 昵称3", "nick"
nick = l[1:]
cc.put("nick_qq", nick)
self.global_object.nick = tuple(nick)
return True, f"设置成功!现在你可以叫我这些昵称来提问我啦~", "nick"
elif platform == PLATFORM_QQCHAN:
nick = message.split(" ")[2]
return False, "QQ频道平台不支持为机器人设置昵称。", "nick"
def general_commands(self):
return {
"help": "帮助",
"keyword": "设置关键词/关键指令回复",
"update": "更新面板",
"update latest": "更新到最新版本",
"update r": "重启机器人",
"reset": "重置会话",
"nick": "设置机器人昵称",
"plugin": "插件安装、卸载和重载",
"web on/off": "启动或关闭网页搜索能力",
"/bing": "切换到bing模型",
"/gpt": "切换到OpenAI ChatGPT API",
"/revgpt": "切换到网页版ChatGPT",
}
def help_messager(self, commands: dict, platform: str, cached_plugins: dict = None):
try:
resp = requests.get("https://soulter.top/channelbot/notice.json").text
notice = json.loads(resp)["notice"]
except BaseException as e:
notice = ""
msg = "# Help Center\n## 指令列表\n"
# msg = "Github项目名QQChannelChatGPT, 有问题提交issue, 欢迎Star\n【指令列表】\n"
for key, value in commands.items():
msg += f"`{key}` - {value}\n"
# plugins
if cached_plugins != None:
plugin_list_info = "\n".join([f"`{k}` {v['info']['name']}\n{v['info']['desc']}\n" for k, v in cached_plugins.items()])
if plugin_list_info.strip() != "":
msg += "\n## 插件列表\n> 使用plugin v 插件名 查看插件帮助\n"
msg += plugin_list_info
msg += notice
try:
# p = gu.create_text_image("【Help Center】", msg)
p = gu.create_markdown_image(msg)
return [Image.fromFileSystem(p)]
except BaseException as e:
# 渲染帮助图片失败时退回纯文本帮助
gu.log(str(e))
return msg
# 接受可变参数
def command_start_with(self, message: str, *args):
for arg in args:
if message.startswith(arg) or message.startswith('/'+arg):
return True
return False
# keyword: 关键字
def keyword(self, message_obj, role: str):
if role != "admin":
return True, "你没有权限使用该指令", "keyword"
plain_text = ""
image_url = ""
for comp in message_obj.message:
if isinstance(comp, Plain):
plain_text += comp.text
elif isinstance(comp, Image) and image_url == "":
if comp.url is None:
image_url = comp.file
else:
image_url = comp.url
l = plain_text.split(" ")
if len(l) < 3 and image_url == "":
return True, """
【设置关键词回复】示例:
1. keyword hi 你好
当发送hi的时候会回复你好
2. keyword /hi 你好
当发送/hi时会回复你好
3. keyword d hi
删除hi关键词的回复
4. keyword hi <图片>
当发送hi时会回复图片
""", "keyword"
del_mode = False
if l[1] == "d":
del_mode = True
try:
if os.path.exists("keyword.json"):
with open("keyword.json", "r", encoding="utf-8") as f:
keyword = json.load(f)
if del_mode:
# 删除关键词
if l[2] not in keyword:
return False, "该关键词不存在", "keyword"
else: del keyword[l[2]]
else:
keyword[l[1]] = {
"plain_text": " ".join(l[2:]),
"image_url": image_url
}
else:
if del_mode:
return False, "该关键词不存在", "keyword"
keyword = {
l[1]: {
"plain_text": " ".join(l[2:]),
"image_url": image_url
}
}
with open("keyword.json", "w", encoding="utf-8") as f:
json.dump(keyword, f, ensure_ascii=False, indent=4)
f.flush()
if del_mode:
return True, "删除成功: "+l[2], "keyword"
if image_url == "":
return True, "设置成功: "+l[1]+" "+" ".join(l[2:]), "keyword"
else:
return True, [Plain("设置成功: "+l[1]+" "+" ".join(l[2:])), Image.fromURL(image_url)], "keyword"
except BaseException as e:
return False, "设置失败: "+str(e), "keyword"
def update(self, message: str, role: str):
if not has_git:
return False, "你正在运行在无Git环境下暂时将无法使用插件、热更新功能。", "update"
if role != "admin":
return True, "你没有权限使用该指令", "keyword"
l = message.split(" ")
try:
repo = Repo()
except git.exc.InvalidGitRepositoryError:
try:
repo = Repo(path="QQChannelChatGPT")
except git.exc.InvalidGitRepositoryError:
repo = Repo(path="AstrBot")
if len(l) == 1:
curr_branch = repo.active_branch.name
# 得到本地版本号和最新版本号
now_commit = repo.head.commit
# 得到远程3条commit列表, 包含commit信息
origin = repo.remotes.origin
origin.fetch()
commits = list(repo.iter_commits(curr_branch, max_count=3))
commits_log = ''
index = 1
for commit in commits:
if commit.message.endswith("\n"):
commits_log += f"[{index}] {commit.message}-----------\n"
else:
commits_log += f"[{index}] {commit.message}\n-----------\n"
index+=1
remote_commit_hash = origin.refs.master.commit.hexsha[:6]
return True, f"当前分支: {curr_branch}\n当前版本: {now_commit.hexsha[:6]}\n最新版本: {remote_commit_hash}\n\n3条commit(非最新):\n{str(commits_log)}\nTips:\n1. 使用 update latest 更新至最新版本;\n2. 使用 update checkout <分支名> 切换代码分支。", "update"
else:
if l[1] == "latest":
try:
origin = repo.remotes.origin
origin.fetch()
commits = list(repo.iter_commits('master', max_count=1))
commit_log = commits[0].message
tag = "update"
if len(l) == 3 and l[2] == "r":
tag = "update latest r"
return True, f"更新成功。新版本内容: \n{commit_log}\nps:重启后生效。输入update r重启重启指令不返回任何确认信息", tag
except BaseException as e:
return False, "更新失败: "+str(e), "update"
if l[1] == "r":
py = sys.executable
os.execl(py, py, *sys.argv)
if l[1] == 'checkout':
# 切换分支
if len(l) < 3:
return False, "请提供分支名,如 /update checkout dev_dashboard", "update"
try:
origin = repo.remotes.origin
origin.fetch()
repo.git.checkout(l[2])
# 获得最新的 commit
commits = list(repo.iter_commits(max_count=1))
commit_log = commits[0].message
return True, f"切换分支成功,机器人将在 5 秒内重新启动以应用新的功能。\n当前分支: {l[2]}\n此分支最近更新: \n{commit_log}", "update latest r"
except BaseException as e:
return False, f"切换分支失败。原因: {str(e)}", "update"
def reset(self):
return False
def set(self):
return False
def unset(self):
return False
def key(self):
return False
def help(self):
return False
def status(self):
return False
def token(self):
return False
def his(self):
return False
def draw(self):
return False
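For reference, a minimal sketch of the keyword.json layout that the keyword handler above reads and writes; the trigger word and reply are invented for illustration, only the field names and file name come from the code.
import json

# one entry per trigger word; image_url stays an empty string when no image was attached
sample = {
    "hi": {
        "plain_text": "你好",
        "image_url": ""
    }
}
with open("keyword.json", "w", encoding="utf-8") as f:
    json.dump(sample, f, ensure_ascii=False, indent=4)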


@@ -0,0 +1,287 @@
from model.command.command import Command
from model.provider.openai_official import ProviderOpenAIOfficial
from cores.qqbot.personality import personalities
from model.platform.qq import QQ
from util import general_utils as gu
from cores.qqbot.global_object import GlobalObject
class CommandOpenAIOfficial(Command):
def __init__(self, provider: ProviderOpenAIOfficial, global_object: GlobalObject):
self.provider = provider
self.cached_plugins = {}
self.global_object = global_object
self.personality_str = ""
super().__init__(provider, global_object)
def check_command(self,
message: str,
session_id: str,
role: str,
platform: str,
message_obj):
self.platform = platform
hit, res = super().check_command(
message,
session_id,
role,
platform,
message_obj
)
if hit:
return True, res
if self.command_start_with(message, "reset", "重置"):
return True, self.reset(session_id, message)
elif self.command_start_with(message, "his", "历史"):
return True, self.his(message, session_id)
elif self.command_start_with(message, "token"):
return True, self.token(session_id)
elif self.command_start_with(message, "gpt"):
return True, self.gpt()
elif self.command_start_with(message, "status"):
return True, self.status()
elif self.command_start_with(message, "count"):
return True, self.count()
elif self.command_start_with(message, "help", "帮助"):
return True, self.help()
elif self.command_start_with(message, "unset"):
return True, self.unset(session_id)
elif self.command_start_with(message, "set"):
return True, self.set(message, session_id)
elif self.command_start_with(message, "update"):
return True, self.update(message, role)
elif self.command_start_with(message, "画", "draw"):
return True, self.draw(message)
elif self.command_start_with(message, "key"):
return True, self.key(message)
elif self.command_start_with(message, "switch"):
return True, self.switch(message)
return False, None
def help(self):
commands = super().general_commands()
commands['画'] = '画画'
commands['key'] = '添加OpenAI key'
commands['set'] = '人格设置面板'
commands['gpt'] = '查看gpt配置信息'
commands['status'] = '查看key使用状态'
commands['token'] = '查看本轮会话token'
return True, super().help_messager(commands, self.platform, self.global_object.cached_plugins), "help"
def reset(self, session_id: str, message: str = "reset"):
if self.provider is None:
return False, "未启动OpenAI ChatGPT语言模型.", "reset"
l = message.split(" ")
if len(l) == 1:
self.provider.forget(session_id)
return True, "重置成功", "reset"
if len(l) == 2 and l[1] == "p":
self.provider.forget(session_id)
if self.personality_str != "":
self.set(self.personality_str, session_id) # 重新设置人格
return True, "重置成功", "reset"
def his(self, message: str, session_id: str):
if self.provider is None:
return False, "未启动OpenAI ChatGPT语言模型.", "his"
#分页每页5条
msg = ''
size_per_page = 3
page = 1
if message[4:]:
page = int(message[4:])
# 检查是否有过历史记录
if session_id not in self.provider.session_dict:
msg = f"历史记录为空"
return True, msg, "his"
l = self.provider.session_dict[session_id]
max_page = len(l)//size_per_page + 1 if len(l)%size_per_page != 0 else len(l)//size_per_page
p = self.provider.get_prompts_by_cache_list(self.provider.session_dict[session_id], divide=True, paging=True, size=size_per_page, page=page)
return True, f"历史记录如下:\n{p}\n{page}页 | 共{max_page}\n*输入/his 2跳转到第2页", "his"
def token(self, session_id: str):
if self.provider is None:
return False, "未启动OpenAI ChatGPT语言模型.", "token"
return True, f"会话的token数: {self.provider.get_user_usage_tokens(self.provider.session_dict[session_id])}\n系统最大缓存token数: {self.provider.max_tokens}", "token"
def gpt(self):
if self.provider is None:
return False, "未启动OpenAI ChatGPT语言模型.", "gpt"
return True, f"OpenAI GPT配置:\n {self.provider.chatGPT_configs}", "gpt"
def status(self):
if self.provider is None:
return False, "未启动OpenAI ChatGPT语言模型.", "status"
chatgpt_cfg_str = ""
key_stat = self.provider.get_key_stat()
index = 1
max = 9000000
gg_count = 0
total = 0
tag = ''
for key in key_stat.keys():
sponsor = ''
total += key_stat[key]['used']
if key_stat[key]['exceed']:
gg_count += 1
continue
if 'sponsor' in key_stat[key]:
sponsor = key_stat[key]['sponsor']
chatgpt_cfg_str += f" |-{index}: {key[-8:]} {key_stat[key]['used']}/{max} {sponsor}{tag}\n"
index += 1
return True, f"⭐使用情况({str(gg_count)}个已用):\n{chatgpt_cfg_str}", "status"
def count(self):
if self.provider is None:
return False, "未启动OpenAI ChatGPT语言模型。", "reset"
guild_count, guild_msg_count, guild_direct_msg_count, session_count = self.provider.get_stat()
return True, f"【本指令部分统计可能已经过时】\n当前会话数: {len(self.provider.session_dict)}\n共有频道数: {guild_count} \n共有消息数: {guild_msg_count}\n私信数: {guild_direct_msg_count}\n历史会话数: {session_count}", "count"
def key(self, message: str):
if self.provider is None:
return False, "未启动OpenAI ChatGPT语言模型.", "reset"
l = message.split(" ")
if len(l) == 1:
msg = "感谢您赞助keykey为官方API使用请以以下格式赞助:\n/key xxxxx"
return True, msg, "key"
key = l[1]
if self.provider.check_key(key):
self.provider.append_key(key)
return True, f"*★,°*:.☆( ̄▽ ̄)/$:*.°★* 。\n该Key被验证为有效。感谢你的赞助~"
else:
return True, "该Key被验证为无效。也许是输入错误了或者重试。", "key"
def switch(self, message: str):
'''
切换账号
'''
l = message.split(" ")
if len(l) == 1:
_, ret, _ = self.status()
curr_ = self.provider.get_curr_key()
if curr_ is None:
ret += "当前您未选择账号。输入/switch <账号序号>切换账号。"
else:
ret += f"当前您选择的账号为:{curr_[-8:]}。输入/switch <账号序号>切换账号。"
return True, ret, "switch"
elif len(l) == 2:
try:
key_stat = self.provider.get_key_stat()
index = int(l[1])
if index > len(key_stat) or index < 1:
return True, "账号序号不合法。", "switch"
else:
try:
new_key = list(key_stat.keys())[index-1]
ret = self.provider.check_key(new_key)
self.provider.set_key(new_key)
except BaseException as e:
return True, "账号切换失败,原因: " + str(e), "switch"
return True, f"账号切换成功。", "switch"
except BaseException as e:
return True, "未知错误: "+str(e), "switch"
else:
return True, "参数过多。", "switch"
def unset(self, session_id: str):
if self.provider is None:
return False, "未启动OpenAI ChatGPT语言模型.", "unset"
self.provider.curr_personality = {}
self.provider.forget(session_id)
return True, "已清除人格并重置历史记录。", "unset"
def set(self, message: str, session_id: str):
if self.provider is None:
return False, "未启动OpenAI ChatGPT语言模型.", "set"
l = message.split(" ")
if len(l) == 1:
return True, f"【人格文本由PlexPt开源项目awesome-chatgpt-pr \
ompts-zh提供】\n设置人格: \n/set 人格名。例如/set 编剧\n人格列表: /set list\n人格详细信息: \
/set view 人格名\n自定义人格: /set 人格文本\n重置会话(清除人格): /reset\n重置会话(保留人格): /reset p\n【当前人格】: {str(self.provider.curr_personality)}", "set"
elif l[1] == "list":
msg = "人格列表:\n"
for key in personalities.keys():
msg += f" |-{key}\n"
msg += '\n\n*输入/set view 人格名查看人格详细信息'
msg += '\n*不定时更新人格库,请及时更新本项目。'
return True, msg, "set"
elif l[1] == "view":
if len(l) == 2:
return True, "请输入/set view 人格名", "set"
ps = l[2].strip()
if ps in personalities:
msg = f"人格{ps}的详细信息:\n"
msg += f"{personalities[ps]}\n"
else:
msg = f"人格{ps}不存在"
return True, msg, "set"
else:
ps = l[1].strip()
if ps in personalities:
self.provider.curr_personality = {
'name': ps,
'prompt': personalities[ps]
}
self.provider.session_dict[session_id] = []
new_record = {
"user": {
"role": "user",
"content": personalities[ps],
},
"AI": {
"role": "assistant",
"content": "好的,接下来我会扮演这个角色。"
},
'type': "personality",
'usage_tokens': 0,
'single_tokens': 0
}
self.provider.session_dict[session_id].append(new_record)
self.personality_str = message
return True, f"人格{ps}已设置。", "set"
else:
self.provider.curr_personality = {
'name': '自定义人格',
'prompt': ps
}
new_record = {
"user": {
"role": "user",
"content": ps,
},
"AI": {
"role": "assistant",
"content": "好的,接下来我会扮演这个角色。"
},
'type': "personality",
'usage_tokens': 0,
'single_tokens': 0
}
self.provider.session_dict[session_id] = []
self.provider.session_dict[session_id].append(new_record)
self.personality_str = message
return True, f"自定义人格已设置。 \n人格信息: {ps}", "set"
def draw(self, message):
if self.provider is None:
return False, "未启动OpenAI ChatGPT语言模型.", "draw"
if message.startswith("/画"):
message = message[2:]
elif message.startswith("画"):
message = message[1:]
try:
# 画图模式传回3个参数
img_url = self.provider.image_chat(message)
return True, img_url, "draw"
except Exception as e:
if 'exceeded' in str(e):
return f"OpenAI API错误。原因\n{str(e)} \n超额了。可自己搭建一个机器人(Github仓库QQChannelChatGPT)"
return False, f"图片生成失败: {e}", "draw"


@@ -0,0 +1,134 @@
from model.command.command import Command
from model.provider.rev_chatgpt import ProviderRevChatGPT
from model.platform.qq import QQ
from cores.qqbot.personality import personalities
from cores.qqbot.global_object import GlobalObject
class CommandRevChatGPT(Command):
def __init__(self, provider: ProviderRevChatGPT, global_object: GlobalObject):
self.provider = provider
self.cached_plugins = {}
self.global_object = global_object
self.personality_str = ""
super().__init__(provider, global_object)
def check_command(self,
message: str,
session_id: str,
role: str,
platform: str,
message_obj):
self.platform = platform
hit, res = super().check_command(
message,
session_id,
role,
platform,
message_obj
)
if hit:
return True, res
if self.command_start_with(message, "help", "帮助"):
return True, self.help()
elif self.command_start_with(message, "reset"):
return True, self.reset(session_id, message)
elif self.command_start_with(message, "update"):
return True, self.update(message, role)
elif self.command_start_with(message, "set"):
return True, self.set(message, session_id)
elif self.command_start_with(message, "switch"):
return True, self.switch(message, session_id)
return False, None
def reset(self, session_id, message: str):
l = message.split(" ")
if len(l) == 1:
self.provider.forget(session_id)
return True, "重置完毕。", "reset"
if len(l) == 2 and l[1] == "p":
self.provider.forget(session_id)
ret = self.provider.text_chat(self.personality_str, session_id)
return True, f"重置完毕(保留人格)。\n\n{ret}", "reset"
def set(self, message: str, session_id: str):
l = message.split(" ")
if len(l) == 1:
return True, f"设置人格: \n/set 人格名或人格文本。例如/set 编剧\n人格列表: /set list\n人格详细信息: \
/set view 人格名\n重置会话(清除人格): /reset\n重置会话(保留人格): /reset p", "set"
elif l[1] == "list":
msg = "人格列表:\n"
for key in personalities.keys():
msg += f" |-{key}\n"
msg += '\n\n*输入/set view 人格名查看人格详细信息'
msg += '\n*不定时更新人格库,请及时更新本项目。'
return True, msg, "set"
elif l[1] == "view":
if len(l) == 2:
return True, "请输入/set view 人格名", "set"
ps = l[2].strip()
if ps in personalities:
msg = f"人格【{ps}】详细信息:\n"
msg += f"{personalities[ps]}\n"
else:
msg = f"人格【{ps}】不存在。"
return True, msg, "set"
else:
ps = l[1].strip()
if ps in personalities:
self.reset(session_id, "reset")
self.personality_str = personalities[ps]
ret = self.provider.text_chat(self.personality_str, session_id)
return True, f"人格【{ps}】已设置。\n\n{ret}", "set"
else:
self.reset(session_id, "reset")
self.personality_str = ps
ret = self.provider.text_chat(ps, session_id)
return True, f"人格信息已设置。\n\n{ret}", "set"
def switch(self, message: str, session_id: str):
'''
切换账号
'''
l = message.split(" ")
rev_chatgpt = self.provider.get_revchatgpt()
if len(l) == 1:
ret = "当前账号:\n"
index = 0
curr_ = None
for revstat in rev_chatgpt:
index += 1
ret += f"[{index}]. {revstat['id']}\n"
# if session_id in revstat['user']:
# curr_ = revstat['id']
for user in revstat['user']:
if session_id == user['id']:
curr_ = revstat['id']
break
if curr_ is None:
ret += "当前您未选择账号。输入/switch <账号序号>切换账号。"
else:
ret += f"当前您选择的账号为:{curr_}。输入/switch <账号序号>切换账号。"
return True, ret, "switch"
elif len(l) == 2:
try:
index = int(l[1])
if index > len(self.provider.rev_chatgpt) or index < 1:
return True, "账号序号不合法。", "switch"
else:
# 先把该会话从当前绑定的账号上摘除
for revstat in self.provider.rev_chatgpt:
revstat['user'] = [u for u in revstat['user'] if u['id'] != session_id]
# 再绑定到新账号(与 provider 中 user 条目的结构保持一致)
self.provider.rev_chatgpt[index - 1]['user'].append({
'id': session_id,
'conversation_id': '',
'parent_id': ''
})
return True, f"切换账号成功。当前账号为:{self.provider.rev_chatgpt[index - 1]['id']}", "switch"
except BaseException:
return True, "账号序号不合法。", "switch"
else:
return True, "参数过多。", "switch"
def help(self):
commands = super().general_commands()
commands['set'] = '设置人格'
return True, super().help_messager(commands, self.platform, self.global_object.cached_plugins), "help"


@@ -0,0 +1,52 @@
from model.command.command import Command
from model.provider.rev_edgegpt import ProviderRevEdgeGPT
import asyncio
from model.platform.qq import QQ
from cores.qqbot.global_object import GlobalObject
class CommandRevEdgeGPT(Command):
def __init__(self, provider: ProviderRevEdgeGPT, global_object: GlobalObject):
self.provider = provider
self.cached_plugins = {}
self.global_object = global_object
super().__init__(provider, global_object)
def check_command(self,
message: str,
session_id: str,
role: str,
platform: str,
message_obj):
self.platform = platform
hit, res = super().check_command(
message,
session_id,
role,
platform,
message_obj
)
if hit:
return True, res
if self.command_start_with(message, "reset"):
return True, self.reset()
elif self.command_start_with(message, "help"):
return True, self.help()
elif self.command_start_with(message, "update"):
return True, self.update(message, role)
return False, None
def reset(self, loop = None):
if self.provider is None:
return False, "未启动Bing语言模型.", "reset"
res = asyncio.run_coroutine_threadsafe(self.provider.forget(), loop).result()
print(res)
if res:
return res, "重置成功", "reset"
else:
return res, "重置失败", "reset"
def help(self):
return True, super().help_messager(super().general_commands(), self.platform, self.global_object.cached_plugins), "help"

model/platform/qq.py Normal file

@@ -0,0 +1,190 @@
from nakuru.entities.components import Plain, At, Image, Node
from util import general_utils as gu
from util.cmd_config import CmdConfig
import asyncio
from nakuru import (
CQHTTP,
GuildMessage,
GroupMessage,
FriendMessage
)
from typing import Union
import time
class FakeSource:
def __init__(self, type, group_id):
self.type = type
self.group_id = group_id
class QQ:
def __init__(self, is_start: bool, cc: CmdConfig = None, gocq_loop = None) -> None:
self.is_start = is_start
self.gocq_loop = gocq_loop
self.cc = cc
self.waiting = {}
self.gocq_cnt = 0
def run_bot(self, gocq):
self.client: CQHTTP = gocq
self.client.run()
def get_msg_loop(self):
return self.gocq_loop
def get_cnt(self):
return self.gocq_cnt
def set_cnt(self, cnt):
self.gocq_cnt = cnt
async def send_qq_msg(self,
source,
res,
image_mode=None):
self.gocq_cnt += 1
if not self.is_start:
raise Exception("管理员未启动GOCQ平台")
"""
res可以是一个数组, 也就是gocq的消息链。
插件开发者请使用send方法, 可以不用直接调用这个方法。
"""
gu.log("回复GOCQ消息: "+str(res), level=gu.LEVEL_INFO, tag="GOCQ", max_len=300)
if isinstance(source, int):
source = FakeSource("GroupMessage", source)
# str convert to CQ Message Chain
if isinstance(res, str):
res_str = res
res = []
if source.type == "GroupMessage" and not isinstance(source, FakeSource):
res.append(At(qq=source.user_id))
res.append(Plain(text=res_str))
# if image mode, put all Plain texts into a new picture.
if image_mode is None:
image_mode = self.cc.get('qq_pic_mode', False)
if image_mode and isinstance(res, list):
plains = []
news = []
for i in res:
if isinstance(i, Plain):
plains.append(i.text)
else:
news.append(i)
plains_str = "".join(plains).strip()
if plains_str != "" and len(plains_str) > 50:
p = gu.create_markdown_image("".join(plains))
news.append(Image.fromFileSystem(p))
res = news
# 回复消息链
if isinstance(res, list) and len(res) > 0:
if source.type == "GuildMessage":
await self.client.sendGuildChannelMessage(source.guild_id, source.channel_id, res)
return
elif source.type == "FriendMessage":
await self.client.sendFriendMessage(source.user_id, res)
return
elif source.type == "GroupMessage":
# 过长时forward发送
plain_text_len = 0
image_num = 0
for i in res:
if isinstance(i, Plain):
plain_text_len += len(i.text)
elif isinstance(i, Image):
image_num += 1
if plain_text_len > self.cc.get('qq_forward_threshold', 200):
# 删除At
for i in res:
if isinstance(i, At):
res.remove(i)
node = Node(res)
# node.content = res
node.uin = 123456
node.name = f"bot"
node.time = int(time.time())
# print(node)
nodes=[node]
await self.client.sendGroupForwardMessage(source.group_id, nodes)
return
await self.client.sendGroupMessage(source.group_id, res)
return
def send(self,
to,
res,
image_mode=False,
):
'''
提供给插件的发送QQ消息接口, 不用在外部await。
参数说明第一个参数可以是消息对象也可以是QQ群号。第二个参数是消息内容消息内容可以是消息链列表也可以是纯文字信息
第三个参数是是否开启图片模式,如果开启,那么所有纯文字信息都会被合并成一张图片。
'''
try:
asyncio.run_coroutine_threadsafe(self.send_qq_msg(to, res, image_mode), self.gocq_loop).result()
except BaseException as e:
raise e
def send_guild(self,
message_obj,
res,
):
'''
提供给插件的发送GOCQ QQ频道消息接口, 不用在外部await。
参数说明:第一个参数必须是消息对象, 第二个参数是消息内容(消息内容可以是消息链列表,也可以是纯文字信息)。
'''
try:
asyncio.run_coroutine_threadsafe(self.send_qq_msg(message_obj, res), self.gocq_loop).result()
except BaseException as e:
raise e
def create_text_image(self, title: str, text: str, max_width=30, font_size=20):
'''
文本转图片。
title: 标题
text: 文本内容
max_width: 文本宽度最大值默认30
font_size: 字体大小默认20
返回:文件路径
'''
try:
img = gu.word2img(title, text, max_width, font_size)
p = gu.save_temp_img(img)
return p
except Exception as e:
raise e
def wait_for_message(self, group_id) -> Union[GroupMessage, FriendMessage, GuildMessage]:
'''
等待下一条消息,超时 300s 后抛出异常
'''
self.waiting[group_id] = ''
cnt = 0
while True:
if group_id in self.waiting and self.waiting[group_id] != '':
# 去掉
ret = self.waiting[group_id]
del self.waiting[group_id]
return ret
cnt += 1
if cnt > 300:
raise Exception("等待消息超时。")
time.sleep(1)
def get_client(self):
return self.client
def nakuru_method_invoker(self, func, *args, **kwargs):
"""
返回一个方法调用器可以用来立即调用nakuru的方法。
"""
try:
ret = asyncio.run_coroutine_threadsafe(func(*args, **kwargs), self.gocq_loop).result()
return ret
except BaseException as e:
raise e
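A minimal usage sketch of the QQ adapter above from a plugin's point of view, assuming the bot is already running with a GOCQ event loop attached; qq_platform, the group number and the file path are placeholders.
from nakuru.entities.components import Plain, Image

# qq_platform is the QQ instance the bot creates at startup
qq_platform.send(123456789, [Plain("来自插件的消息"), Image.fromFileSystem("demo.png")])

# block until the next message arrives in that group; raises after 300 s
next_msg = qq_platform.wait_for_message(123456789)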

model/platform/qqchan.py Normal file

@@ -0,0 +1,217 @@
import io
import botpy
from PIL import Image as PILImage
from botpy.message import Message, DirectMessage
import re
import asyncio
import requests
from cores.qqbot.personality import personalities
from util import general_utils as gu
from nakuru.entities.components import Plain, At, Image
from botpy.types.message import Reference
from botpy import Client
import time
class NakuruGuildMember():
tiny_id: int # 发送者识别号
user_id: int # 发送者识别号
title: str
nickname: str # 昵称
role: int # 角色
icon_url: str # 头像url
class NakuruGuildMessage():
type: str = "GuildMessage"
self_id: int # bot的qq号
self_tiny_id: int # bot的qq号
sub_type: str # 消息类型
message_id: str # 消息id
guild_id: int # 频道号
channel_id: int # 子频道号
user_id: int # 发送者qq号
message: list # 消息内容
sender: NakuruGuildMember # 发送者信息
raw_message: Message = None
def __str__(self) -> str:
return str(self.__dict__)
class QQChan():
def __init__(self, cnt: dict = None) -> None:
self.qqchan_cnt = 0
self.waiting: dict = {}
def get_cnt(self):
return self.qqchan_cnt
def set_cnt(self, cnt):
self.qqchan_cnt = cnt
def run_bot(self, botclient: Client, appid, token):
intents = botpy.Intents(public_guild_messages=True, direct_message=True)
self.client = botclient
self.client.run(appid=appid, token=token)
# gocq-频道SDK兼容层
def gocq_compatible_send(self, gocq_message_chain: list):
plain_text = ""
image_path = None # only one img supported
for i in gocq_message_chain:
if isinstance(i, Plain):
plain_text += i.text
elif isinstance(i, Image) and image_path == None:
if i.path is not None:
image_path = i.path
else:
image_path = i.file
return plain_text, image_path
# gocq-频道SDK兼容层
def gocq_compatible_receive(self, message: Message) -> NakuruGuildMessage:
ngm = NakuruGuildMessage()
try:
ngm.self_id = message.mentions[0].id
ngm.self_tiny_id = message.mentions[0].id
except:
ngm.self_id = 0
ngm.self_tiny_id = 0
ngm.sub_type = "normal"
ngm.message_id = message.id
ngm.guild_id = int(message.guild_id)
ngm.channel_id = int(message.channel_id)
ngm.user_id = int(message.author.id)
msg = []
plain_content = message.content.replace("<@!"+str(ngm.self_id)+">", "").strip()
msg.append(Plain(plain_content))
if message.attachments:
for i in message.attachments:
if i.content_type.startswith("image"):
url = i.url
if not url.startswith("http"):
url = "https://"+url
img = Image.fromURL(url)
msg.append(img)
ngm.message = msg
ngm.sender = NakuruGuildMember()
ngm.sender.tiny_id = int(message.author.id)
ngm.sender.user_id = int(message.author.id)
ngm.sender.title = ""
ngm.sender.nickname = message.author.username
ngm.sender.role = 0
ngm.sender.icon_url = message.author.avatar
ngm.raw_message = message
return ngm
def send_qq_msg(self,
message: NakuruGuildMessage,
res: list):
'''
回复频道消息
'''
gu.log("回复QQ频道消息: "+str(res), level=gu.LEVEL_INFO, tag="QQ频道", max_len=500)
self.qqchan_cnt += 1
plain_text = ""
image_path = None
if isinstance(res, list):
# 兼容gocq
plain_text, image_path = self.gocq_compatible_send(res)
elif isinstance(res, str):
plain_text = res
# print(plain_text, image_path)
msg_ref = None
if message.raw_message is not None:
msg_ref = Reference(message_id=message.raw_message.id, ignore_get_message_error=False)
if image_path is not None:
msg_ref = None
if image_path.startswith("http"):
pic_res = requests.get(image_path, stream = True)
if pic_res.status_code == 200:
image = PILImage.open(io.BytesIO(pic_res.content))
image_path = gu.save_temp_img(image)
try:
# reply_res = asyncio.run_coroutine_threadsafe(message.raw_message.reply(content=str(plain_text), message_reference = msg_ref, file_image=image_path), self.client.loop)
reply_res = asyncio.run_coroutine_threadsafe(self.client.api.post_message(channel_id=str(message.channel_id),
content=str(plain_text),
msg_id=message.message_id,
file_image=image_path,
message_reference=msg_ref), self.client.loop)
reply_res.result()
except BaseException as e:
# 分割过长的消息
if "msg over length" in str(e):
split_res = []
split_res.append(plain_text[:len(plain_text)//2])
split_res.append(plain_text[len(plain_text)//2:])
for i in split_res:
reply_res = asyncio.run_coroutine_threadsafe(self.client.api.post_message(channel_id=str(message.channel_id),
content=str(i),
msg_id=message.message_id,
file_image=image_path,
message_reference=msg_ref), self.client.loop)
reply_res.result()
else:
# 发送qq信息
try:
# 防止被qq频道过滤消息
plain_text = plain_text.replace(".", " . ")
reply_res = asyncio.run_coroutine_threadsafe(self.client.api.post_message(channel_id=str(message.channel_id),
content=str(plain_text),
msg_id=message.message_id,
file_image=image_path,
message_reference=msg_ref), self.client.loop).result() # 发送信息
except BaseException as e:
print("QQ频道API错误: \n"+str(e))
try:
# reply_res = asyncio.run_coroutine_threadsafe(message.raw_message.reply(content=str(str.join(" ", plain_text)), message_reference = msg_ref, file_image=image_path), self.client.loop)
reply_res = asyncio.run_coroutine_threadsafe(self.client.api.post_message(channel_id=str(message.channel_id),
content=str(str.join(" ", plain_text)),
msg_id=message.message_id,
file_image=image_path,
message_reference=msg_ref), self.client.loop).result()
except BaseException as e:
plain_text = re.sub(r'(https|http)?:\/\/(\w|\.|\/|\?|\=|\&|\%)*\b', '[被隐藏的链接]', str(e), flags=re.MULTILINE)
plain_text = plain_text.replace(".", "·")
reply_res = asyncio.run_coroutine_threadsafe(self.client.api.post_message(channel_id=str(message.channel_id),
content=plain_text,
msg_id=message.message_id,
file_image=image_path,
message_reference=msg_ref), self.client.loop).result()
# send(message, f"QQ频道API错误{str(e)}\n下面是格式化后的回答\n{f_res}")
def push_message(self, channel_id: int, message_chain: list, message_id: int = None):
'''
推送消息, 如果有 message_id那么就是回复消息。
'''
_n = NakuruGuildMessage()
_n.channel_id = channel_id
_n.message_id = message_id
self.send_qq_msg(_n, message_chain)
def send(self, message_obj, message_chain: list):
'''
发送信息
'''
self.send_qq_msg(message_obj, message_chain)
def wait_for_message(self, channel_id: int) -> NakuruGuildMessage:
'''
等待指定 channel_id 的下一条信息,超时 300s 后抛出异常
'''
self.waiting[channel_id] = ''
cnt = 0
while True:
if channel_id in self.waiting and self.waiting[channel_id] != '':
# 去掉
ret = self.waiting[channel_id]
del self.waiting[channel_id]
return ret
cnt += 1
if cnt > 300:
raise Exception("等待消息超时。")
time.sleep(1)
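A small sketch of the gocq compatibility layer above: gocq_compatible_send flattens a nakuru-style message chain into the plain text plus the single image path that the QQ-channel API call expects. The chain contents are illustrative.
from nakuru.entities.components import Plain, Image

qqchan = QQChan()
chain = [Plain("一段说明文字"), Image.fromFileSystem("demo.png")]
plain_text, image_path = qqchan.gocq_compatible_send(chain)
# plain_text == "一段说明文字"; image_path points at the first image in the chain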

model/platform/qqgroup.py Normal file

@@ -0,0 +1,188 @@
import requests
import asyncio
import websockets
from websockets import WebSocketClientProtocol
import json
import inspect
from typing import Callable, Awaitable, Union
from pydantic import BaseModel
import datetime
class Event(BaseModel):
GroupMessage: str = "GuildMessage"
class Sender(BaseModel):
user_id: str
member_openid: str
class MessageComponent(BaseModel):
type: str
class PlainText(MessageComponent):
text: str
class Image(MessageComponent):
path: str
file: str
url: str
class MessageChain(list):
def append(self, __object: MessageComponent) -> None:
if not isinstance(__object, MessageComponent):
raise TypeError("不受支持的消息链元素类型。回复的消息链必须是 MessageComponent 的子类。")
return super().append(__object)
def insert(self, __index: int, __object: MessageComponent) -> None:
if not isinstance(__object, MessageComponent):
raise TypeError("不受支持的消息链元素类型。回复的消息链必须是 MessageComponent 的子类。")
return super().insert(__index, __object)
def parse_from_nakuru(self, nakuru_message_chain: Union[list, str]) -> None:
if isinstance(nakuru_message_chain, str):
self.append(PlainText(type='Plain', text=nakuru_message_chain))
else:
for i in nakuru_message_chain:
if i['type'] == 'Plain':
self.append(PlainText(type='Plain', text=i['text']))
elif i['type'] == 'Image':
self.append(Image(path=i['path'], file=i['file'], url=i['url']))
class Message(BaseModel):
type: str
user_id: str
member_openid: str
message_id: str
group_id: str
group_openid: str
content: str
message: MessageChain
time: int
sender: Sender
class UnofficialQQBotSDK:
GET_APP_ACCESS_TOKEN_URL = "https://bots.qq.com/app/getAppAccessToken"
OPENAPI_BASE_URL = "https://api.sgroup.qq.com"
def __init__(self, appid: str, client_secret: str) -> None:
self.appid = appid
self.client_secret = client_secret
self.events: dict[str, Awaitable] = {}
def run_bot(self) -> None:
self.__get_access_token()
self.__get_wss_endpoint()
asyncio.get_event_loop().run_until_complete(self.__ws_client())
def __get_access_token(self) -> None:
res = requests.post(self.GET_APP_ACCESS_TOKEN_URL, json={
"appId": self.appid,
"clientSecret": self.client_secret
}, headers={
"Content-Type": "application/json"
})
res = res.json()
code = res['code'] if 'code' in res else 1
if 'access_token' not in res:
raise Exception(f"获取 access_token 失败。原因:{res['message'] if 'message' in res else '未知'}")
self.access_token = 'QQBot ' + res['access_token']
def __auth_header(self) -> str:
return {
'Authorization': self.access_token,
'X-Union-Appid': self.appid,
}
def __get_wss_endpoint(self):
res = requests.get(self.OPENAPI_BASE_URL + "/gateway", headers=self.__auth_header())
self.wss_endpoint = res.json()['url']
# print("wss_endpoint: " + self.wss_endpoint)
async def __behav_heartbeat(self, ws: WebSocketClientProtocol, t: int):
while True:
await asyncio.sleep(t - 1)
try:
await ws.send(json.dumps({
"op": 1,
"d": self.s
}))
except:
print("heartbeat error.")
async def __handle_msg(self, ws: WebSocketClientProtocol, msg: dict):
if msg['op'] == 10:
asyncio.get_event_loop().create_task(self.__behav_heartbeat(ws, msg['d']['heartbeat_interval'] / 1000))
# 鉴权获得session
await ws.send(json.dumps({
"op": 2,
"d": {
"token": self.access_token,
"intents": 33554432,
"shard": [0, 1],
"properties": {
"$os": "linux",
"$browser": "my_library",
"$device": "my_library"
}
}
}))
if msg['op'] == 0:
# ready
data = msg['d']
event_typ: str = msg['t'] if 't' in msg else None
if event_typ == 'GROUP_AT_MESSAGE_CREATE':
if 'GroupMessage' in self.events:
coro = self.events['GroupMessage']
else:
return
message_chain = MessageChain()
message_chain.append(PlainText(type="Plain", text=data['content']))
group_message = Message(
type='GroupMessage',
user_id=data['author']['id'],
member_openid=data['author']['member_openid'],
message_id=data['id'],
group_id=data['group_id'],
group_openid=data['group_openid'],
content=data['content'],
# 2023-11-24T19:51:11+08:00
time=int(datetime.datetime.strptime(data['timestamp'], "%Y-%m-%dT%H:%M:%S%z").timestamp()),
sender=Sender(
user_id=data['author']['id'],
member_openid=data['author']['member_openid']
),
message=message_chain
)
await coro(self, group_message)
async def send(self, message: Message, message_chain: MessageChain) -> None:
# todo: 消息链转换支持更多类型。
plain_text = ""
for i in message_chain:
if isinstance(i, PlainText):
plain_text += i.text
requests.post(self.OPENAPI_BASE_URL + f"/v2/groups/{message.group_openid}/messages", headers=self.__auth_header(), json={
"content": plain_text,
"message_type": 0,
"msg_id": message.message_id
})
async def __ws_client(self):
self.s = 0
async with websockets.connect(self.wss_endpoint) as websocket:
while True:
msg = await websocket.recv()
msg = json.loads(msg)
if 's' in msg:
self.s = msg['s']
await self.__handle_msg(websocket, msg)
def on(self, event: str) -> None:
def wrapper(func: Awaitable):
if not inspect.iscoroutinefunction(func):
raise TypeError("func must be a coroutine function")
self.events[event] = func
return func
return wrapper
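A hedged usage sketch of the unofficial group-bot SDK above; the appid and client_secret are placeholders, and the handler signature mirrors the (sdk, message) pair that __handle_msg awaits.
from model.platform.qqgroup import UnofficialQQBotSDK, Message, MessageChain, PlainText

bot = UnofficialQQBotSDK(appid="102000000", client_secret="your-client-secret")

@bot.on("GroupMessage")
async def on_group_message(sdk: UnofficialQQBotSDK, message: Message):
    reply = MessageChain()
    reply.append(PlainText(type="Plain", text="pong: " + message.content))
    await sdk.send(message, reply)

bot.run_bot()  # fetches the access token and wss gateway, then blocks listening for events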


@@ -0,0 +1,472 @@
from openai import OpenAI
from openai.types.chat.chat_completion import ChatCompletion
from openai.types.images_response import ImagesResponse
import json
import time
import os
import sys
from cores.database.conn import dbConn
from model.provider.provider import Provider
import threading
from util import general_utils as gu
from util.cmd_config import CmdConfig
import traceback
import tiktoken
abs_path = os.path.dirname(os.path.realpath(sys.argv[0])) + '/'
class ProviderOpenAIOfficial(Provider):
def __init__(self, cfg):
self.cc = CmdConfig()
self.key_list = []
# 如果 cfg['key']中有长度为1的字符串那么是格式错误直接报错
for key in cfg['key']:
if len(key) == 1:
input("检查到了长度为 1 的Key。配置文件中的 openai.key 处的格式错误 (符号 - 的后面要加空格),请退出程序并检查配置文件,按回车跳过。")
raise BaseException("配置文件格式错误")
if cfg['key'] != '' and cfg['key'] != None:
self.key_list = cfg['key']
else:
input("[System] 请先去完善ChatGPT的Key。详情请前往https://beta.openai.com/account/api-keys")
if len(self.key_list) == 0:
raise Exception("您打开了 OpenAI 模型服务,但是未填写 key。请前往填写。")
self.key_stat = {}
for k in self.key_list:
self.key_stat[k] = {'exceed': False, 'used': 0}
self.api_base = None
if 'api_base' in cfg and cfg['api_base'] != 'none' and cfg['api_base'] != '':
self.api_base = cfg['api_base']
gu.log(f"设置 api_base 为: {self.api_base}")
# openai client
self.client = OpenAI(
api_key=self.key_list[0],
base_url=self.api_base
)
self.openai_model_configs: dict = cfg['chatGPTConfigs']
gu.log(f'加载 OpenAI Chat Configs: {self.openai_model_configs}')
self.openai_configs = cfg
# 会话缓存
self.session_dict = {}
# 最大缓存token
self.max_tokens = cfg['total_tokens_limit']
# 历史记录持久化间隔时间
self.history_dump_interval = 20
self.enc = tiktoken.get_encoding("cl100k_base")
# 读取历史记录
try:
db1 = dbConn()
for session in db1.get_all_session():
self.session_dict[session[0]] = json.loads(session[1])['data']
gu.log("读取历史记录成功。")
except BaseException as e:
gu.log("读取历史记录失败,但不影响使用。", level=gu.LEVEL_ERROR)
# 读取统计信息
if not os.path.exists(abs_path+"configs/stat"):
with open(abs_path+"configs/stat", 'w', encoding='utf-8') as f:
json.dump({}, f)
self.stat_file = open(abs_path+"configs/stat", 'r', encoding='utf-8')
global count
res = self.stat_file.read()
if res == '':
count = {}
else:
try:
count = json.loads(res)
except BaseException:
pass
# 创建转储定时器线程
threading.Thread(target=self.dump_history, daemon=True).start()
# 人格
self.curr_personality = {}
# 转储历史记录
def dump_history(self):
time.sleep(10)
db = dbConn()
while True:
try:
# print("转储历史记录...")
for key in self.session_dict:
# print("TEST: "+str(db.get_session(key)))
data = self.session_dict[key]
data_json = {
'data': data
}
if db.check_session(key):
db.update_session(key, json.dumps(data_json))
else:
db.insert_session(key, json.dumps(data_json))
# print("转储历史记录完毕")
except BaseException as e:
print(e)
# 每隔 10 * history_dump_interval 秒转储一次
time.sleep(10*self.history_dump_interval)
def personality_set(self, default_personality: dict, session_id: str):
self.curr_personality = default_personality
new_record = {
"user": {
"role": "user",
"content": default_personality['prompt'],
},
"AI": {
"role": "assistant",
"content": "好的,接下来我会扮演这个角色。"
},
'type': "personality",
'usage_tokens': 0,
'single_tokens': 0
}
self.session_dict[session_id].append(new_record)
def text_chat(self, prompt,
session_id = None,
image_url = None,
function_call=None,
extra_conf: dict = None,
default_personality: dict = None):
if session_id is None:
session_id = "unknown"
if "unknown" in self.session_dict:
del self.session_dict["unknown"]
# 会话机制
if session_id not in self.session_dict:
self.session_dict[session_id] = []
fjson = {}
try:
f = open(abs_path+"configs/session", "r", encoding="utf-8")
fjson = json.loads(f.read())
f.close()
except:
pass
finally:
fjson[session_id] = 'true'
f = open(abs_path+"configs/session", "w", encoding="utf-8")
f.write(json.dumps(fjson))
f.flush()
f.close()
if len(self.session_dict[session_id]) == 0:
# 设置默认人格
if default_personality is not None:
self.personality_set(default_personality, session_id)
# 使用 tictoken 截断消息
_encoded_prompt = self.enc.encode(prompt)
if self.openai_model_configs['max_tokens'] < len(_encoded_prompt):
prompt = self.enc.decode(_encoded_prompt[:int(self.openai_model_configs['max_tokens']*0.80)])
gu.log(f"注意,有一部分 prompt 文本由于超出 token 限制而被截断。", level=gu.LEVEL_WARNING, max_len=300)
cache_data_list, new_record, req = self.wrap(prompt, session_id, image_url)
gu.log(f"CACHE_DATA_: {str(cache_data_list)}", level=gu.LEVEL_DEBUG, max_len=99999)
gu.log(f"OPENAI REQUEST: {str(req)}", level=gu.LEVEL_DEBUG, max_len=9999)
retry = 0
response = None
err = ''
# 截断倍率
truncate_rate = 0.75
use_gpt4v = False
for i in req:
if isinstance(i['content'], list):
use_gpt4v = True
break
if image_url is not None:
use_gpt4v = True
if use_gpt4v:
conf = self.openai_model_configs.copy()
conf['model'] = 'gpt-4-vision-preview'
else:
conf = self.openai_model_configs
if extra_conf is not None:
conf.update(extra_conf)
while retry < 10:
try:
if function_call is None:
response = self.client.chat.completions.create(
messages=req,
**conf
)
else:
response = self.client.chat.completions.create(
messages=req,
tools = function_call,
**conf
)
break
except Exception as e:
print(traceback.format_exc())
if 'Invalid content type. image_url is only supported by certain models.' in str(e):
raise e
if 'You exceeded' in str(e) or 'Billing hard limit has been reached' in str(e) or 'No API key provided' in str(e) or 'Incorrect API key provided' in str(e):
gu.log("当前Key已超额或异常, 正在切换", level=gu.LEVEL_WARNING)
self.key_stat[self.client.api_key]['exceed'] = True
is_switched = self.handle_switch_key()
if not is_switched:
# 所有Key都超额或不正常
raise e
retry -= 1
elif 'maximum context length' in str(e):
gu.log("token超限, 清空对应缓存,并进行消息截断")
self.session_dict[session_id] = []
prompt = prompt[:int(len(prompt)*truncate_rate)]
truncate_rate -= 0.05
cache_data_list, new_record, req = self.wrap(prompt, session_id)
elif 'Limit: 3 / min. Please try again in 20s.' in str(e) or "OpenAI response error" in str(e):
time.sleep(30)
continue
else:
gu.log(str(e), level=gu.LEVEL_ERROR)
time.sleep(2)
err = str(e)
retry += 1
if retry >= 10:
gu.log(r"如果报错, 且您的机器在中国大陆内, 请确保您的电脑已经设置好代理软件(梯子), 并在配置文件设置了系统代理地址。详见https://github.com/Soulter/QQChannelChatGPT/wiki/%E4%BA%8C%E3%80%81%E9%A1%B9%E7%9B%AE%E9%85%8D%E7%BD%AE%E6%96%87%E4%BB%B6%E9%85%8D%E7%BD%AE", max_len=999)
raise BaseException("连接出错: "+str(err))
assert isinstance(response, ChatCompletion)
gu.log(f"OPENAI RESPONSE: {response.usage}", level=gu.LEVEL_DEBUG, max_len=9999)
# 结果分类
choice = response.choices[0]
if choice.message.content != None:
# 文本形式
chatgpt_res = str(choice.message.content).strip()
elif choice.message.tool_calls != None and len(choice.message.tool_calls) > 0:
# tools call (function calling)
return choice.message.tool_calls[0].function
self.key_stat[self.client.api_key]['used'] += response.usage.total_tokens
current_usage_tokens = response.usage.total_tokens
# 超过指定tokens 尽可能的保留最多的条目直到小于max_tokens
if current_usage_tokens > self.max_tokens:
t = current_usage_tokens
index = 0
while t > self.max_tokens:
if index >= len(cache_data_list):
break
# 保留人格信息
if cache_data_list[index]['type'] != 'personality':
t -= int(cache_data_list[index]['single_tokens'])
del cache_data_list[index]
else:
index += 1
# 删除完后更新相关字段
self.session_dict[session_id] = cache_data_list
# cache_prompt = get_prompts_by_cache_list(cache_data_list)
# 添加新条目进入缓存的prompt
new_record['AI'] = {
'role': 'assistant',
'content': chatgpt_res,
}
new_record['usage_tokens'] = current_usage_tokens
if len(cache_data_list) > 0:
new_record['single_tokens'] = current_usage_tokens - int(cache_data_list[-1]['usage_tokens'])
else:
new_record['single_tokens'] = current_usage_tokens
cache_data_list.append(new_record)
self.session_dict[session_id] = cache_data_list
return chatgpt_res
def image_chat(self, prompt, img_num = 1, img_size = "1024x1024"):
retry = 0
image_url = ''
image_generate_configs = self.cc.get("openai_image_generate", None)
while retry < 5:
try:
response: ImagesResponse = self.client.images.generate(
prompt=prompt,
**image_generate_configs
)
image_url = []
for i in range(img_num):
image_url.append(response.data[i].url)
break
except Exception as e:
gu.log(str(e), level=gu.LEVEL_ERROR)
if 'You exceeded' in str(e) or 'Billing hard limit has been reached' in str(
e) or 'No API key provided' in str(e) or 'Incorrect API key provided' in str(e):
gu.log("当前 Key 已超额或者不正常, 正在切换", level=gu.LEVEL_WARNING)
self.key_stat[self.client.api_key]['exceed'] = True
is_switched = self.handle_switch_key()
if not is_switched:
# 所有Key都超额或不正常
raise e
elif 'Your request was rejected as a result of our safety system.' in str(e):
gu.log("您的请求被 OpenAI 安全系统拒绝, 请稍后再试", level=gu.LEVEL_WARNING)
raise e
else:
retry += 1
if retry >= 5:
raise BaseException("连接超时")
return image_url
def forget(self, session_id = None) -> bool:
if session_id is None:
return False
self.session_dict[session_id] = []
return True
'''
获取缓存的会话
'''
def get_prompts_by_cache_list(self, cache_data_list, divide=False, paging=False, size=5, page=1):
prompts = ""
if paging:
page_begin = (page-1)*size
page_end = page*size
if page_begin < 0:
page_begin = 0
if page_end > len(cache_data_list):
page_end = len(cache_data_list)
cache_data_list = cache_data_list[page_begin:page_end]
for item in cache_data_list:
prompts += str(item['user']['role']) + ":\n" + str(item['user']['content']) + "\n"
prompts += str(item['AI']['role']) + ":\n" + str(item['AI']['content']) + "\n"
if divide:
prompts += "----------\n"
return prompts
def get_user_usage_tokens(self,cache_list):
usage_tokens = 0
for item in cache_list:
usage_tokens += int(item['single_tokens'])
return usage_tokens
'''
获取统计信息
'''
def get_stat(self):
try:
f = open(abs_path+"configs/stat", "r", encoding="utf-8")
fjson = json.loads(f.read())
f.close()
guild_count = 0
guild_msg_count = 0
guild_direct_msg_count = 0
for k,v in fjson.items():
guild_count += 1
guild_msg_count += v['count']
guild_direct_msg_count += v['direct_count']
session_count = 0
f = open(abs_path+"configs/session", "r", encoding="utf-8")
fjson = json.loads(f.read())
f.close()
for k,v in fjson.items():
session_count += 1
return guild_count, guild_msg_count, guild_direct_msg_count, session_count
except:
return -1, -1, -1, -1
# 包装信息
def wrap(self, prompt, session_id, image_url = None):
if image_url is not None:
prompt = [
{
"type": "text",
"text": prompt
},
{
"type": "image_url",
"image_url": {
"url": image_url
}
}
]
# 获得缓存信息
context = self.session_dict[session_id]
new_record = {
"user": {
"role": "user",
"content": prompt,
},
"AI": {},
'type': "common",
'usage_tokens': 0,
}
req_list = []
for i in context:
if 'user' in i:
req_list.append(i['user'])
if 'AI' in i:
req_list.append(i['AI'])
req_list.append(new_record['user'])
return context, new_record, req_list
def handle_switch_key(self):
# messages = [{"role": "user", "content": prompt}]
is_all_exceed = True
for key in self.key_stat:
if key == None or self.key_stat[key]['exceed']:
continue
is_all_exceed = False
self.client.api_key = key
gu.log(f"切换到Key: {key}, 已使用token: {self.key_stat[key]['used']}", level=gu.LEVEL_INFO)
break
if is_all_exceed:
gu.log("所有Key已超额", level=gu.LEVEL_CRITICAL)
return False
return True
def get_configs(self):
return self.openai_configs
def get_key_stat(self):
return self.key_stat
def get_key_list(self):
return self.key_list
def get_curr_key(self):
return self.client.api_key
def set_key(self, key):
self.client.api_key = key
# 添加key
def append_key(self, key, sponsor = ''):
self.key_list.append(key)
self.key_stat[key] = {'exceed': False, 'used': 0, 'sponsor': sponsor}
# 检查key是否可用
def check_key(self, key):
client_ = OpenAI(
api_key=key,
base_url=self.api_base
)
messages = [{"role": "user", "content": "please just echo `test`"}]
client_.chat.completions.create(
messages=messages,
**self.openai_model_configs
)
return True
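A hedged sketch of how callers use the provider above; provider is assumed to be an already-constructed ProviderOpenAIOfficial, and the session id, prompt and image URL are placeholders.
# per-session history is keyed by session_id
reply = provider.text_chat("用一句话介绍一下你自己", session_id="group_123")
print(reply)

# passing image_url makes wrap() build a multimodal request, which switches the
# request config to gpt-4-vision-preview as shown in text_chat above
reply = provider.text_chat("这张图里有什么?", session_id="group_123",
                           image_url="https://example.com/cat.png")

provider.forget("group_123")  # clear the cached conversation for that session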


@@ -0,0 +1,13 @@
import abc
class Provider:
def __init__(self, cfg):
pass
@abc.abstractmethod
def text_chat(self, prompt, session_id = None, image_url = None, function_call = None, extra_conf: dict = None, default_personality: dict = None) -> str:
pass
@abc.abstractmethod
def forget(self, session_id = None) -> bool:
pass
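A minimal sketch of what a concrete backend must supply to satisfy the Provider interface above; this echo provider is illustrative and not part of the repository.
class ProviderEcho(Provider):
    def __init__(self, cfg):
        super().__init__(cfg)
        self.sessions = {}

    def text_chat(self, prompt, session_id=None, image_url=None, function_call=None,
                  extra_conf: dict = None, default_personality: dict = None) -> str:
        # a real provider would call its model here; this one just echoes the prompt
        self.sessions.setdefault(session_id, []).append(prompt)
        return prompt

    def forget(self, session_id=None) -> bool:
        self.sessions.pop(session_id, None)
        return True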


@@ -0,0 +1,218 @@
from revChatGPT.V1 import Chatbot
from revChatGPT import typings
from model.provider.provider import Provider
from util import general_utils as gu
from util import cmd_config as cc
import time
class ProviderRevChatGPT(Provider):
def __init__(self, config, base_url = None):
if base_url == "":
base_url = None
self.rev_chatgpt: list[dict] = []
self.cc = cc.CmdConfig()
for i in range(0, len(config['account'])):
try:
gu.log(f"创建逆向ChatGPT负载{str(i+1)}中...", level=gu.LEVEL_INFO, tag="RevChatGPT")
if 'password' in config['account'][i]:
gu.log(f"创建逆向ChatGPT负载{str(i+1)}失败: 已不支持账号密码登录请使用access_token方式登录。", level=gu.LEVEL_ERROR, tag="RevChatGPT")
continue
rev_account_config = {
'access_token': config['account'][i]['access_token'],
}
if self.cc.get("rev_chatgpt_model") != "":
rev_account_config['model'] = self.cc.get("rev_chatgpt_model")
if len(self.cc.get("rev_chatgpt_plugin_ids")) > 0:
rev_account_config['plugin_ids'] = self.cc.get("rev_chatgpt_plugin_ids")
if self.cc.get("rev_chatgpt_PUID") != "":
rev_account_config['PUID'] = self.cc.get("rev_chatgpt_PUID")
if len(self.cc.get("rev_chatgpt_unverified_plugin_domains")) > 0:
rev_account_config['unverified_plugin_domains'] = self.cc.get("rev_chatgpt_unverified_plugin_domains")
cb = Chatbot(config=rev_account_config, base_url=base_url)
# cb.captcha_solver = self.__captcha_solver
# 取 access_token 的后八位作为账号标识
g_id = rev_account_config['access_token'][-8:]
revstat = {
'id': g_id,
'obj': cb,
'busy': False,
'user': []
}
self.rev_chatgpt.append(revstat)
except BaseException as e:
gu.log(f"创建逆向ChatGPT负载{str(i+1)}失败: {str(e)}", level=gu.LEVEL_ERROR, tag="RevChatGPT")
def forget(self, session_id = None) -> bool:
for i in self.rev_chatgpt:
for user in i['user']:
if session_id == user['id']:
try:
i['obj'].reset_chat()
return True
except BaseException as e:
gu.log(f"重置RevChatGPT失败。原因: {str(e)}", level=gu.LEVEL_ERROR, tag="RevChatGPT")
return False
return False
def get_revchatgpt(self) -> list:
return self.rev_chatgpt
def request_text(self, prompt: str, bot) -> str:
resp = ''
err_count = 0
retry_count = 5
while err_count < retry_count:
try:
for data in bot.ask(prompt):
resp = data["message"]
break
except typings.Error as e:
if e.code == typings.ErrorType.INVALID_ACCESS_TOKEN_ERROR:
raise e
if e.code == typings.ErrorType.EXPIRED_ACCESS_TOKEN_ERROR:
raise e
if e.code == typings.ErrorType.PROHIBITED_CONCURRENT_QUERY_ERROR:
raise e
if "Your authentication token has expired. Please try signing in again." in str(e):
raise e
if "The message you submitted was too long" in str(e):
raise e
if "You've reached our limit of messages per hour." in str(e):
raise e
if "Rate limited by proxy" in str(e):
gu.log(f"触发请求频率限制, 60秒后自动重试。", level=gu.LEVEL_WARNING, tag="RevChatGPT")
time.sleep(60)
err_count += 1
gu.log(f"请求异常: {str(e)},正在重试。({str(err_count)})", level=gu.LEVEL_WARNING, tag="RevChatGPT")
if err_count >= retry_count:
raise e
except BaseException as e:
err_count += 1
gu.log(f"请求异常: {str(e)},正在重试。({str(err_count)})", level=gu.LEVEL_WARNING, tag="RevChatGPT")
if err_count >= retry_count:
raise e
if resp == '':
resp = "RevChatGPT请求异常。"
# print("[RevChatGPT] "+str(resp))
return resp
def text_chat(self, prompt,
session_id = None,
image_url = None,
function_call=None,
extra_conf: dict = None,
default_personality: dict = None) -> str:
# 选择一个人少的账号。
selected_revstat = None
min_revstat = None
min_ = None
new_user = False
conversation_id = ''
parent_id = ''
for revstat in self.rev_chatgpt:
for user in revstat['user']:
if session_id == user['id']:
selected_revstat = revstat
conversation_id = user['conversation_id']
parent_id = user['parent_id']
break
if min_ is None:
min_ = len(revstat['user'])
min_revstat = revstat
elif len(revstat['user']) < min_:
min_ = len(revstat['user'])
min_revstat = revstat
# if session_id in revstat['user']:
# selected_revstat = revstat
# break
if selected_revstat is None:
selected_revstat = min_revstat
selected_revstat['user'].append({
'id': session_id,
'conversation_id': '',
'parent_id': ''
})
new_user = True
gu.log(f"选择账号{str(selected_revstat)}", tag="RevChatGPT", level=gu.LEVEL_DEBUG)
while selected_revstat['busy']:
gu.log(f"账号忙碌,等待中...", tag="RevChatGPT", level=gu.LEVEL_DEBUG)
time.sleep(1)
selected_revstat['busy'] = True
if not new_user:
# 非新用户,则使用其专用的会话
selected_revstat['obj'].conversation_id = conversation_id
selected_revstat['obj'].parent_id = parent_id
else:
# 新用户,则使用新的会话
selected_revstat['obj'].reset_chat()
res = ''
err_msg = ''
err_cnt = 0
while err_cnt < 15:
try:
res = self.request_text(prompt, selected_revstat['obj'])
selected_revstat['busy'] = False
# 记录新用户的会话
if new_user:
i = 0
for user in selected_revstat['user']:
if user['id'] == session_id:
selected_revstat['user'][i]['conversation_id'] = selected_revstat['obj'].conversation_id
selected_revstat['user'][i]['parent_id'] = selected_revstat['obj'].parent_id
break
i += 1
return res.strip()
except BaseException as e:
if "Your authentication token has expired. Please try signing in again." in str(e):
raise Exception(f"此账号(access_token后8位为{selected_revstat['id']})的access_token已过期请重新获取或者切换账号。")
if "The message you submitted was too long" in str(e):
raise Exception("发送的消息太长,请分段发送。")
if "You've reached our limit of messages per hour." in str(e):
raise Exception("触发RevChatGPT请求频率限制。请1小时后再试或者切换账号。")
gu.log(f"请求异常: {str(e)}", level=gu.LEVEL_WARNING, tag="RevChatGPT")
err_cnt += 1
time.sleep(3)
raise Exception(f'回复失败。原因:{err_msg}。如果您设置了多个账号,可以使用/switch指令切换账号。输入/switch查看详情。')
# while self.is_all_busy():
# time.sleep(1)
# res = ''
# err_msg = ''
# cursor = 0
# for revstat in self.rev_chatgpt:
# cursor += 1
# if not revstat['busy']:
# try:
# revstat['busy'] = True
# res = self.request_text(prompt, revstat['obj'])
# revstat['busy'] = False
# return res.strip()
# # todo: 细化错误管理
# except BaseException as e:
# revstat['busy'] = False
# gu.log(f"请求出现问题: {str(e)}", level=gu.LEVEL_WARNING, tag="RevChatGPT")
# err_msg += f"账号{cursor} - 错误原因: {str(e)}"
# continue
# else:
# err_msg += f"账号{cursor} - 错误原因: 忙碌"
# continue
# raise Exception(f'回复失败。错误跟踪:{err_msg}')
def is_all_busy(self) -> bool:
for revstat in self.rev_chatgpt:
if not revstat['busy']:
return False
return True
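For reference, the shape of one entry in the provider's rev_chatgpt list, as built in __init__ and text_chat above; the values are illustrative.
revstat = {
    'id': 'a1b2c3d4',  # last 8 characters of the account's access_token
    'obj': None,       # placeholder for the revChatGPT Chatbot bound to this account
    'busy': False,     # set while a request is in flight on this account
    'user': [          # sessions currently pinned to this account
        {'id': 'group_123', 'conversation_id': '', 'parent_id': ''},
    ],
}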


@@ -0,0 +1,104 @@
from model.provider.provider import Provider
from EdgeGPT import Chatbot, ConversationStyle
import json
import os
from util import general_utils as gu
from util.cmd_config import CmdConfig as cc
import time
class ProviderRevEdgeGPT(Provider):
def __init__(self):
raise Exception("Bing 逆向已停止维护,不可用,请使用 ChatGPT 官方 API。")
self.busy = False
self.wait_stack = []
with open('./cookies.json', 'r') as f:
cookies = json.load(f)
proxy = cc.get("bing_proxy", None)
if proxy == "":
proxy = None
# q = Query("Hello, bing!", cookie_files="./cookies.json")
# print(q)
self.bot = EdgeChatbot(cookies=cookies, proxy = "http://127.0.0.1:7890")
ret = self.bot.ask_stream("Hello, bing!", conversation_style=ConversationStyle.creative, wss_link="wss://ai.nothingnessvoid.tech/sydney/ChatHub")
# self.bot = Chatbot(cookies=cookies, proxy = proxy)
for i in ret:
print(i, flush=True)
def is_busy(self):
return self.busy
async def forget(self, session_id = None):
try:
await self.bot.reset()
return True
except BaseException:
return False
async def text_chat(self, prompt, platform = 'none', image_url=None, function_call=None):
while self.busy:
time.sleep(1)
self.busy = True
resp = 'err'
err_count = 0
retry_count = 5
while err_count < retry_count:
try:
resp = await self.bot.ask(prompt=prompt, conversation_style=ConversationStyle.creative)
# print("[RevEdgeGPT] "+str(resp))
if 'messages' not in resp['item']:
await self.bot.reset()
msj_obj = resp['item']['messages'][len(resp['item']['messages'])-1]
reply_msg = msj_obj['text']
if 'sourceAttributions' in msj_obj:
reply_source = msj_obj['sourceAttributions']
else:
reply_source = []
if 'throttling' in resp['item']:
throttling = resp['item']['throttling']
# print(throttling)
else:
throttling = None
if 'I\'m sorry but I prefer not to continue this conversation. I\'m still learning so I appreciate your understanding and patience.' in reply_msg:
self.busy = False
return '', 0
if reply_msg == prompt:
# resp += '\n\n如果你没有让我复述你的话那代表我可能不想和你继续这个话题了请输入reset重置会话😶'
await self.forget()
err_count += 1
continue
if reply_source is None:
# 不想答复
return '', 0
else:
if platform != 'qqchan':
index = 1
if len(reply_source) > 0:
reply_msg += "\n\n信息来源:\n"
for i in reply_source:
reply_msg += f"[{str(index)}]: {i['seeMoreUrl']} | {i['providerDisplayName']}\n"
index += 1
if throttling is not None:
if throttling['numUserMessagesInConversation'] == throttling['maxNumUserMessagesInConversation']:
# 达到上限,重置会话
await self.forget()
if throttling['numUserMessagesInConversation'] > throttling['maxNumUserMessagesInConversation']:
await self.forget()
err_count += 1
continue
reply_msg += f"\n[{throttling['numUserMessagesInConversation']}/{throttling['maxNumUserMessagesInConversation']}]"
break
except BaseException as e:
gu.log(str(e), level=gu.LEVEL_WARNING, tag="RevEdgeGPT")
err_count += 1
if err_count >= retry_count:
gu.log(r"如果报错, 且您的机器在中国大陆内, 请确保您的电脑已经设置好代理软件(梯子), 并在配置文件设置了系统代理地址。详见https://github.com/Soulter/QQChannelChatGPT/wiki/%E4%BA%8C%E3%80%81%E9%A1%B9%E7%9B%AE%E9%85%8D%E7%BD%AE%E6%96%87%E4%BB%B6%E9%85%8D%E7%BD%AE", max_len=999)
self.busy = False
raise e
gu.log("请求出现了一些问题, 正在重试。次数"+str(err_count), level=gu.LEVEL_WARNING, tag="RevEdgeGPT")
self.busy = False
# print("[RevEdgeGPT] "+str(reply_msg))
return reply_msg, 1


@@ -1,6 +1,17 @@
-requests
-openai
-qq-botpy
-revChatGPT~=4.0.8
-baidu-aip
-EdgeGPT~=0.1.2
+pydantic~=1.10.4
+requests~=2.28.1
+openai~=1.2.3
+qq-botpy==1.1.2
+chardet~=5.1.0
+Pillow~=9.4.0
+GitPython~=3.1.31
+nakuru-project
+beautifulsoup4
+googlesearch-python
+tiktoken
+readability-lxml
+revChatGPT~=6.8.6
+baidu-aip~=4.16.9
+websockets
+flask
+psutil

BIN resources/fonts/simhei.ttf Normal file (binary file not shown)
BIN resources/fonts/syst.otf Normal file (binary file not shown)
Five image binaries removed (binary files not shown; previous sizes 143 KiB, 110 KiB, 241 KiB, 239 KiB, 59 KiB)

util/cmd_config.py Normal file

@@ -0,0 +1,83 @@
import os
import json
from typing import Union
cpath = "cmd_config.json"
def check_exist():
if not os.path.exists(cpath):
with open(cpath, "w", encoding="utf-8-sig") as f:
json.dump({}, f, indent=4, ensure_ascii=False)
f.flush()
class CmdConfig():
@staticmethod
def get(key, default=None):
check_exist()
with open(cpath, "r", encoding="utf-8-sig") as f:
d = json.load(f)
if key in d:
return d[key]
else:
return default
@staticmethod
def get_all():
check_exist()
with open(cpath, "r", encoding="utf-8-sig") as f:
return json.load(f)
@staticmethod
def put(key, value):
check_exist()
with open(cpath, "r", encoding="utf-8-sig") as f:
d = json.load(f)
d[key] = value
with open(cpath, "w", encoding="utf-8-sig") as f:
json.dump(d, f, indent=4, ensure_ascii=False)
f.flush()
@staticmethod
def put_by_dot_str(key: str, value):
'''
根据点分割的字符串,将值写入配置文件
'''
check_exist()
with open(cpath, "r", encoding="utf-8-sig") as f:
d = json.load(f)
_d = d
_ks = key.split(".")
for i in range(len(_ks)):
if i == len(_ks) - 1:
_d[_ks[i]] = value
else:
_d = _d[_ks[i]]
with open(cpath, "w", encoding="utf-8-sig") as f:
json.dump(d, f, indent=4, ensure_ascii=False)
f.flush()
@staticmethod
def init_attributes(key: Union[str, list], init_val = ""):
check_exist()
conf_str = ''
with open(cpath, "r", encoding="utf-8-sig") as f:
conf_str = f.read()
if conf_str.startswith(u'\ufeff'):
conf_str = conf_str.encode('utf8')[3:].decode('utf8')
d = json.loads(conf_str)
_tag = False
if isinstance(key, str):
if key not in d:
d[key] = init_val
_tag = True
elif isinstance(key, list):
for k in key:
if k not in d:
d[k] = init_val
_tag = True
if _tag:
with open(cpath, "w", encoding="utf-8-sig") as f:
json.dump(d, f, indent=4, ensure_ascii=False)
f.flush()
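A short usage sketch of CmdConfig; cmd_config.json is created in the working directory on first access. The keys below appear elsewhere in this diff, but the values are illustrative.
from util.cmd_config import CmdConfig

CmdConfig.put("qq_pic_mode", True)
CmdConfig.put("openai_image_generate", {"model": "dall-e-3", "size": "1024x1024"})
# dotted writes only work when the parent key already exists
CmdConfig.put_by_dot_str("openai_image_generate.model", "dall-e-2")
print(CmdConfig.get("qq_forward_threshold", 200))  # falls back to the default when the key is absent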


@@ -1,3 +0,0 @@
class PromptExceededError(Exception):
pass


@@ -0,0 +1,237 @@
import json
import util.general_utils as gu
import time
class FuncCallJsonFormatError(Exception):
def __init__(self, msg):
self.msg = msg
def __str__(self):
return self.msg
class FuncNotFoundError(Exception):
def __init__(self, msg):
self.msg = msg
def __str__(self):
return self.msg
class FuncCall():
def __init__(self, provider) -> None:
self.func_list = []
self.provider = provider
def add_func(self, name: str = None, func_args: list = None, desc: str = None, func_obj = None) -> None:
if name is None or func_args is None or desc is None or func_obj is None:
raise FuncCallJsonFormatError("name, func_args, desc and func_obj must be provided.")
params = {
"type": "object", # hardcore here
"properties": {}
}
for param in func_args:
params['properties'][param['name']] = {
"type": param['type'],
"description": param['description']
}
self._func = {
"name": name,
"parameters": params,
"description": desc,
"func_obj": func_obj,
}
self.func_list.append(self._func)
def func_dump(self, indent: int = 2) -> str:
_l = []
for f in self.func_list:
_l.append({
"name": f["name"],
"parameters": f["parameters"],
"description": f["description"],
})
return json.dumps(_l, indent=indent, ensure_ascii=False)
def get_func(self) -> list:
_l = []
for f in self.func_list:
_l.append({
"type": "function",
"function": {
"name": f["name"],
"parameters": f["parameters"],
"description": f["description"],
}
})
return _l
def func_call(self, question, func_definition, is_task = False, tasks = None, taskindex = -1, is_summary = True, session_id = None):
funccall_prompt = """
我正在实现 function call 功能,该功能旨在让你变成给定的问题到给定的函数的解析器,意味着你不是创造函数。
下面会给你提供可能用到的函数相关信息和一个问题,你需要将其转换成给定的函数调用。
- 你的返回信息只含json请严格仿照以下内容不含注释必须含有`res`,`func_call`字段:
```
{
"res": string // 如果没有找到对应的函数,那么你可以在这里正常输出内容。如果有,这里是空字符串。
"func_call": [ // 这是一个数组,里面包含了所有的函数调用,如果没有函数调用,那么这个数组是空数组。
{
"res": string // 如果没有找到对应的函数,那么你可以在这里正常输出内容。如果有,这里是空字符串。
"name": str, // 函数的名字
"args_type": {
"arg1": str, // 函数的参数的类型
"arg2": str,
...
},
"args": {
"arg1": any, // 函数的参数
"arg2": any,
...
}
},
... // 可能在这个问题中会有多个函数调用
],
}
```
- 如果用户的要求较复杂,允许返回多个函数调用,但需保证这些函数调用的顺序正确。
- 当问题没有提到给定的函数时相当于提问方不打算使用function call功能这时你可以在res中正常输出这个问题的回答以AI的身份正常回答该问题并将答案输出在res字段中回答不要涉及到任何函数调用的内容就只是正常讨论这个问题。
提供的函数是:
"""
prompt = f"{funccall_prompt}\n```\n{func_definition}\n```\n"
prompt += f"""
用户的提问是:
```
{question}
```
"""
# if is_task:
# # task_prompt = f"\n任务列表为{str(tasks)}\n你目前进行到了任务{str(taskindex)}, **你不需要重新进行已经进行过的任务, 不要生成已经进行过的**"
# prompt += task_prompt
# provider.forget()
_c = 0
while _c < 3:
try:
res = self.provider.text_chat(prompt, session_id)
if res.find('```') != -1:
res = res[res.find('```json') + 7: res.rfind('```')]
gu.log("REVGPT func_call json result", bg=gu.BG_COLORS["green"], fg=gu.FG_COLORS["white"])
print(res)
res = json.loads(res)
break
except Exception as e:
_c += 1
if _c == 3:
raise e
if "The message you submitted was too long" in str(e):
raise e
invoke_func_res = ""
if "func_call" in res and len(res["func_call"]) > 0:
task_list = res["func_call"]
invoke_func_res_list = []
for res in task_list:
# 说明有函数调用
func_name = res["name"]
# args_type = res["args_type"]
args = res["args"]
# 调用函数
# func = eval(func_name)
func_target = None
for func in self.func_list:
if func["name"] == func_name:
func_target = func["func_obj"]
break
if func_target == None:
raise FuncNotFoundError(f"Request function {func_name} not found.")
t_res = str(func_target(**args))
invoke_func_res += f"{func_name} 调用结果:\n```\n{t_res}\n```\n"
invoke_func_res_list.append(invoke_func_res)
gu.log(f"[FUNC| {func_name} invoked]", bg=gu.BG_COLORS["green"], fg=gu.FG_COLORS["white"])
# print(str(t_res))
if is_summary:
# 生成返回结果
after_prompt = """
有以下内容:"""+invoke_func_res+"""
请以AI助手的身份结合返回的内容对用户提问做详细全面的回答。
用户的提问是:
```""" + question + """```
- 在res字段中不要输出函数的返回值也不要针对返回值的字段进行分析也不要输出用户的提问而是理解这一段返回的结果并以AI助手的身份回答问题只需要输出回答的内容不需要在回答的前面加上身份词。
- 你的返回信息必须只能是json且需严格遵循以下内容不含注释:
```json
{
"res": string, // 回答的内容
"func_call_again": bool // 如果函数返回的结果有错误或者问题可将其设置为true否则为false
}
```
- 如果func_call_again为trueres请你设为空值否则请你填写回答的内容。"""
_c = 0
while _c < 5:
try:
res = self.provider.text_chat(after_prompt, session_id)
# 截取```之间的内容
gu.log("DEBUG BEGIN", bg=gu.BG_COLORS["yellow"], fg=gu.FG_COLORS["white"])
print(res)
gu.log("DEBUG END", bg=gu.BG_COLORS["yellow"], fg=gu.FG_COLORS["white"])
if res.find('```') != -1:
res = res[res.find('```json') + 7: res.rfind('```')]
gu.log("REVGPT after_func_call json result", bg=gu.BG_COLORS["green"], fg=gu.FG_COLORS["white"])
after_prompt_res = res
after_prompt_res = json.loads(after_prompt_res)
break
except Exception as e:
_c += 1
if _c == 5:
raise e
if "The message you submitted was too long" in str(e):
# 如果返回的内容太长了,那么就截取一部分
time.sleep(3)
invoke_func_res = invoke_func_res[:int(len(invoke_func_res) / 2)]
after_prompt = """
函数返回以下内容:"""+invoke_func_res+"""
请以AI助手的身份结合返回的内容对用户提问做详细全面的回答。
用户的提问是:
```""" + question + """```
- 在res字段中不要输出函数的返回值也不要针对返回值的字段进行分析也不要输出用户的提问而是理解这一段返回的结果并以AI助手的身份回答问题只需要输出回答的内容不需要在回答的前面加上身份词。
- 你的返回信息必须只能是json且需严格遵循以下内容不含注释:
```json
{
"res": string, // 回答的内容
"func_call_again": bool // 如果函数返回的结果有错误或者问题可将其设置为true否则为false
}
```
- 如果func_call_again为trueres请你设为空值否则请你填写回答的内容。"""
else:
raise e
if "func_call_again" in after_prompt_res and after_prompt_res["func_call_again"]:
# 如果需要重新调用函数
# 重新调用函数
gu.log("REVGPT func_call_again", bg=gu.BG_COLORS["purple"], fg=gu.FG_COLORS["white"])
res = self.func_call(question, func_definition)
return res, True
gu.log("REVGPT func callback:", bg=gu.BG_COLORS["green"], fg=gu.FG_COLORS["white"])
# print(after_prompt_res["res"])
return after_prompt_res["res"], True
else:
return str(invoke_func_res_list), True
else:
# print(res["res"])
return res["res"], False

View File

@@ -0,0 +1,279 @@
import requests
import util.general_utils as gu
from bs4 import BeautifulSoup
import time
from util.function_calling.func_call import (
FuncCall,
FuncCallJsonFormatError,
FuncNotFoundError
)
from openai.types.chat.chat_completion_message_tool_call import Function
import traceback
from googlesearch import search, SearchResult
from model.provider.provider import Provider
import json
from readability import Document
def tidy_text(text: str) -> str:
'''
清理文本,去除空格、换行符等
'''
return text.strip().replace("\n", " ").replace("\r", " ").replace("  ", " ")
def special_fetch_zhihu(link: str) -> str:
'''
function-calling 函数, 用于获取知乎文章的内容
'''
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) \
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36"
}
response = requests.get(link, headers=headers)
response.encoding = "utf-8"
soup = BeautifulSoup(response.text, "html.parser")
if "zhuanlan.zhihu.com" in link:
r = soup.find(class_="Post-RichTextContainer")
else:
r = soup.find(class_="List-item").find(class_="RichContent-inner")
if r is None:
print("debug: zhihu none")
raise Exception("zhihu none")
return tidy_text(r.text)
def google_web_search(keyword) -> str:
'''
获取 google 搜索结果, 得到 title、desc、link
'''
ret = ""
index = 1
try:
ls = search(keyword, advanced=True, num_results=4)
for i in ls:
desc = i.description
try:
desc = fetch_website_content(i.url)
except BaseException as e:
print(f"(google) fetch_website_content err: {str(e)}")
gu.log(f"# No.{str(index)}\ntitle: {i.title}\nurl: {i.url}\ncontent: {desc}\n\n", level=gu.LEVEL_DEBUG, max_len=9999)
ret += f"# No.{str(index)}\ntitle: {i.title}\nurl: {i.url}\ncontent: {desc}\n\n"
index += 1
except Exception as e:
print(f"google search err: {str(e)}")
return web_keyword_search_via_bing(keyword)
return ret
def web_keyword_search_via_bing(keyword) -> str:
'''
获取bing搜索结果, 得到 title、desc、link
'''
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) \
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36"
}
url = "https://www.bing.com/search?q="+keyword
_cnt = 0
_detail_store = []
while _cnt < 5:
try:
response = requests.get(url, headers=headers)
response.encoding = "utf-8"
gu.log(f"bing response: {response.text}", tag="bing", level=gu.LEVEL_DEBUG, max_len=9999)
soup = BeautifulSoup(response.text, "html.parser")
res = []
ols = soup.find(id="b_results")
for i in ols.find_all("li", class_="b_algo"):
try:
title = i.find("h2").text
desc = i.find("p").text
link = i.find("h2").find("a").get("href")
res.append({
"title": title,
"desc": desc,
"link": link,
})
if len(res) >= 5: # 限制5条
break
if len(_detail_store) >= 3:
continue
# 爬取前两条的网页内容
if "zhihu.com" in link:
try:
_detail_store.append(special_fetch_zhihu(link))
except BaseException as e:
print(f"zhihu parse err: {str(e)}")
else:
try:
_detail_store.append(fetch_website_content(link))
except BaseException as e:
print(f"fetch_website_content err: {str(e)}")
except Exception as e:
print(f"bing parse err: {str(e)}")
if len(res) == 0:
break
if len(_detail_store) > 0:
ret = f"{str(res)} \n具体网页内容: {str(_detail_store)}"
else:
ret = f"{str(res)}"
return str(ret)
except Exception as e:
gu.log(f"bing fetch err: {str(e)}")
_cnt += 1
time.sleep(1)
gu.log("fail to fetch bing info, using sougou.")
return web_keyword_search_via_sougou(keyword)
def web_keyword_search_via_sougou(keyword) -> str:
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) \
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36",
}
url = f"https://sogou.com/web?query={keyword}"
response = requests.get(url, headers=headers)
response.encoding = "utf-8"
soup = BeautifulSoup(response.text, "html.parser")
res = []
results = soup.find("div", class_="results")
for i in results.find_all("div", class_="vrwrap"):
try:
title = tidy_text(i.find("h3").text)
link = tidy_text(i.find("h3").find("a").get("href"))
if link.startswith("/link?url="):
link = "https://www.sogou.com" + link
res.append({
"title": title,
"link": link,
})
if len(res) >= 5: # 限制5条
break
except Exception as e:
gu.log(f"sougou parse err: {str(e)}", tag="web_keyword_search_via_sougou", level=gu.LEVEL_ERROR)
# 爬取网页内容
_detail_store = []
for i in res:
if len(_detail_store) >= 3:
break
try:
_detail_store.append(fetch_website_content(i["link"]))
except BaseException as e:
print(f"fetch_website_content err: {str(e)}")
ret = f"{str(res)}"
if len(_detail_store) > 0:
ret += f"\n网页内容: {str(_detail_store)}"
return ret
def fetch_website_content(url):
gu.log(f"fetch_website_content: {url}", tag="fetch_website_content", level=gu.LEVEL_DEBUG)
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) \
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36"
}
response = requests.get(url, headers=headers, timeout=3)
response.encoding = "utf-8"
# soup = BeautifulSoup(response.text, "html.parser")
# # 如果有container / content / main等的话就只取这些部分
# has = False
# beleive_ls = ["container", "content", "main"]
# res = ""
# for cls in beleive_ls:
# for i in soup.find_all(class_=cls):
# has = True
# res += i.text
# if not has:
# res = soup.text
# res = res.replace("\n", "").replace(" ", " ").replace("\r", "").replace("\t", "")
# if not has:
# res = res[300:1100]
# else:
# res = res[100:800]
# # with open(f"temp_{time.time()}.html", "w", encoding="utf-8") as f:
# # f.write(res)
# gu.log(f"fetch_website_content: end", tag="fetch_website_content", level=gu.LEVEL_DEBUG)
# return res
doc = Document(response.content)
# print('title:', doc.title())
ret = doc.summary(html_partial=True)
soup = BeautifulSoup(ret, 'html.parser')
ret = tidy_text(soup.get_text())
return ret
def web_search(question, provider: Provider, session_id, official_fc=False):
'''
official_fc: 使用官方 function-calling
'''
new_func_call = FuncCall(provider)
new_func_call.add_func("google_web_search", [{
"type": "string",
"name": "keyword",
"description": "google search query (分词,尽量保留所有信息)"
}],
"通过搜索引擎搜索。如果问题需要在网页上搜索(如天气、新闻或任何需要通过网页获取信息的问题),则调用此函数;如果没有,不要调用此函数。",
google_web_search
)
new_func_call.add_func("fetch_website_content", [{
"type": "string",
"name": "url",
"description": "网址"
}],
"获取网页的内容。如果问题带有合法的网页链接(例如: `帮我总结一下https://github.com的内容`), 就调用此函数。如果没有,不要调用此函数。",
fetch_website_content
)
question1 = f"{question} \n> hint: 最多只能调用1个function, 并且存在不会调用任何function的可能性。"
has_func = False
function_invoked_ret = ""
if official_fc:
func = provider.text_chat(question1, session_id, function_call=new_func_call.get_func())
if isinstance(func, Function):
# arguments='{\n "keyword": "北京今天的天气"\n}', name='google_web_search'
# 执行对应的结果:
func_obj = None
for i in new_func_call.func_list:
if i["name"] == func.name:
func_obj = i["func_obj"]
break
if not func_obj:
gu.log("找不到返回的 func name " + func.name, level=gu.LEVEL_ERROR)
return provider.text_chat(question1, session_id) + "\n(网页搜索失败, 此为默认回复)"
try:
args = json.loads(func.arguments)
function_invoked_ret = func_obj(**args)
has_func = True
except BaseException as e:
traceback.print_exc()
return provider.text_chat(question1, session_id) + "\n(网页搜索失败, 此为默认回复)"
else:
# now func is a string
return func
else:
try:
function_invoked_ret, has_func = new_func_call.func_call(question1, new_func_call.func_dump(), is_task=False, is_summary=False)
except BaseException as e:
res = provider.text_chat(question) + "\n(网页搜索失败, 此为默认回复)"
return res
has_func = True
if has_func:
provider.forget(session_id)
question3 = f"""请你用活泼的语气回答`{question}`问题。\n以下是相关材料,请直接拿此材料针对问题进行总结回答。在文章末尾加上各参考链接,如`[1] <title> <url>`不要提到任何函数调用的信息在总结的末尾加上1或2个相关的emoji。```\n{function_invoked_ret}\n```\n"""
gu.log(f"web_search: {question3}", tag="web_search", level=gu.LEVEL_DEBUG, max_len=99999)
_c = 0
while _c < 3:
try:
print('text chat')
final_ret = provider.text_chat(question3)
return final_ret
except Exception as e:
print(e)
_c += 1
if _c == 3: raise e
if "The message you submitted was too long" in str(e):
provider.forget(session_id)
function_invoked_ret = function_invoked_ret[:int(len(function_invoked_ret) / 2)]
time.sleep(3)
question3 = f"""请回答`{question}`问题。\n以下是相关材料,请直接拿此材料针对问题进行回答,再给参考链接, 参考链接首末有空格。```\n{function_invoked_ret}\n```\n"""
return function_invoked_ret
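A minimal sketch of driving web_search end to end with a stub provider. Only the provider methods used above (text_chat and forget) are assumed; the import path is a guess, since the diff does not show this file's name, and StubProvider is purely illustrative:

```python
# Module path assumed from the surrounding util.function_calling package layout.
from util.function_calling.web_searcher import web_search

class StubProvider:
    def text_chat(self, prompt, session_id=None, function_call=None):
        # A real provider returns the model's reply; the prompt-based flow above
        # expects a JSON object with "res" and "func_call" fields.
        return '{"res": "这里不需要调用任何函数。", "func_call": []}'

    def forget(self, session_id=None):
        pass

answer = web_search("今天北京天气怎么样?", StubProvider(), session_id="demo", official_fc=False)
print(answer)
```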

534
util/general_utils.py Normal file
View File

@@ -0,0 +1,534 @@
import datetime
import time
import socket
from PIL import Image, ImageDraw, ImageFont
import os
import re
import requests
from util.cmd_config import CmdConfig
PLATFORM_GOCQ = 'gocq'
PLATFORM_QQCHAN = 'qqchan'
FG_COLORS = {
"black": "30",
"red": "31",
"green": "32",
"yellow": "33",
"blue": "34",
"purple": "35",
"cyan": "36",
"white": "37",
"default": "39",
}
BG_COLORS = {
"black": "40",
"red": "41",
"green": "42",
"yellow": "43",
"blue": "44",
"purple": "45",
"cyan": "46",
"white": "47",
"default": "49",
}
LEVEL_DEBUG = "DEBUG"
LEVEL_INFO = "INFO"
LEVEL_WARNING = "WARNING"
LEVEL_ERROR = "ERROR"
LEVEL_CRITICAL = "CRITICAL"
level_codes = {
LEVEL_DEBUG: 0,
LEVEL_INFO: 1,
LEVEL_WARNING: 2,
LEVEL_ERROR: 3,
LEVEL_CRITICAL: 4
}
level_colors = {
"INFO": "green",
"WARNING": "yellow",
"ERROR": "red",
"CRITICAL": "purple",
}
def log(
msg: str,
level: str = "INFO",
tag: str = "System",
fg: str = None,
bg: str = None,
max_len: int = 500,
err: Exception = None,):
"""
日志打印函数
"""
_set_level_code = level_codes[LEVEL_INFO]
if 'LOG_LEVEL' in os.environ and os.environ['LOG_LEVEL'] in level_codes:
_set_level_code = level_codes[os.environ['LOG_LEVEL']]
if level in level_codes and level_codes[level] < _set_level_code:
return
if err is not None:
msg += "\n异常原因: " + str(err)
level = LEVEL_ERROR
if len(msg) > max_len:
msg = msg[:max_len] + "..."
now = datetime.datetime.now().strftime("%m-%d %H:%M:%S")
pres = []
for line in msg.split("\n"):
if line == "\n":
pres.append("")
else:
pres.append(f"[{now}] [{level}] [{tag}]: {line}")
if level == "INFO":
if fg is None:
fg = FG_COLORS["green"]
if bg is None:
bg = BG_COLORS["default"]
elif level == "WARNING":
if fg is None:
fg = FG_COLORS["yellow"]
if bg is None:
bg = BG_COLORS["default"]
elif level == "ERROR":
if fg is None:
fg = FG_COLORS["red"]
if bg is None:
bg = BG_COLORS["default"]
elif level == "CRITICAL":
if fg is None:
fg = FG_COLORS["purple"]
if bg is None:
bg = BG_COLORS["default"]
ret = ""
for line in pres:
ret += f"\033[{fg};{bg}m{line}\033[0m\n"
print(ret[:-1])
def port_checker(port: int, host: str = "localhost"):
sk = socket.socket(socket.AF_INET,socket.SOCK_STREAM)
sk.settimeout(1)
try:
sk.connect((host, port))
sk.close()
return True
except Exception:
sk.close()
return False
def word2img(title: str, text: str, max_width=30, font_size=20):
if os.path.exists("resources/fonts/syst.otf"):
font_path = "resources/fonts/syst.otf"
elif os.path.exists("QQChannelChatGPT/resources/fonts/syst.otf"):
font_path = "QQChannelChatGPT/resources/fonts/syst.otf"
elif os.path.exists("C:/Windows/Fonts/simhei.ttf"):
font_path = "C:/Windows/Fonts/simhei.ttf"
elif os.path.exists("/usr/share/fonts/opentype/noto/NotoSansCJK-Regular.ttc"):
font_path = "/usr/share/fonts/opentype/noto/NotoSansCJK-Regular.ttc"
else:
raise Exception("找不到字体文件")
width_factor = 1.0
height_factor = 1.5
# 格式化文本宽度最大为30
lines = text.split('\n')
i = 0
length = len(lines)
for l in lines:
if len(l) > max_width:
# lines[i] = l[:max_width] + '\n' + l[max_width:]
# for
cp = l
for ii in range(len(l)):
if ii % max_width == 0:
cp = cp[:ii] + '\n' + cp[ii:]
length += 1
lines[i] = cp
i += 1
text = '\n'.join(lines)
width = int(max_width * font_size * width_factor)
height = int(length * font_size * height_factor)
image = Image.new('RGB', (width, height), (255, 255, 255))
draw = ImageDraw.Draw(image)
text_font = ImageFont.truetype(font_path, font_size)
title_font = ImageFont.truetype(font_path, font_size + 5)
# 标题居中
title_width, title_height = title_font.getsize(title)
draw.text(((width - title_width) / 2, 10), title, fill=(0, 0, 0), font=title_font)
# 文本不居中
draw.text((10, title_height+20), text, fill=(0, 0, 0), font=text_font)
return image
def render_markdown(markdown_text, image_width=800, image_height=600, font_size=26, font_color=(0, 0, 0), bg_color=(255, 255, 255)):
HEADER_MARGIN = 20
HEADER_FONT_STANDARD_SIZE = 42
QUOTE_LEFT_LINE_MARGIN = 10
QUOTE_FONT_LINE_MARGIN = 6 # 引用文字距离左边线的距离和上下的距离
QUOTE_LEFT_LINE_HEIGHT = font_size + QUOTE_FONT_LINE_MARGIN * 2
QUOTE_LEFT_LINE_WIDTH = 5
QUOTE_LEFT_LINE_COLOR = (180, 180, 180)
QUOTE_FONT_SIZE = font_size
QUOTE_FONT_COLOR = (180, 180, 180)
# QUOTE_BG_COLOR = (255, 255, 255)
CODE_BLOCK_MARGIN = 10
CODE_BLOCK_FONT_SIZE = font_size
CODE_BLOCK_FONT_COLOR = (255, 255, 255)
CODE_BLOCK_BG_COLOR = (240, 240, 240)
CODE_BLOCK_CODES_MARGIN_VERTICAL = 5 # 代码块和代码之间的距离
CODE_BLOCK_CODES_MARGIN_HORIZONTAL = 5 # 代码块和代码之间的距离
CODE_BLOCK_TEXT_MARGIN = 4 # 代码和代码之间的距离
INLINE_CODE_MARGIN = 8
INLINE_CODE_FONT_SIZE = font_size
INLINE_CODE_FONT_COLOR = font_color
INLINE_CODE_FONT_MARGIN = 4
INLINE_CODE_BG_COLOR = (230, 230, 230)
INLINE_CODE_BG_HEIGHT = INLINE_CODE_FONT_SIZE + INLINE_CODE_FONT_MARGIN * 2
LIST_MARGIN = 8
LIST_FONT_SIZE = font_size
LIST_FONT_COLOR = font_color
TEXT_LINE_MARGIN = 8
IMAGE_MARGIN = 15
# 用于匹配图片的正则表达式
IMAGE_REGEX = r"!\s*\[.*?\]\s*\((.*?)\)"
if os.path.exists("resources/fonts/syst.otf"):
font_path = "resources/fonts/syst.otf"
elif os.path.exists("QQChannelChatGPT/resources/fonts/syst.otf"):
font_path = "QQChannelChatGPT/resources/fonts/syst.otf"
elif os.path.exists("C:/Windows/Fonts/simhei.ttf"):
font_path = "C:/Windows/Fonts/simhei.ttf"
elif os.path.exists("/usr/share/fonts/opentype/noto/NotoSansCJK-Regular.ttc"):
font_path = "/usr/share/fonts/opentype/noto/NotoSansCJK-Regular.ttc"
else:
raise Exception("找不到字体文件")
# backup
if os.path.exists("resources/fonts/simhei.ttf"):
font_path1 = "resources/fonts/simhei.ttf"
elif os.path.exists("QQChannelChatGPT/resources/fonts/simhei.ttf"):
font_path1 = "QQChannelChatGPT/resources/fonts/simhei.ttf"
else:
font_path1 = font_path
# 加载字体
font = ImageFont.truetype(font_path, font_size)
images: dict = {}  # line index -> PIL Image
# pre_process, get height of each line
pre_lines = markdown_text.split('\n')
height = 0
pre_in_code = False
i = -1
_pre_lines = []
for line in pre_lines:
i += 1
# 处理图片
if re.search(IMAGE_REGEX, line):
try:
image_url = re.findall(IMAGE_REGEX, line)[0]
print(image_url)
image_res = Image.open(requests.get(image_url, stream=True, timeout=5).raw)
images[i] = image_res
# 最大不得超过image_width的50%
img_height = image_res.size[1]
if image_res.size[0] > image_width*0.5:
image_res = image_res.resize((int(image_width*0.5), int(image_res.size[1]*image_width*0.5/image_res.size[0])))
img_height = image_res.size[1]
height += img_height + IMAGE_MARGIN*2
line = re.sub(IMAGE_REGEX, "", line)
except Exception as e:
print(e)
line = re.sub(IMAGE_REGEX, "\n[加载失败的图片]\n", line)
continue
line.replace("\t", " ")
if font.getsize(line)[0] > image_width:
cp = line
_width = 0
_word_cnt = 0
for ii in range(len(line)):
# 检测是否是中文
_width += font.getsize(line[ii])[0]
_word_cnt+=1
if _width > image_width:
_pre_lines.append(cp[:_word_cnt])
cp = cp[_word_cnt:]
_word_cnt=0
_width=0
_pre_lines.append(cp)
else:
_pre_lines.append(line)
pre_lines = _pre_lines
i=-1
for line in pre_lines:
if line == "":
height += TEXT_LINE_MARGIN
continue
i += 1
line = line.strip()
if pre_in_code and not line.startswith("```"):
height += font_size + CODE_BLOCK_TEXT_MARGIN
# pre_codes.append(line)
continue
if line.startswith("#"):
header_level = line.count("#")
height += HEADER_FONT_STANDARD_SIZE + HEADER_MARGIN*2 - header_level * 4
elif line.startswith("-"):
height += font_size+LIST_MARGIN*2
elif line.startswith(">"):
height += font_size+QUOTE_LEFT_LINE_MARGIN*2
elif line.startswith("```"):
if pre_in_code:
pre_in_code = False
# pre_codes = []
height += CODE_BLOCK_MARGIN
else:
pre_in_code = True
height += CODE_BLOCK_MARGIN
elif re.search(r"`(.*?)`", line):
height += font_size+INLINE_CODE_FONT_MARGIN*2+INLINE_CODE_MARGIN*2
else:
height += font_size + TEXT_LINE_MARGIN*2
markdown_text = '\n'.join(pre_lines)
print("Pre process done, height: ", height)
image_height = height
if image_height < 100:
image_height = 100
image_width += 20
# 创建空白图像
image = Image.new('RGB', (image_width, image_height), bg_color)
draw = ImageDraw.Draw(image)
# # get all the emojis unicode in the markdown text
# unicode_text = markdown_text.encode('unicode_escape').decode()
# # print(unicode_text)
# unicode_emojis = re.findall(r'\\U\w{8}', unicode_text)
# emoji_base_url = "https://abs.twimg.com/emoji/v1/72x72/{unicode_emoji}.png"
# 设置初始位置
x, y = 10, 10
# 解析Markdown文本
lines = markdown_text.split("\n")
# lines = pre_lines
in_code_block = False
code_block_start_y = 0
code_block_codes = []
index = -1
for line in lines:
index += 1
if in_code_block and not line.startswith("```"):
code_block_codes.append(line)
y += font_size + CODE_BLOCK_TEXT_MARGIN
continue
line = line.strip()
if line.startswith("#"):
# unicode_emojis = re.findall(r'\\U0001\w{4}', line)
# for unicode_emoji in unicode_emojis:
# line = line.replace(unicode_emoji, "")
# unicode_emoji = ""
# if len(unicode_emojis) > 0:
# unicode_emoji = unicode_emojis[0]
# 处理标题
header_level = line.count("#")
line = line.strip("#").strip()
font_size_header = HEADER_FONT_STANDARD_SIZE - header_level * 4
# if unicode_emoji != "":
# emoji_url = emoji_base_url.format(unicode_emoji=unicode_emoji[-5:])
# emoji = Image.open(requests.get(emoji_url, stream=True).raw)
# emoji = emoji.resize((font_size, font_size))
# image.paste(emoji, (x, y))
# x += font_size
font = ImageFont.truetype(font_path, font_size_header)
y += HEADER_MARGIN # 上边距
# 字间距
draw.text((x, y), line, font=font, fill=font_color)
draw.line((x, y + font_size_header + 8, image_width - 10, y + font_size_header + 8), fill=(230, 230, 230), width=3)
y += font_size_header + HEADER_MARGIN
elif line.startswith(">"):
# 处理引用
quote_text = line.strip(">")
# quote_width = image_width - 20 # 引用框的宽度为图像宽度减去左右边距
# quote_height = font_size + 10 # 引用框的高度为字体大小加上上下边距
# quote_box = (x, y, x + quote_width, y + quote_height)
# draw.rounded_rectangle(quote_box, radius=5, fill=(230, 230, 230), width=2) # 使用灰色填充矩形框作为引用背景
y+=QUOTE_LEFT_LINE_MARGIN
draw.line((x, y, x, y + QUOTE_LEFT_LINE_HEIGHT), fill=QUOTE_LEFT_LINE_COLOR, width=QUOTE_LEFT_LINE_WIDTH)
font = ImageFont.truetype(font_path, QUOTE_FONT_SIZE)
draw.text((x + QUOTE_FONT_LINE_MARGIN, y + QUOTE_FONT_LINE_MARGIN), quote_text, font=font, fill=QUOTE_FONT_COLOR)
y += font_size + QUOTE_LEFT_LINE_HEIGHT + QUOTE_LEFT_LINE_MARGIN
elif line.startswith("-"):
# 处理列表
list_text = line.strip("-").strip()
font = ImageFont.truetype(font_path, LIST_FONT_SIZE)
y += LIST_MARGIN
draw.text((x, y), " · " + list_text, font=font, fill=LIST_FONT_COLOR)
y += font_size + LIST_MARGIN
elif line.startswith("```"):
if not in_code_block:
code_block_start_y = y+CODE_BLOCK_MARGIN
in_code_block = True
else:
# print(code_block_codes)
in_code_block = False
codes = "\n".join(code_block_codes)
code_block_codes = []
draw.rounded_rectangle((x, code_block_start_y, image_width - 10, y+CODE_BLOCK_CODES_MARGIN_VERTICAL + CODE_BLOCK_TEXT_MARGIN), radius=5, fill=CODE_BLOCK_BG_COLOR, width=2)
font = ImageFont.truetype(font_path1, CODE_BLOCK_FONT_SIZE)
draw.text((x + CODE_BLOCK_CODES_MARGIN_HORIZONTAL, code_block_start_y + CODE_BLOCK_CODES_MARGIN_VERTICAL), codes, font=font, fill=font_color)
y += CODE_BLOCK_CODES_MARGIN_VERTICAL + CODE_BLOCK_MARGIN
# y += font_size+10
elif re.search(r"`(.*?)`", line):
y += INLINE_CODE_MARGIN # 上边距
# 处理行内代码
code_regex = r"`(.*?)`"
parts_inline = re.findall(code_regex, line)
# print(parts_inline)
parts = re.split(code_regex, line)
# print(parts)
for part in parts:
# the judge has a tiny bug.
# when line is like "hi`hi`". all the parts will be in parts_inline.
if part in parts_inline:
font = ImageFont.truetype(font_path, INLINE_CODE_FONT_SIZE)
code_text = part.strip("`")
code_width = font.getsize(code_text)[0] + INLINE_CODE_FONT_MARGIN*2
x += INLINE_CODE_MARGIN
code_box = (x, y, x + code_width, y + INLINE_CODE_BG_HEIGHT)
draw.rounded_rectangle(code_box, radius=5, fill=INLINE_CODE_BG_COLOR, width=2) # 使用灰色填充矩形框作为引用背景
draw.text((x+INLINE_CODE_FONT_MARGIN, y), code_text, font=font, fill=font_color)
x += code_width+INLINE_CODE_MARGIN-INLINE_CODE_FONT_MARGIN
else:
font = ImageFont.truetype(font_path, font_size)
draw.text((x, y), part, font=font, fill=font_color)
x += font.getsize(part)[0]
y += font_size + INLINE_CODE_MARGIN
x = 10
else:
# 处理普通文本
if line == "":
y += TEXT_LINE_MARGIN
else:
font = ImageFont.truetype(font_path, font_size)
draw.text((x, y), line, font=font, fill=font_color)
y += font_size + TEXT_LINE_MARGIN*2
# 图片特殊处理
if index in images:
image_res = images[index]
# 最大不得超过image_width的50%
if image_res.size[0] > image_width*0.5:
image_res = image_res.resize((int(image_width*0.5), int(image_res.size[1]*image_width*0.5/image_res.size[0])))
image.paste(image_res, (IMAGE_MARGIN, y))
y += image_res.size[1] + IMAGE_MARGIN*2
return image
def save_temp_img(img: Image) -> str:
if not os.path.exists("temp"):
os.makedirs("temp")
# 获得文件创建时间清除超过1小时的
try:
for f in os.listdir("temp"):
path = os.path.join("temp", f)
if os.path.isfile(path):
ctime = os.path.getctime(path)
if time.time() - ctime > 3600:
os.remove(path)
except Exception as e:
log(f"清除临时文件失败: {e}", level=LEVEL_WARNING, tag="GeneralUtils")
# 获得时间戳
timestamp = int(time.time())
p = f"temp/{timestamp}.png"
img.save(p)
return p
def create_text_image(title: str, text: str, max_width=30, font_size=20):
'''
文本转图片。
title: 标题
text: 文本内容
max_width: 文本宽度最大值默认30
font_size: 字体大小默认20
返回:文件路径
'''
try:
img = word2img(title, text, max_width, font_size)
p = save_temp_img(img)
return p
except Exception as e:
raise e
def create_markdown_image(text: str):
'''
markdown文本转图片。
返回:文件路径
'''
try:
img = render_markdown(text)
p = save_temp_img(img)
return p
except Exception as e:
raise e
# 迁移配置文件到 cmd_config.json
def try_migrate_config(old_config: dict):
cc = CmdConfig()
if cc.get("qqbot", None) is None:
# 未迁移过
for k in old_config:
cc.put(k, old_config[k])
def get_local_ip_addresses():
try:
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect(('8.8.8.8', 80))
ip = s.getsockname()[0]
finally:
s.close()
return ip
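A short sketch of the logging and network helpers above; LOG_LEVEL in the environment gates which levels are printed, and passing err= upgrades the message to ERROR:

```python
import os
import util.general_utils as gu

os.environ["LOG_LEVEL"] = gu.LEVEL_DEBUG   # show DEBUG and above for this process
gu.log("bot starting", level=gu.LEVEL_INFO, tag="Demo")
gu.log("a very long payload ...", level=gu.LEVEL_DEBUG, tag="Demo", max_len=200)

try:
    1 / 0
except Exception as e:
    gu.log("task failed", tag="Demo", err=e)   # printed as ERROR with the reason appended

print(gu.port_checker(8080, host="127.0.0.1"))  # arbitrary port; True only if something is listening

# create_text_image / create_markdown_image render text to an image under temp/;
# they require one of the font paths probed in word2img / render_markdown to exist.
# path = gu.create_text_image("标题", "正文内容")
```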

View File

@@ -1,11 +0,0 @@
import logging
from logging.handlers import RotatingFileHandler
import colorlog
logger = logging.getLogger("QQChannelChatGPT")
logger.setLevel(logging.DEBUG)
handler = colorlog.StreamHandler()
fmt = "%(log_color)s[%(name)s] %(message)s"
handler.setFormatter(colorlog.ColoredFormatter(
fmt))
logger.addHandler(handler)

166
util/plugin_util.py Normal file
View File

@@ -0,0 +1,166 @@
'''
插件工具函数
'''
import os
import inspect
try:
import git.exc
from git.repo import Repo
except ImportError:
pass
import shutil
from pip._internal import main as pipmain
import importlib
import stat
# 找出模块里所有的类名
def get_classes(p_name, arg):
classes = []
clsmembers = inspect.getmembers(arg, inspect.isclass)
for (name, _) in clsmembers:
# print(name, p_name)
if p_name.lower() == name.lower()[:-6] or name.lower() == "main":
classes.append(name)
break
return classes
# 获取一个文件夹下所有的模块, 文件名和文件夹名相同
def get_modules(path):
modules = []
for root, dirs, files in os.walk(path):
# 获得所在目录名
p_name = os.path.basename(root)
for file in files:
"""
与文件夹名不计大小写相同或者是main.py的都算启动模块
"""
if file.endswith(".py") and not file.startswith("__") and (p_name.lower() == file[:-3].lower() or file[:-3].lower() == "main"):
modules.append({
"pname": p_name,
"module": file[:-3],
})
return modules
def get_plugin_store_path():
if os.path.exists("addons/plugins"):
return "addons/plugins"
elif os.path.exists("QQChannelChatGPT/addons/plugins"):
return "QQChannelChatGPT/addons/plugins"
elif os.path.exists("AstrBot/addons/plugins"):
return "AstrBot/addons/plugins"
else:
raise FileNotFoundError("插件文件夹不存在。")
def get_plugin_modules():
plugins = []
try:
if os.path.exists("addons/plugins"):
plugins = get_modules("addons/plugins")
return plugins
elif os.path.exists("QQChannelChatGPT/addons/plugins"):
plugins = get_modules("QQChannelChatGPT/addons/plugins")
return plugins
else:
return None
except BaseException as e:
raise e
def plugin_reload(cached_plugins: dict, target: str = None, all: bool = False):
plugins = get_plugin_modules()
if plugins is None:
return False, "未找到任何插件模块"
fail_rec = ""
for plugin in plugins:
try:
p = plugin['module']
root_dir_name = plugin['pname']
if p not in cached_plugins or p == target or all:
module = __import__("addons.plugins." + root_dir_name + "." + p, fromlist=[p])
if p in cached_plugins:
module = importlib.reload(module)
cls = get_classes(p, module)
obj = getattr(module, cls[0])()
try:
info = obj.info()
if not isinstance(info, dict):
fail_rec += f"载入插件{p}失败,原因: 插件信息格式不正确\n"
continue
if 'name' not in info or 'desc' not in info or 'version' not in info or 'author' not in info:
fail_rec += f"载入插件{p}失败,原因: 插件信息不完整\n"
continue
except BaseException as e:
fail_rec += f"调用插件{p} info失败, 原因: {str(e)}\n"
continue
cached_plugins[info['name']] = {
"module": module,
"clsobj": obj,
"info": info,
"name": info['name'],
"root_dir_name": root_dir_name,
}
except BaseException as e:
fail_rec += f"加载{p}插件出现问题,原因 {str(e)}\n"
if fail_rec == "":
return True, None
else:
return False, fail_rec
def install_plugin(repo_url: str, cached_plugins: dict):
ppath = get_plugin_store_path()
# 删除末尾的 /
if repo_url.endswith("/"):
repo_url = repo_url[:-1]
# 得到 url 的最后一段
d = repo_url.split("/")[-1]
# 转换非法字符:-
d = d.replace("-", "_")
# 创建文件夹
plugin_path = os.path.join(ppath, d)
if os.path.exists(plugin_path):
remove_dir(plugin_path)
Repo.clone_from(repo_url, to_path=plugin_path, branch='master')
# 读取插件的requirements.txt
if os.path.exists(os.path.join(plugin_path, "requirements.txt")):
if pipmain(['install', '-r', os.path.join(plugin_path, "requirements.txt"), '--quiet']) != 0:
raise Exception("插件的依赖安装失败, 需要您手动 pip 安装对应插件的依赖。")
ok, err = plugin_reload(cached_plugins, target=d)
if not ok: raise Exception(err)
def uninstall_plugin(plugin_name: str, cached_plugins: dict):
if plugin_name not in cached_plugins:
raise Exception("插件不存在。")
root_dir_name = cached_plugins[plugin_name]["root_dir_name"]
ppath = get_plugin_store_path()
del cached_plugins[plugin_name]
if not remove_dir(os.path.join(ppath, root_dir_name)):
raise Exception("移除插件成功,但是删除插件文件夹失败。您可以手动删除该文件夹,位于 addons/plugins/ 下。")
def update_plugin(plugin_name: str, cached_plugins: dict):
if plugin_name not in cached_plugins:
raise Exception("插件不存在。")
ppath = get_plugin_store_path()
root_dir_name = cached_plugins[plugin_name]["root_dir_name"]
plugin_path = os.path.join(ppath, root_dir_name)
repo = Repo(path = plugin_path)
repo.remotes.origin.pull()
# 读取插件的requirements.txt
if os.path.exists(os.path.join(plugin_path, "requirements.txt")):
if pipmain(['install', '-r', os.path.join(plugin_path, "requirements.txt"), '--quiet']) != 0:
raise Exception("插件依赖安装失败, 需要您手动pip安装对应插件的依赖。")
ok, err = plugin_reload(cached_plugins, target=plugin_name)
if not ok: raise Exception(err)
def remove_dir(file_path) -> bool:
try_cnt = 50
while try_cnt > 0:
if not os.path.exists(file_path):
return False
try:
shutil.rmtree(file_path)
return True
except PermissionError as e:
err_file_path = str(e).split("\'", 2)[1]
if os.path.exists(err_file_path):
os.chmod(err_file_path, stat.S_IWUSR)
try_cnt -= 1
return False
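A sketch of the plugin contract implied by plugin_reload above: a plugin lives under addons/plugins/<dir>/ in a module named like the directory (or main.py), exposes a class whose name matches (or is named Main), and must return a dict with name/desc/version/author from info(). The HelloWorld skeleton and repository URL below are hypothetical, and only the info() method shown in this section is modelled:

```python
# addons/plugins/helloworld/main.py -- hypothetical minimal plugin skeleton
class Main:
    def info(self):
        # All four keys are required, otherwise plugin_reload rejects the plugin.
        return {
            "name": "helloworld",
            "desc": "A demo plugin.",
            "version": "0.1",
            "author": "example",
        }

# Elsewhere in the bot, the lifecycle helpers are driven like this:
import util.plugin_util as plugin_util

cached_plugins = {}
ok, err = plugin_util.plugin_reload(cached_plugins, all=True)   # load everything under addons/plugins
if not ok:
    print("some plugins failed to load:\n", err)

# install_plugin clones the repo, installs requirements.txt if present, then reloads;
# uninstall_plugin expects the info()['name'] key. Both URL and name are illustrative.
# plugin_util.install_plugin("https://github.com/example/helloworld", cached_plugins)
# plugin_util.uninstall_plugin("helloworld", cached_plugins)
```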

25
util/webapp_replit.py Normal file
View File

@@ -0,0 +1,25 @@
from flask import Flask
from threading import Thread
import datetime
app = Flask(__name__)
@app.route('/')
def main_func():
content = "<h1>QQChannelChatGPT Web APP</h1>"
content += "<p>" + "Online @ " + str(datetime.datetime.now()) + "</p>"
content += "<p>欢迎Star本项目</p>"
return content
def run():
app.run(host="0.0.0.0", port=8080)
def keep_alive():
server = Thread(target=run)
server.start()
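A usage sketch for the keep-alive web app above: on hosts such as Replit it is typically started once before the bot's own main loop, so the Flask thread keeps the instance reachable on port 8080:

```python
from util.webapp_replit import keep_alive

keep_alive()   # starts the Flask app in a background thread on port 8080
# ... start the bot's main loop here ...
```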