Compare commits

...

100 Commits

Author SHA1 Message Date
Soulter
e77b7014e6 fix: fix errors when updating or uninstalling plugins 2024-07-30 09:15:45 +08:00
Soulter
d57fd0f827 fix: metadata is not serializable 2024-07-29 09:47:42 +08:00
Soulter
6a83d2a62a update version 2024-07-28 12:11:07 +08:00
Soulter
2d29726c18 fix: fix restart failure caused by paths containing spaces 2024-07-28 11:55:57 +08:00
Soulter
b241b0f954 update version 2024-07-27 12:31:15 -04:00
Soulter
171dd1dc02 feat: official QQ bot API supports C2C messages 2024-07-27 12:30:09 -04:00
Soulter
af62d969d7 perf: revise the send_msg interface 2024-07-27 11:26:02 -04:00
Soulter
c4fd9a66c6 update version to 3.3.3 2024-07-27 11:08:51 -04:00
Soulter
d191997a39 feat: aiocqhttp adapter supports the proactive message-sending interface 2024-07-27 11:07:26 -04:00
Soulter
853ac4c104 fix: improve the update prompt 2024-07-27 04:58:15 -04:00
Soulter
ed053acad6 update: version 2024-07-27 04:47:57 -04:00
Soulter
f147634e51 fix: fix update exception 2024-07-27 04:43:53 -04:00
Soulter
e3b2a68341 Merge pull request #179 from Soulter/refactor-v3.3.0
feat: add a Provider registration interface; add the provider command
2024-07-27 16:31:03 +08:00
Soulter
84c450aef9 feat: add a Provider registration interface; add the provider command 2024-07-27 04:25:27 -04:00
Soulter
f52a0eb43a fix: fix default config migration issue 2024-07-27 08:58:26 +08:00
Soulter
6ed7559518 Merge pull request #174 from Soulter/refactor-v3.3.0
Rewrite the project to improve stability
2024-07-26 18:24:33 +08:00
Soulter
d977dbe9a7 update version 2024-07-26 06:24:11 -04:00
Soulter
17fc761c61 chore: update default plugin 2024-07-26 05:15:41 -04:00
Soulter
af878f2ed3 fix: fix issue where running aiocqhttp prevented Ctrl+C from exiting the bot
perf: support registering tasks via context
2024-07-26 05:02:29 -04:00
Soulter
bb2164c324 perf: add message_handler to context 2024-07-25 12:58:45 -04:00
Soulter
0496becc50 perf: improve stability of sending images via aiocqhttp 2024-07-25 12:33:31 -04:00
Soulter
618f8aa7d2 fix: fix bugs in several commands 2024-07-25 10:44:17 -04:00
Soulter
c57f711c48 update: metrics refactoring 2024-07-24 09:48:25 -04:00
Soulter
4edd11f2f7 fix: fix several bugs 2024-07-24 09:19:43 -04:00
Soulter
a2cf058951 update: refactor codes 2024-07-24 18:40:08 +08:00
Soulter
d52eb10ddd chore: remove large font files to shrink the source code size 2024-07-07 21:37:44 +08:00
Soulter
4b6dae71fc update: update the default plugin helloworld 2024-07-07 21:00:18 +08:00
Soulter
ddad30c22e feat: support uploading plugins locally 2024-07-07 20:59:12 +08:00
Soulter
77067c545c feat: use a zip-archive-based update mechanism 2024-07-07 18:26:58 +08:00
Soulter
465d283cad Update README.md 2024-06-23 11:23:17 +08:00
Soulter
05071144fb fix: fix text-to-image issues 2024-06-09 08:56:52 -04:00
Soulter
a4e7904953 chore: clean codes 2024-06-03 20:40:18 -04:00
Soulter
986a8c7554 Update README.md 2024-06-03 21:18:53 +08:00
Soulter
9272843b77 Update README.md 2024-06-03 21:18:00 +08:00
Soulter
542d4bc703 typo: fix t2i typo 2024-06-03 08:47:51 -04:00
Soulter
e3640fdac9 perf: improve output of update, help and other commands 2024-06-03 08:33:17 -04:00
Soulter
f64ab4b190 chore: remove some deprecated methods 2024-06-03 05:54:40 -04:00
Soulter
bd571e1577 feat: provide a new text-to-image style 2024-06-03 05:51:44 -04:00
Soulter
e4a5cbd893 perf: improve stability when loading plugins 2024-06-03 00:20:56 -04:00
Soulter
7a9fd7fd1e fix: fix "config file not found" error 2024-06-02 23:14:48 -04:00
Soulter
d9b60108db Update README.md 2024-05-30 18:11:57 +08:00
Soulter
8455c8b4ed Update README.md 2024-05-30 18:03:59 +08:00
Soulter
5c2e7099fc Update README.md 2024-05-26 21:38:32 +08:00
Soulter
1fd1d55895 Update config.py 2024-05-26 21:31:26 +08:00
Soulter
5ce4137e75 fix: fix the model command 2024-05-26 21:15:33 +08:00
Soulter
d49179541e feat: pass ctx to plugins' init method 2024-05-26 21:10:19 +08:00
Soulter
676f258981 perf: terminate child processes after restart 2024-05-26 21:09:23 +08:00
Soulter
fa44749240 fix: fix relative paths preventing the Windows launcher from installing dependencies 2024-05-26 18:15:25 +08:00
Soulter
6c856f9da2 fix(typo): fix a typo in the plugin registrar that prevented registering message-platform plugins 2024-05-26 18:07:07 +08:00
Soulter
e8773cea7f fix: fix config file not being migrated properly 2024-05-25 20:59:37 +08:00
Soulter
4d36ffcb08 fix: improve handling of plugin results 2024-05-25 18:46:38 +08:00
Soulter
c653e492c4 Merge pull request #164 from Soulter/stat-upload-perf
Improve the /models command
2024-05-25 18:35:56 +08:00
Soulter
f08de1f404 perf: add the models command to the help text 2024-05-25 18:34:08 +08:00
Soulter
1218691b61 perf: relax restrictions on the model command, allow custom model names, and persist the selected model 2024-05-25 18:29:01 +08:00
Soulter
61fc27ff79 Merge pull request #163 from Soulter/stat-upload-perf
Improve the statistics data structure
2024-05-25 18:28:08 +08:00
Soulter
123ee24f7e fix: stat perf 2024-05-25 18:01:16 +08:00
Soulter
52c9045a28 feat: improve the statistics data structure 2024-05-25 17:47:41 +08:00
Soulter
f00f1e8933 fix: fix drawing error 2024-05-24 13:33:02 +08:00
Soulter
8da4433e57 chore: change related fields 2024-05-21 08:44:05 +08:00
Soulter
7babb87934 perf: change library loading order 2024-05-21 08:41:46 +08:00
Soulter
f67b171385 perf: move the database into the data directory 2024-05-19 17:10:11 +08:00
Soulter
1780d1355d perf: switch all internal pip calls to the Aliyun mirror; improve plugin dependency update logic 2024-05-19 16:45:08 +08:00
Soulter
5a3390e4f3 fix: force update 2024-05-19 16:06:47 +08:00
Soulter
337d96b41d Merge pull request #160 from Soulter/dev_default_openai_refactor
Improve the built-in OpenAI LLM interaction, personas, and web search
2024-05-19 15:23:19 +08:00
Soulter
38a1dfea98 fix: web content scraper add proxy 2024-05-19 15:08:22 +08:00
Soulter
fbef73aeec fix: websearch encoding set to utf-8 2024-05-19 14:42:28 +08:00
Soulter
d6214c2b7c fix: web search 2024-05-19 12:55:54 +08:00
Soulter
d58c86f6fc perf: improve websearch; restructure the project 2024-05-19 12:46:07 +08:00
Soulter
ea34c20198 perf: improve persona and LVM handling 2024-05-18 10:34:35 +08:00
Soulter
934ca94e62 refactor: rewrite the OpenAI LLM module 2024-05-17 22:56:44 +08:00
Soulter
1775327c2e chore: refactor openai official 2024-05-17 09:07:11 +08:00
Soulter
707fcad8b4 feat: models command for listing GPT models 2024-05-17 00:06:49 +08:00
Soulter
f143c5afc6 fix: fix error in the plugin v subcommand 2024-05-16 23:11:07 +08:00
Soulter
99f94b2611 fix: fix some commands being uncallable 2024-05-16 23:04:47 +08:00
Soulter
e39c1f9116 remove: remove automatic switching of multimodal models 2024-05-16 22:46:50 +08:00
Soulter
235e0b9b8f fix: gocq logging 2024-05-09 13:24:31 +08:00
Soulter
d5a9bed8a4 fix(updator): IterableList object has no attribute origin
2024-05-08 19:18:21 +08:00
Soulter
d7dc8a7612 chore: add some logging; bump version 2024-05-08 19:12:23 +08:00
Soulter
08cd3ca40c perf: better log output;
fix: fix 404 when refreshing the dashboard
2024-05-08 19:01:36 +08:00
Soulter
a13562dcea fix: fix "config file missing" prompt when the launcher loads plugins that have configs 2024-05-08 16:28:30 +08:00
Soulter
d7a0c0d1d0 Update requirements.txt 2024-05-07 15:58:51 +08:00
Soulter
c0729b2d29 fix: fix plugin reload issues 2024-04-22 19:04:15 +08:00
Soulter
a80f474290 fix: fix errors when updating plugins 2024-04-22 18:36:56 +08:00
Soulter
699207dd54 update: version 2024-04-21 22:41:48 +08:00
Soulter
e7708010c9 fix: fix inability to reply to messages on the gocq platform 2024-04-21 22:39:09 +08:00
Soulter
f66091e08f 🎨: clean codes 2024-04-21 22:20:23 +08:00
Soulter
03bb932f8f fix: fix dashboard errors 2024-04-21 22:16:42 +08:00
Soulter
fbf8b349e0 update: helloworld 2024-04-21 22:13:27 +08:00
Soulter
e9278fce6a !! delete: remove all support for reverse-engineered ChatGPT. 2024-04-21 22:12:09 +08:00
Soulter
9a7db956d5 fix: fix error caused by the readability dependency on Python 3.10.x 2024-04-21 16:40:02 +08:00
Soulter
13196dd667 perf: change package paths 2024-03-15 14:49:44 +08:00
Soulter
52b80e24d2 Merge remote-tracking branch 'refs/remotes/origin/master' 2024-03-15 14:29:48 +08:00
Soulter
7dff87e65d fix: fix inability to update to a specified version 2024-03-15 14:29:28 +08:00
Soulter
31ee64d1b2 Update docker-image.yml 2024-03-15 14:11:57 +08:00
Soulter
8e865b6918 fix: fix update not working when no LLM is configured 2024-03-15 14:05:16 +08:00
Soulter
66f91e5832 update: bump version number 2024-03-15 13:50:57 +08:00
Soulter
cd2d368f9c fix: fix dashboard update-to-specified-version failing 2024-03-15 13:48:14 +08:00
Soulter
7736c1c9bd feat: official QQ bot API supports choosing whether to receive QQ group messages 2024-03-15 13:44:18 +08:00
Soulter
6728c0b7b5 chore: rename packages 2024-03-15 13:37:51 +08:00
Soulter
344f92e0e7 perf: unify internal base message objects as AstrBotMessage
feat: support QQ group messages via the official QQ API
2024-03-14 13:56:32 +08:00
130 changed files with 5318 additions and 4676 deletions


@@ -14,6 +14,8 @@ jobs:
uses: actions/checkout@v2
- name: Build image
run: |
git clone https://github.com/Soulter/AstrBot
cd AstrBot
docker build -t ${{ secrets.DOCKER_HUB_USERNAME }}/astrbot:latest .
- name: Publish image
run: |

.gitignore (5 changed lines)

@@ -7,6 +7,7 @@ configs/config.yaml
**/.DS_Store
temp
cmd_config.json
addons/plugins/
data/
data/*
cookies.json
logs/
addons/plugins


@@ -1,4 +1,4 @@
FROM python:3.10.13-bullseye
FROM python:3.10-slim
WORKDIR /AstrBot
COPY . /AstrBot/

README.md (176 changed lines)

@@ -1,181 +1,51 @@
<p align="center">
<img src="https://github.com/Soulter/AstrBot/assets/37870767/b1686114-f3aa-4963-b07f-28bf83dc0a10" alt="QQChannelChatGPT" width="200" />
<img width="806" alt="image" src="https://github.com/Soulter/AstrBot/assets/37870767/c6f057d9-46d7-4144-8116-00a962941746">
</p>
<div align="center">
# AstrBot
[![GitHub release (latest by date)](https://img.shields.io/github/v/release/Soulter/AstrBot)](https://github.com/Soulter/AstrBot/releases/latest)
<img src="https://wakatime.com/badge/user/915e5316-99c6-4563-a483-ef186cf000c9/project/34412545-2e37-400f-bedc-42348713ac1f.svg" alt="wakatime">
<img src="https://img.shields.io/badge/python-3.9+-blue.svg" alt="python">
<a href="https://hub.docker.com/r/soulter/astrbot"><img alt="Docker pull" src="https://img.shields.io/docker/pulls/soulter/astrbot.svg"/></a>
<a href="https://qm.qq.com/cgi-bin/qm/qr?k=EYGsuUTfe00_iOu9JTXS7_TEpMkXOvwv&jump_from=webapi&authKey=uUEMKCROfsseS+8IzqPjzV3y1tzy4AkykwTib2jNkOFdzezF9s9XknqnIaf3CDft">
<img alt="Static Badge" src="https://img.shields.io/badge/QQ群-322154837-purple">
</a>
<img alt="Static Badge" src="https://img.shields.io/badge/频道-x42d56aki2-purple">
<a href="https://astrbot.soulter.top/center">Deployment</a>
<a href="https://github.com/Soulter/QQChannelChatGPT/issues">Report issues</a>
<a href="https://astrbot.soulter.top/center/docs/%E5%BC%80%E5%8F%91/%E6%8F%92%E4%BB%B6%E5%BC%80%E5%8F%91">Plugin development (as few as 25 lines)</a>
<a href="https://github.com/Soulter/AstrBot/issues">Report issues</a>
<a href="https://astrbot.soulter.top/center/docs/%E5%BC%80%E5%8F%91/%E6%8F%92%E4%BB%B6%E5%BC%80%E5%8F%91">Plugin development</a>
</div>
## 🤔 Things you may want to know
- **How to deploy?** [Documentation](https://astrbot.soulter.top/center/docs/%E9%83%A8%E7%BD%B2/%E9%80%9A%E8%BF%87Docker%E9%83%A8%E7%BD%B2) (if deployment fails, feel free to ask in the QQ group <3)
- **go-cqhttp fails to start with a login error** [Search for solutions here](https://github.com/Mrs4s/go-cqhttp/issues)
- **Program crashes / bot fails to start** [File an issue or report in the group](https://github.com/Soulter/QQChannelChatGPT/issues)
- **How to enable ChatGPT, Claude, HuggingChat and other language models** [See the docs](https://astrbot.soulter.top/center/docs/%E4%BD%BF%E7%94%A8/%E5%A4%A7%E8%AF%AD%E8%A8%80%E6%A8%A1%E5%9E%8B)
## 🛠️ Features
## 🧩 Features:
Recent additions:
1. Visual dashboard
2. One-click Docker deployment ([link](https://astrbot.soulter.top/center/docs/%E9%83%A8%E7%BD%B2/%E9%80%9A%E8%BF%87Docker%E9%83%A8%E7%BD%B2))
🌍 Supported messaging platforms/APIs:
- go-cqhttp (QQ, QQ Channels)
- Official QQ bot API
🌍 Supported messaging platforms:
- QQ groups and QQ Channels (OneBot, official QQ API)
- Telegram (via the [astrbot_plugin_telegram](https://github.com/Soulter/astrbot_plugin_telegram) plugin)
- WeChat (via the [astrbot_plugin_vchat](https://github.com/z2z63/astrbot_plugin_vchat) plugin)
🌍 Supported AI language models:
🌍 Supported large language models:
**Text models / image understanding**
- OpenAI GPT and DALL·E series
- Claude (via the [LLMs plugin](https://github.com/Soulter/llms))
- HuggingChat (via the [LLMs plugin](https://github.com/Soulter/llms))
- Gemini (via the [LLMs plugin](https://github.com/Soulter/llms))
- OpenAI GPT-3 (native support)
- OpenAI GPT-3.5 (native support)
- OpenAI GPT-4 (native support)
- Claude (free, supported by the [LLMs plugin](https://github.com/Soulter/llms))
- HuggingChat (free, supported by the [LLMs plugin](https://github.com/Soulter/llms))
- Gemini (free, supported by the [LLMs plugin](https://github.com/Soulter/llms))
🌍 Bot capabilities:
- LLM chat, personas, web search
- Visual management dashboard
- Handles messages from multiple platforms simultaneously
- Per-user session isolation
- Plugin support
- Text-to-image replies (Markdown)
**Image generation**
- OpenAI DALL·E API
- NovelAI/Naifu (free, supported by the [AIDraw plugin](https://github.com/Soulter/aidraw))
## 🧩 Plugin support
🌍 Bot capabilities:
- Visual dashboard (beta)
- Deploy the bot to QQ and QQ Channels at the same time
- LLM chat
- LLM web search **(currently OpenAI-series models only; enable with the web on command in the latest version)**
- Plugins (type `plugin` in the QQ or QQ Channel chat box for details)
- Render text replies as images (replies are sent as rendered markdown images, **greatly reducing the chance of being flagged**; enable qq_pic_mode manually in `cmd_config.json`)
- Persona settings
- Keyword replies
- Hot updates (to update this project, **just** type `update latest r` in the QQ or QQ Channel chat box)
- One-click Windows deployment: https://github.com/Soulter/QQChatGPTLauncher/releases/latest
<!--
### 基本功能
<details>
<summary>✅ 回复符合上下文</summary>
- 程序向API发送近多次对话内容模型根据上下文生成回复
- 你可在`configs/config.yaml`中修改`total_token_limit`来近似控制缓存大小。
</details>
<details>
<summary>✅ 超额自动切换</summary>
- 超额时程序自动切换openai的key方便快捷
</details>
<details>
<summary>✅ 支持统计频道、消息数量等信息</summary>
- 实现了简单的统计功能
</details>
<details>
<summary>✅ 多并发处理,回复速度快</summary>
- 使用了协程理论最高可以支持每个子频道每秒回复5条信息
</details>
<details>
<summary>✅ 持久化转储历史记录,重启不丢失</summary>
- 使用内置的sqlite数据库存储历史记录到本地
- 方式为定时转储,可在`config.yaml`下修改`dump_history_interval`来修改间隔时间,单位为分钟。
</details>
<details>
<summary>✅ 支持多种指令控制</summary>
- 详见下方`指令功能`
</details>
<details>
<summary>✅ 官方API稳定</summary>
- 不使用ChatGPT逆向接口而使用官方API接口稳定方便。
- QQ频道机器人框架为QQ官方开源的框架稳定。
</details> -->
<!-- > 关于tokentoken就相当于是AI中的单词数但是不等于单词数`text-davinci-003`模型中最大可以支持`4097`个token。在发送信息时这个机器人会将用户的历史聊天记录打包发送给ChatGPT因此`token`也会相应的累加为了保证聊天的上下文的逻辑性就有了缓存token。 -->
### 🛠️ Plugin support
This project supports plugins.
> Install one with `plugin i <plugin GitHub URL>`.
Some plugins:
- `LLMS`: https://github.com/Soulter/llms | Claude and HuggingChat language model integration.
- `GoodPlugins`: https://github.com/Soulter/goodplugins | random anime images, anime search, meme generator, and more
- `sysstat`: https://github.com/Soulter/sysstatqcbot | view system status
- `BiliMonitor`: https://github.com/Soulter/BiliMonitor | subscribe to Bilibili feeds
- `liferestart`: https://github.com/Soulter/liferestart | life-restart simulator
For plugin usage and the full list, see: [AstrBot docs - Plugins](https://astrbot.soulter.top/center/docs/%E4%BD%BF%E7%94%A8/%E6%8F%92%E4%BB%B6)
## ✨ Demo
<img width="900" alt="image" src="https://github.com/Soulter/AstrBot/assets/37870767/824d1ff3-7b85-481c-b795-8e62dedb9fd7">
<!--
### 指令
#### OpenAI官方API
在频道内需要先`@`机器人之后再输入指令在QQ中暂时需要在消息前加上`ai `,不需要@
- `/reset`重置prompt
- `/his`查看历史记录(每个用户都有独立的会话)
- `/his [页码数]`查看不同页码的历史记录。例如`/his 2`查看第2页
- `/token`查看当前缓存的总token数
- `/count` 查看统计
- `/status` 查看chatGPT的配置
- `/help` 查看帮助
- `/key` 动态添加key
- `/set` 人格设置面板
- `/keyword nihao 你好` 设置关键词回复。nihao->你好
- `/revgpt` 切换为ChatGPT逆向库
- `/画` 画画
#### 逆向ChatGPT库语言模型
- `/gpt` 切换为OpenAI官方API
* 切换模型指令支持临时回复。如`/a 你好`将会临时使用一次bing模型 -->
<!--
## 🙇‍感谢
本项目使用了一下项目:
[ChatGPT by acheong08](https://github.com/acheong08/ChatGPT)
[EdgeGPT by acheong08](https://github.com/acheong08/EdgeGPT)
[go-cqhttp by Mrs4s](https://github.com/Mrs4s/go-cqhttp)
[nakuru-project by Lxns-Network](https://github.com/Lxns-Network/nakuru-project) -->


@@ -1 +0,0 @@
(deleted: minified dashboard error-page bundle; content omitted)

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@@ -1 +0,0 @@
(deleted: minified dashboard register-page bundle; content omitted)


@@ -0,0 +1,10 @@
# helloworld
An AstrBot plugin template
A template plugin for AstrBot's plugin feature
# Support
[Documentation](https://astrbot.soulter.top/center/docs/%E5%BC%80%E5%8F%91/%E6%8F%92%E4%BB%B6%E5%BC%80%E5%8F%91/)


@@ -0,0 +1 @@
https://github.com/Soulter/helloworld


@@ -1,170 +0,0 @@
import os
import json
import shutil
from nakuru.entities.components import *
from nakuru import (
GroupMessage,
FriendMessage
)
from botpy.message import Message, DirectMessage
flag_not_support = False
try:
from util.plugin_dev.api.v1.config import *
from util.plugin_dev.api.v1.bot import (
PluginMetadata,
PluginType,
AstrMessageEvent,
CommandResult,
)
from util.plugin_dev.api.v1.register import register_llm, unregister_llm
except ImportError:
flag_not_support = True
print("llms: 导入接口失败。请升级到 AstrBot 最新版本。")
'''
Note: the plugin class must be named XXXPlugin or Main.
Tip: fork this template repository and clone it into the addons/plugins/ directory under the bot folder, then open it with PyCharm/VS Code etc. for a better development experience (autocompletion and so on).
'''
class HelloWorldPlugin:
"""
Initialization; you may simply pass.
"""
def __init__(self) -> None:
# Move the old config file into the data directory.
if os.path.exists("keyword.json"):
shutil.move("keyword.json", "data/keyword.json")
self.keywords = {}
if os.path.exists("data/keyword.json"):
self.keywords = json.load(open("data/keyword.json", "r"))
else:
self.save_keyword()
"""
The bot calls this function for every message.
Return contract: bool: whether the plugin responds to the message (every loaded plugin receives every message; return False if it does not respond).
Tuple: None, or a 3-tuple. Return None if not responding. If responding, the first element is whether the command succeeded, the second is the reply message-chain list, and the third is the command name.
Example: for a plugin named "yuanshen" receiving the message "原神 可莉": return False, None to ignore it; return True, tuple([False, "请求失败。", "yuanshen"]) if handling failed; return True, tuple([True, "结果文本", "yuanshen"]) if it succeeded.
"""
def run(self, ame: AstrMessageEvent):
if ame.message_str == "helloworld":
return CommandResult(
hit=True,
success=True,
message_chain=[Plain("Hello World!!")],
command_name="helloworld"
)
if ame.message_str.startswith("/keyword") or ame.message_str.startswith("keyword"):
return self.handle_keyword_command(ame)
ret = self.check_keyword(ame.message_str)
if ret: return ret
return CommandResult(
hit=False,
success=False,
message_chain=None,
command_name=None
)
def handle_keyword_command(self, ame: AstrMessageEvent):
l = ame.message_str.split(" ")
# Extract the image, if any.
image_url = ""
for comp in ame.message_obj.message:
if isinstance(comp, Image) and image_url == "":
if comp.url is None:
image_url = comp.file
else:
image_url = comp.url
command_result = CommandResult(
hit=True,
success=False,
message_chain=None,
command_name="keyword"
)
if len(l) == 1 or (len(l) == 2 and image_url == ""):
ret = """【设置关键词回复】
示例:
1. keyword <触发词> <回复词>
keyword hi 你好
发送 hi 回复你好
* 回复词支持图片
2. keyword d <触发词>
keyword d hi
删除 hi 触发词产生的回复"""
command_result.success = True
command_result.message_chain = [Plain(ret)]
return command_result
elif len(l) == 3 and l[1] == "d":
if l[2] not in self.keywords:
command_result.message_chain = [Plain(f"关键词 {l[2]} 不存在")]
return command_result
self.keywords.pop(l[2])
self.save_keyword()
command_result.success = True
command_result.message_chain = [Plain("删除成功")]
return command_result
else:
self.keywords[l[1]] = {
"plain_text": " ".join(l[2:]),
"image_url": image_url
}
self.save_keyword()
command_result.success = True
command_result.message_chain = [Plain("设置成功")]
return command_result
def save_keyword(self):
json.dump(self.keywords, open("data/keyword.json", "w"), ensure_ascii=False)
def check_keyword(self, message_str: str):
for k in self.keywords:
if message_str == k:
plain_text = ""
if 'plain_text' in self.keywords[k]:
plain_text = self.keywords[k]['plain_text']
else:
plain_text = self.keywords[k]
image_url = ""
if 'image_url' in self.keywords[k]:
image_url = self.keywords[k]['image_url']
if image_url != "":
res = [Plain(plain_text), Image.fromURL(image_url)]
return CommandResult(
hit=True,
success=True,
message_chain=res,
command_name="keyword"
)
return CommandResult(
hit=True,
success=True,
message_chain=[Plain(plain_text)],
command_name="keyword"
)
"""
Plugin metadata.
Called when the user types `plugin v <plugin name>`; returns help information.
Required return format: dict{
"name": str, # plugin name
"desc": str, # short description
"help": str, # help text
"version": str, # version
"author": str, # author
"repo": str, # repository URL [optional]
"homepage": str, # homepage [optional]
}
"""
def info(self):
return {
"name": "helloworld",
"desc": "这是 AstrBot 的默认插件,支持关键词回复。",
"help": "输入 /keyword 查看关键词回复帮助。",
"version": "v1.3",
"author": "Soulter"
}


@@ -0,0 +1,32 @@
flag_not_support = False
try:
from util.plugin_dev.api.v1.bot import Context, AstrMessageEvent, CommandResult
from util.plugin_dev.api.v1.config import *
except ImportError:
flag_not_support = True
print("导入接口失败。请升级到 AstrBot 最新版本。")
'''
Name the plugin class XXXPlugin or Main.
Tip: fork this template repository and clone it into the addons/plugins/ directory under the bot folder, then open it with PyCharm/VS Code etc. for a better development experience (autocompletion and so on).
'''
class HelloWorldPlugin:
"""
AstrBot passes a context object to the plugin.
- context.register_commands: register commands
- context.register_task: register background tasks
- context.message_handler: message handler (for platform plugins)
"""
def __init__(self, context: Context) -> None:
self.context = context
self.context.register_commands("helloworld", "helloworld", "内置测试指令。", 1, self.helloworld)
"""
Command handler.
- Takes two arguments: message: AstrMessageEvent, context: Context
- Returns a CommandResult object
"""
def helloworld(self, message: AstrMessageEvent, context: Context):
return CommandResult().message("Hello, World!")
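The registration flow in the plugin above can be exercised standalone. The `Context` and `CommandResult` below are simplified hypothetical stand-ins for the real `util.plugin_dev.api.v1.bot` API, written only to illustrate how a command gets registered and dispatched:

```python
# Hypothetical stand-ins for the AstrBot plugin API; not the real implementations.
class CommandResult:
    def __init__(self):
        self._chain = []

    def message(self, text: str) -> "CommandResult":
        # Append a plain-text segment and return self for chaining.
        self._chain.append(text)
        return self


class Context:
    def __init__(self):
        self.commands = {}

    def register_commands(self, plugin_name, trigger, desc, priority, handler):
        # Mirrors the (plugin_name, trigger, description, priority, handler)
        # call shape used in the plugin above.
        self.commands[trigger] = handler


class HelloWorldPlugin:
    def __init__(self, context: Context) -> None:
        self.context = context
        self.context.register_commands(
            "helloworld", "helloworld", "Built-in test command.", 1, self.helloworld)

    def helloworld(self, message, context):
        return CommandResult().message("Hello, World!")


ctx = Context()
HelloWorldPlugin(ctx)
result = ctx.commands["helloworld"](None, ctx)
print(result._chain[0])  # Hello, World!
```

The dispatcher simply looks up the trigger word and calls the stored bound method, which is the essential shape of the real command manager.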


@@ -0,0 +1,6 @@
name: helloworld # Unique identifier of the plugin.
desc: 这是 AstrBot 的默认插件。
help:
version: v1.3 # Plugin version, in the form v1.1.1 or v1.1
author: Soulter # Author
repo: https://github.com/Soulter/helloworld # Plugin repository URL

astrbot/bootstrap.py (new file, 114 lines)

@@ -0,0 +1,114 @@
import asyncio
import traceback
from astrbot.message.handler import MessageHandler
from astrbot.persist.helper import dbConn
from dashboard.server import AstrBotDashBoard
from model.provider.provider import Provider
from model.command.manager import CommandManager
from model.command.internal_handler import InternalCommandHandler
from model.plugin.manager import PluginManager
from model.platform.manager import PlatformManager
from typing import Dict, List, Union
from type.types import Context
from SparkleLogging.utils.core import LogManager
from logging import Logger
from util.cmd_config import CmdConfig
from util.metrics import MetricUploader
from util.config_utils import *
from util.updator.astrbot_updator import AstrBotUpdator
logger: Logger = LogManager.GetLogger(log_name='astrbot')
class AstrBotBootstrap():
def __init__(self) -> None:
self.context = Context()
self.config_helper: CmdConfig = CmdConfig()
# load configs and ensure the backward compatibility
init_configs()
try_migrate_config()
self.configs = inject_to_context(self.context)
logger.info("AstrBot v" + self.context.version)
self.context.config_helper = self.config_helper
# apply proxy settings
http_proxy = self.context.base_config.get("http_proxy")
https_proxy = self.context.base_config.get("https_proxy")
if http_proxy:
os.environ['HTTP_PROXY'] = http_proxy
if https_proxy:
os.environ['HTTPS_PROXY'] = https_proxy
os.environ['NO_PROXY'] = 'https://api.sgroup.qq.com'
if http_proxy and https_proxy:
logger.info(f"使用代理: {http_proxy}, {https_proxy}")
else:
logger.info("未使用代理。")
async def run(self):
self.command_manager = CommandManager()
self.plugin_manager = PluginManager(self.context)
self.updator = AstrBotUpdator()
self.cmd_handler = InternalCommandHandler(self.command_manager, self.plugin_manager)
self.db_conn_helper = dbConn()
# load llm provider
self.llm_instance: Provider = None
self.load_llm()
self.message_handler = MessageHandler(self.context, self.command_manager, self.db_conn_helper, self.llm_instance)
self.platfrom_manager = PlatformManager(self.context, self.message_handler)
self.dashboard = AstrBotDashBoard(self.context, plugin_manager=self.plugin_manager, astrbot_updator=self.updator)
self.metrics_uploader = MetricUploader(self.context)
self.context.metrics_uploader = self.metrics_uploader
self.context.updator = self.updator
self.context.plugin_updator = self.plugin_manager.updator
self.context.message_handler = self.message_handler
# load plugins, plugins' commands.
self.load_plugins()
self.command_manager.register_from_pcb(self.context.plugin_command_bridge)
# load platforms
platform_tasks = self.load_platform()
# load metrics uploader
metrics_upload_task = asyncio.create_task(self.metrics_uploader.upload_metrics(), name="metrics-uploader")
# load dashboard
self.dashboard.run_http_server()
dashboard_task = asyncio.create_task(self.dashboard.ws_server(), name="dashboard")
tasks = [metrics_upload_task, dashboard_task, *platform_tasks, *self.context.ext_tasks]
tasks = [self.handle_task(task) for task in tasks]
await asyncio.gather(*tasks)
async def handle_task(self, task: Union[asyncio.Task, asyncio.Future]):
while True:
try:
result = await task
return result
except Exception as e:
logger.error(traceback.format_exc())
logger.error(f"{task.get_name()} 任务发生错误,将在 5 秒后重试。")
await asyncio.sleep(5)
def load_llm(self):
if 'openai' in self.configs and \
len(self.configs['openai']['key']) and \
self.configs['openai']['key'][0] is not None:
from model.provider.openai_official import ProviderOpenAIOfficial
from model.command.openai_official_handler import OpenAIOfficialCommandHandler
self.openai_command_handler = OpenAIOfficialCommandHandler(self.command_manager)
self.llm_instance = ProviderOpenAIOfficial(self.context)
self.openai_command_handler.set_provider(self.llm_instance)
self.context.register_provider("internal_openai", self.llm_instance)
logger.info("已启用 OpenAI API 支持。")
def load_plugins(self):
self.plugin_manager.plugin_reload()
def load_platform(self):
platforms = self.platfrom_manager.load_platforms()
if not platforms:
logger.warn("未启用任何消息平台。")
return platforms
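The supervision loop in `handle_task` above can be sketched in isolation. One caveat worth noting: awaiting an already-failed `asyncio.Task` re-raises the same exception, so a sketch that genuinely restarts the work must take a coroutine *factory* and call it again on each retry. Names and parameters here are illustrative, not AstrBot API:

```python
import asyncio

async def supervise(factory, name: str, retries: int = 3, delay: float = 0.01):
    # Run the coroutine produced by `factory`; on failure, wait and restart.
    # `factory` is invoked again on each attempt because a finished
    # coroutine/Task cannot be awaited twice.
    for attempt in range(retries):
        try:
            return await factory()
        except Exception as e:
            print(f"{name} failed ({e!r}), retrying in {delay}s")
            await asyncio.sleep(delay)
    raise RuntimeError(f"{name} kept failing after {retries} attempts")

calls = {"n": 0}

async def flaky():
    # Fails twice, then succeeds, to exercise the retry path.
    calls["n"] += 1
    if calls["n"] < 3:
        raise ValueError("boom")
    return "ok"

result = asyncio.run(supervise(flaky, "flaky-task"))
print(result)  # ok
```

In the bootstrap code each supervised task sleeps 5 seconds between attempts; the short delay here just keeps the sketch fast.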


@@ -1,14 +1,17 @@
from aip import AipContentCensor
class BaiduJudge:
def __init__(self, baidu_configs) -> None:
if 'app_id' in baidu_configs and 'api_key' in baidu_configs and 'secret_key' in baidu_configs:
self.app_id = str(baidu_configs['app_id'])
self.api_key = baidu_configs['api_key']
self.secret_key = baidu_configs['secret_key']
self.client = AipContentCensor(self.app_id, self.api_key, self.secret_key)
self.client = AipContentCensor(
self.app_id, self.api_key, self.secret_key)
else:
raise ValueError("Baidu configs error! 请填写百度内容审核服务相关配置!")
def judge(self, text):
res = self.client.textCensorUserDefined(text)
if 'conclusionType' not in res:
@@ -23,4 +26,4 @@ class BaiduJudge:
for i in res['data']:
info += f"{i['msg']}\n"
info += "\n判断结果:"+res['conclusion']
return False, info
return False, info

astrbot/message/handler.py (new file, 204 lines)

@@ -0,0 +1,204 @@
import time
import re
import asyncio
import traceback
import astrbot.message.unfit_words as uw
from typing import Dict
from astrbot.persist.helper import dbConn
from model.provider.provider import Provider
from model.command.manager import CommandManager
from type.message_event import AstrMessageEvent, MessageResult
from type.types import Context
from type.command import CommandResult
from SparkleLogging.utils.core import LogManager
from logging import Logger
from nakuru.entities.components import Image
import util.agent.web_searcher as web_searcher
logger: Logger = LogManager.GetLogger(log_name='astrbot')
class RateLimitHelper():
def __init__(self, context: Context) -> None:
self.user_rate_limit: Dict[int, int] = {}
self.rate_limit_time: int = 60
self.rate_limit_count: int = 10
self.user_frequency = {}
if 'limit' in context.base_config:
if 'count' in context.base_config['limit']:
self.rate_limit_count = context.base_config['limit']['count']
if 'time' in context.base_config['limit']:
self.rate_limit_time = context.base_config['limit']['time']
def check_frequency(self, session_id: str) -> bool:
'''
Check the sender's message frequency (rate limiting).
'''
ts = int(time.time())
if session_id in self.user_frequency:
if ts-self.user_frequency[session_id]['time'] > self.rate_limit_time:
self.user_frequency[session_id]['time'] = ts
self.user_frequency[session_id]['count'] = 1
return True
else:
if self.user_frequency[session_id]['count'] >= self.rate_limit_count:
return False
else:
self.user_frequency[session_id]['count'] += 1
return True
else:
t = {'time': ts, 'count': 1}
self.user_frequency[session_id] = t
return True
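The fixed-window check in `RateLimitHelper.check_frequency` above can be written as a standalone class. The class and parameter names here are illustrative, not AstrBot API; the logic mirrors the code: a session's counter resets once the window expires, and messages beyond the limit within a window are rejected:

```python
import time

class FixedWindowLimiter:
    # Allow at most `count` messages per `window` seconds per session.
    def __init__(self, window: int = 60, count: int = 10):
        self.window = window
        self.count = count
        self.sessions = {}

    def allow(self, session_id: str, now: float = None) -> bool:
        ts = now if now is not None else time.time()
        entry = self.sessions.get(session_id)
        if entry is None or ts - entry["time"] > self.window:
            # First message, or the previous window expired: start a new window.
            self.sessions[session_id] = {"time": ts, "count": 1}
            return True
        if entry["count"] >= self.count:
            return False
        entry["count"] += 1
        return True

limiter = FixedWindowLimiter(window=60, count=3)
allowed = [limiter.allow("u1", now=100 + i) for i in range(5)]
print(allowed)  # [True, True, True, False, False]
later = limiter.allow("u1", now=200)  # window expired, so allowed again
print(later)  # True
```

Passing `now` explicitly makes the window behavior easy to test without real waiting.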
class ContentSafetyHelper():
def __init__(self, context: Context) -> None:
self.baidu_judge = None
if 'baidu_aip' in context.base_config and \
'enable' in context.base_config['baidu_aip'] and \
context.base_config['baidu_aip']['enable']:
try:
from astrbot.message.baidu_aip_judge import BaiduJudge
self.baidu_judge = BaiduJudge(context.base_config['baidu_aip'])
logger.info("已启用百度 AI 内容审核。")
except BaseException as e:
logger.error("百度 AI 内容审核初始化失败。")
logger.error(e)
async def check_content(self, content: str) -> bool:
'''
Check whether the text content passes moderation.
'''
for i in uw.unfit_words_q:
matches = re.match(i, content.strip(), re.I | re.M)
if matches:
return False
if self.baidu_judge != None:
check, msg = await asyncio.to_thread(self.baidu_judge.judge, content)
if not check:
logger.info(f"百度 AI 内容审核发现以下违规:{msg}")
return False
return True
def filter_content(self, content: str) -> str:
'''
Mask disallowed words in the text.
'''
for i in uw.unfit_words_q:
content = re.sub(i, "*", content, flags=re.I)
return content
def baidu_check(self, content: str) -> bool:
'''
Check the text content with the Baidu AI content moderation service.
'''
if self.baidu_judge is not None:
check, msg = self.baidu_judge.judge(content)
if not check:
logger.info(f"百度 AI 内容审核发现以下违规:{msg}")
return False
return True
class MessageHandler():
def __init__(self, context: Context,
command_manager: CommandManager,
persist_manager: dbConn,
provider: Provider) -> None:
self.context = context
self.command_manager = command_manager
self.persist_manager = persist_manager
self.rate_limit_helper = RateLimitHelper(context)
self.content_safety_helper = ContentSafetyHelper(context)
self.llm_wake_prefix = self.context.base_config['llm_wake_prefix']
self.nicks = self.context.nick
self.provider = provider
self.reply_prefix = self.context.reply_prefix
def set_provider(self, provider: Provider):
self.provider = provider
async def handle(self, message: AstrMessageEvent, llm_provider: Provider = None) -> MessageResult:
'''
Handle the message event, including commands, plugins, etc.
`llm_provider`: the provider to use for LLM. If None, use the default provider
'''
msg_plain = message.message_str.strip()
provider = llm_provider if llm_provider else self.provider
inner_provider = llm_provider is None
self.persist_manager.record_message(message.platform.platform_name, message.session_id)
# TODO: this should be configurable
if not message.message_str:
return MessageResult("Hi~")
# check the rate limit
if not self.rate_limit_helper.check_frequency(message.message_obj.sender.user_id):
return MessageResult(f'你的发言超过频率限制(╯▔皿▔)╯。\n管理员设置 {self.rate_limit_helper.rate_limit_time} 秒内只能提问{self.rate_limit_helper.rate_limit_count} 次。')
# remove the nick prefix
for nick in self.nicks:
if msg_plain.startswith(nick):
msg_plain = msg_plain.removeprefix(nick)
break
# scan candidate commands
cmd_res = await self.command_manager.scan_command(message, self.context)
if cmd_res:
assert isinstance(cmd_res, CommandResult)
return MessageResult(
cmd_res.message_chain,
is_command_call=True,
use_t2i=cmd_res.is_use_t2i
)
# check if the message is a llm-wake-up command
if not msg_plain.startswith(self.llm_wake_prefix):
return
if not provider:
return
# check the content safety
if not await self.content_safety_helper.check_content(msg_plain):
return MessageResult("信息包含违规内容,由于机器人管理者开启内容安全审核,你的此条消息已被停止继续处理。")
image_url = None
for comp in message.message_obj.message:
if isinstance(comp, Image):
image_url = comp.url if comp.url else comp.file
break
web_search = self.context.web_search
if not web_search and msg_plain.startswith("ws "):
# leverage the web search feature for this message only
web_search = True
msg_plain = msg_plain.removeprefix("ws").strip()
try:
if web_search:
llm_result = await web_searcher.web_search(msg_plain, provider, message.session_id, inner_provider)
else:
llm_result = await provider.text_chat(
prompt=msg_plain,
session_id=message.session_id,
image_url=image_url
)
except BaseException as e:
logger.error(traceback.format_exc())
logger.error("LLM 调用失败。")
return MessageResult("AstrBot 请求 LLM 资源失败:" + str(e))
# concatenate the reply prefix
if self.reply_prefix:
llm_result = self.reply_prefix + llm_result
# mask the unsafe content
llm_result = self.content_safety_helper.filter_content(llm_result)
check = self.content_safety_helper.baidu_check(llm_result)
if not check:
return MessageResult("LLM 输出的信息包含违规内容,由于机器人管理者开启了内容安全审核,该条消息已拦截。")
return MessageResult(llm_result)
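`handle` wakes the LLM only after stripping a configured nickname and checking `llm_wake_prefix`. A condensed sketch of that gating (stripping the wake prefix from the prompt is an assumption here; `handle` above forwards `msg_plain` as-is, and all names are illustrative):

```python
from typing import Optional, Tuple

def extract_prompt(msg: str, nicks: Tuple[str, ...], wake_prefix: str) -> Optional[str]:
    """Return the LLM prompt if the message should wake the LLM, else None."""
    msg = msg.strip()
    for nick in nicks:
        if nick and msg.startswith(nick):
            msg = msg.removeprefix(nick).strip()  # removeprefix: Python 3.9+
            break
    if not msg.startswith(wake_prefix):
        return None
    return msg.removeprefix(wake_prefix).strip()
```

With an empty `wake_prefix`, `startswith("")` is always true, so every non-command message reaches the LLM, matching the behavior of `handle` above.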

View File

@@ -0,0 +1,114 @@
# Collected from the web.
# Blocklist for incoming prompts
# Regular expressions are supported
unfit_words_q = ["习近平",
"胡锦涛",
"江泽民",
"温家宝",
"李克强",
"李长春",
"毛泽东",
"邓小平",
"周恩来",
"社会主义",
"共产党",
"共产主义",
"大陆官方",
"北京政权",
"中华帝国",
"中国政府",
"共狗",
"六四事件",
"天安门",
"六四",
"政治局常委",
"学潮",
"八九",
"二十大",
"民进党",
"台独",
"台湾独立",
"台湾国",
"国民党",
"台湾民国",
"中华民国",
"pornhub",
"Pornhub",
"作爱",
"做爱",
"性交",
"自慰",
"阴茎",
"淫妇",
"肛交",
"交配",
"性关系",
"性活动",
"色情",
"色图",
"裸体",
"小穴",
"淫荡",
"性爱",
"港独",
"法轮功",
"六四"]
# Filter list for replies
unfit_words = ["习近平",
"胡锦涛",
"江泽民",
"温家宝",
"李克强",
"李长春",
"毛泽东",
"邓小平",
"周恩来",
"社会主义",
"共产党",
"共产主义",
"大陆官方",
"北京政权",
"中华帝国",
"中国政府",
"共狗",
"六四事件",
"天安门",
"六四",
"政治局常委",
"学潮",
"八九",
"二十大",
"民进党",
"台独",
"台湾独立",
"台湾国",
"国民党",
"台湾民国",
"中华民国",
"pornhub",
"Pornhub",
"作爱",
"做爱",
"性交",
"自慰",
"阴茎",
"淫妇",
"肛交",
"交配",
"性关系",
"性活动",
"色情",
"色图",
"涩图",
"裸体",
"小穴",
"淫荡",
"性爱",
"中华人民共和国",
"党中央",
"中央军委主席",
"台湾",
"港独",
"法轮功",
"PRC"]

View File

@@ -1,51 +1,28 @@
import sqlite3
import yaml
import os
import shutil
import time
from typing import Tuple
class dbConn():
def __init__(self):
# Read parameters, with Chinese text supported
conn = sqlite3.connect("data.db")
conn.text_factory=str
self.conn = conn
c = conn.cursor()
c.execute(
'''
CREATE TABLE IF NOT EXISTS tb_session(
qq_id VARCHAR(32) PRIMARY KEY,
history TEXT
);
'''
)
c.execute(
'''
CREATE TABLE IF NOT EXISTS tb_stat_session(
platform VARCHAR(32),
session_id VARCHAR(32),
cnt INTEGER
);
'''
)
c.execute(
'''
CREATE TABLE IF NOT EXISTS tb_stat_message(
ts INTEGER,
cnt INTEGER
);
'''
)
c.execute(
'''
CREATE TABLE IF NOT EXISTS tb_stat_platform(
ts INTEGER,
platform VARCHAR(32),
cnt INTEGER
);
'''
)
db_path = "data/data.db"
if os.path.exists("data.db"):
shutil.copy("data.db", db_path)
with open(os.path.dirname(__file__) + "/initialization.sql", "r") as f:
sql = f.read()
self.conn = sqlite3.connect(db_path)
self.conn.text_factory = str
c = self.conn.cursor()
c.executescript(sql)
self.conn.commit()
conn.commit()
def record_message(self, platform, session_id):
curr_ts = int(time.time())
self.increment_stat_session(platform, session_id, 1)
self.increment_stat_message(curr_ts, 1)
self.increment_stat_platform(curr_ts, platform, 1)
def insert_session(self, qq_id, history):
conn = self.conn
@@ -76,7 +53,7 @@ class dbConn():
''', (qq_id, )
)
return c.fetchone()
def get_all_session(self):
conn = self.conn
c = conn.cursor()
@@ -86,7 +63,7 @@ class dbConn():
'''
)
return c.fetchall()
def check_session(self, qq_id):
conn = self.conn
c = conn.cursor()
@@ -107,7 +84,6 @@ class dbConn():
)
conn.commit()
def increment_stat_session(self, platform, session_id, cnt):
# if not exist, insert
conn = self.conn
@@ -137,7 +113,7 @@ class dbConn():
''', (platform, session_id)
)
return c.fetchone() is not None
def get_all_stat_session(self):
conn = self.conn
c = conn.cursor()
@@ -147,7 +123,7 @@ class dbConn():
'''
)
return c.fetchall()
def get_session_cnt_total(self):
conn = self.conn
c = conn.cursor()
@@ -157,7 +133,7 @@ class dbConn():
'''
)
return c.fetchone()[0]
def increment_stat_message(self, ts, cnt):
# Bucketed by the hour; ts is in seconds.
# Find the most recent hour bucket; if none exists, insert one
@@ -197,7 +173,7 @@ class dbConn():
return True, ts
else:
return False, ts
def get_last_24h_stat_message(self):
# Get message statistics for the last 24 hours
conn = self.conn
@@ -208,7 +184,7 @@ class dbConn():
''', (time.time() - 86400, )
)
return c.fetchall()
def get_message_cnt_total(self) -> int:
conn = self.conn
c = conn.cursor()
@@ -258,7 +234,7 @@ class dbConn():
return True, ts
else:
return False, ts
def get_last_24h_stat_platform(self):
# Get platform statistics for the last 24 hours
conn = self.conn
@@ -269,7 +245,7 @@ class dbConn():
''', (time.time() - 86400, )
)
return c.fetchall()
def get_platform_cnt_total(self) -> int:
conn = self.conn
c = conn.cursor()
@@ -291,4 +267,3 @@ class dbConn():
def close(self):
self.conn.close()

View File

@@ -0,0 +1,18 @@
CREATE TABLE IF NOT EXISTS tb_session(
qq_id VARCHAR(32) PRIMARY KEY,
history TEXT
);
CREATE TABLE IF NOT EXISTS tb_stat_session(
platform VARCHAR(32),
session_id VARCHAR(32),
cnt INTEGER
);
CREATE TABLE IF NOT EXISTS tb_stat_message(
ts INTEGER,
cnt INTEGER
);
CREATE TABLE IF NOT EXISTS tb_stat_platform(
ts INTEGER,
platform VARCHAR(32),
cnt INTEGER
);
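The refactored `dbConn.__init__` loads this schema from `initialization.sql` with `cursor.executescript`. A minimal in-memory sketch of that flow (table names from the schema above; data values are illustrative):

```python
import sqlite3

SCHEMA = """
CREATE TABLE IF NOT EXISTS tb_stat_session(
    platform VARCHAR(32),
    session_id VARCHAR(32),
    cnt INTEGER
);
CREATE TABLE IF NOT EXISTS tb_stat_message(
    ts INTEGER,
    cnt INTEGER
);
"""

conn = sqlite3.connect(":memory:")
conn.text_factory = str          # keep non-ASCII text as Python str
conn.executescript(SCHEMA)       # runs all statements in one call
conn.execute("INSERT INTO tb_stat_session VALUES (?, ?, ?)", ("qq", "sess-1", 1))
total = conn.execute("SELECT SUM(cnt) FROM tb_stat_session").fetchone()[0]
```

`IF NOT EXISTS` makes the script idempotent, so re-running it against an existing `data/data.db` is safe.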

View File

@@ -1,137 +0,0 @@
# If you don't know how to deploy, see https://soulter.top/posts/qpdg.html
# A key is not strictly required any more; if you have an OpenAI or Bing account but no key, consider the reverse-engineered libraries below
############### Platform settings #################
# QQ Channel bot
# appid and token from the QQ Open Platform
# q.qq.com
# Set enable to true to enable, false to disable
qqbot:
enable: true
appid:
token:
# QQ bot
# Set enable to true to enable, false to disable
# Requires GO-CQHTTP.
# Docs: https://docs.go-cqhttp.org/
# Paste the following into the servers section of the go-cqhttp config file, otherwise it will not work
# Start go-cqhttp before starting this program
#
# servers:
# - http:
# host: 127.0.0.1
# version: 0
# port: 5700
# timeout: 5
# - ws:
# address: 127.0.0.1:6700
# middlewares:
# <<: *default
gocqbot:
enable: false
# Whether each user gets an independent session
uniqueSessionMode: false
# QChannelBot version; do not modify this field, or bugs may occur
version: 3.0
# [Beta] Interval for dumping history (minutes)
dump_history_interval: 10
# A user may send at most count messages within time seconds
limit:
time: 60
count: 5
# Announcement
notice: "此机器人由Github项目QQChannelChatGPT驱动。"
# Whether direct messages are enabled
# true: channel members can DM the bot.
# false: channel members cannot DM the bot.
direct_message_mode: true
# System proxy
# http_proxy: http://localhost:7890
# https_proxy: http://localhost:7890
# Custom reply prefix, e.g. [Rev]; be sure to quote it to avoid unnecessary bugs.
reply_prefix:
openai_official: "[GPT]"
rev_chatgpt: "[Rev]"
rev_edgegpt: "[RevBing]"
# Baidu content moderation service
# New users get 50,000 free calls. https://cloud.baidu.com/doc/ANTIPORN/index.html
baidu_aip:
enable: false
app_id:
api_key:
secret_key:
############### Language model settings #################
# Official OpenAI API
# Note: multiple keys with automatic rotation are supported:
# key:
# - sk-xxxxxx
# - sk-xxxxxx
# Use the format above in the uncommented section below
# For api_base, cloud functions (e.g. Tencent, Alibaba) can be used to work around the endpoint being blocked in mainland China.
# See:
# https://github.com/Ice-Hazymoon/openai-scf-proxy
# https://github.com/Soulter/QQChannelChatGPT/issues/42
# Set to none to use the official default API endpoint
openai:
key:
-
api_base: none
# GPT configuration; the default language model is gpt-3.5-turbo
chatGPTConfigs:
model: gpt-3.5-turbo
max_tokens: 3000
temperature: 0.9
top_p: 1
frequency_penalty: 0
presence_penalty: 0
total_tokens_limit: 5000
# Reverse-engineered ERNIE Bot (currently unavailable; do not use)
rev_ernie:
enable: false
# Reverse-engineered New Bing
# Create cookies.json in the project root directory and paste your cookies into it.
# See https://soulter.top/posts/qpdg.html
rev_edgegpt:
enable: false
# Reverse-engineered ChatGPT library
# https://github.com/acheong08/ChatGPT
# Pros: free (no free-quota limit)
# Cons: relatively slow. OpenAI rate limit: 50 requests per hour for free accounts. You can bypass it by rotating multiple accounts
# When enable is set to true, the normal official API calls above stop being used and this reverse-engineered project is used instead
#
# Multiple accounts help ensure every request gets a timely reply.
# Format of account:
# account:
# - email: first account
# password: password of the first account
# - email: second account
# password: password of the second account
# - ....
# Logging in with an access_token is supported
# Example:
# - session_token: xxxxx
# - access_token: xxxx
# Please follow the format above exactly.
# The email-password login of the reverse-engineered ChatGPT library does not work; access_token login is recommended
# For how to obtain an access_token, see https://soulter.top/posts/qpdg.html
rev_ChatGPT:
enable: false
account:
- access_token:

View File

@@ -1,478 +0,0 @@
import re
import threading
import asyncio
import time
import aiohttp
import util.unfit_words as uw
import os
import sys
import io
import traceback
import util.function_calling.gplugin as gplugin
import util.plugin_util as putil
from PIL import Image as PILImage
from typing import Union
from nakuru import (
GroupMessage,
FriendMessage,
GuildMessage,
)
from nakuru.entities.components import Plain, At, Image
from addons.baidu_aip_judge import BaiduJudge
from model.platform._nakuru_translation_layer import NakuruGuildMessage
from model.provider.provider import Provider
from model.command.command import Command
from util import general_utils as gu
from util.general_utils import Logger, upload, run_monitor
from util.cmd_config import CmdConfig as cc
from util.cmd_config import init_astrbot_config_items
from .types import *
from addons.dashboard.helper import DashBoardHelper
from addons.dashboard.server import DashBoardData
from cores.database.conn import dbConn
from model.platform._message_result import MessageResult
# User message frequency
user_frequency = {}
# Default time window
frequency_time = 60
# Default count limit
frequency_count = 10
# Version
version = '3.1.9'
# Language models
REV_CHATGPT = 'rev_chatgpt'
OPENAI_OFFICIAL = 'openai_official'
NONE_LLM = 'none_llm'
chosen_provider = None
# Language model instances
llm_instance: dict[str, Provider] = {}
llm_command_instance: dict[str, Command] = {}
llm_wake_prefix = ""
# Baidu content moderation instance
baidu_judge = None
# CLI
PLATFORM_CLI = 'cli'
init_astrbot_config_items()
# Global object
_global_object: GlobalObject = None
logger: Logger = Logger()
# Language model chooser
def privider_chooser(cfg):
l = []
if 'rev_ChatGPT' in cfg and cfg['rev_ChatGPT']['enable']:
l.append('rev_chatgpt')
if 'openai' in cfg and len(cfg['openai']['key']) > 0 and cfg['openai']['key'][0] is not None:
l.append('openai_official')
return l
'''
Initialize the bot
'''
def init(cfg):
global llm_instance, llm_command_instance
global baidu_judge, chosen_provider
global frequency_count, frequency_time
global _global_object
global logger
# Migrate the legacy configuration
gu.try_migrate_config(cfg)
# Use the new configuration
cfg = cc.get_all()
_event_loop = asyncio.new_event_loop()
asyncio.set_event_loop(_event_loop)
# Initialize the global_object
_global_object = GlobalObject()
_global_object.version = version
_global_object.base_config = cfg
_global_object.logger = logger
logger.log("AstrBot v"+version, gu.LEVEL_INFO)
if 'reply_prefix' in cfg:
# Compatibility with the legacy configuration
if isinstance(cfg['reply_prefix'], dict):
_global_object.reply_prefix = ""
cfg['reply_prefix'] = ""
cc.put("reply_prefix", "")
else:
_global_object.reply_prefix = cfg['reply_prefix']
# Language model providers
logger.log("正在载入语言模型...", gu.LEVEL_INFO)
prov = privider_chooser(cfg)
if REV_CHATGPT in prov:
logger.log("初始化:逆向 ChatGPT", gu.LEVEL_INFO)
if cfg['rev_ChatGPT']['enable']:
if 'account' in cfg['rev_ChatGPT']:
from model.provider.rev_chatgpt import ProviderRevChatGPT
from model.command.rev_chatgpt import CommandRevChatGPT
llm_instance[REV_CHATGPT] = ProviderRevChatGPT(cfg['rev_ChatGPT'], base_url=cc.get("CHATGPT_BASE_URL", None))
llm_command_instance[REV_CHATGPT] = CommandRevChatGPT(llm_instance[REV_CHATGPT], _global_object)
chosen_provider = REV_CHATGPT
_global_object.llms.append(RegisteredLLM(llm_name=REV_CHATGPT, llm_instance=llm_instance[REV_CHATGPT], origin="internal"))
else:
input("请退出本程序, 然后在配置文件中填写rev_ChatGPT相关配置")
if OPENAI_OFFICIAL in prov:
logger.log("初始化OpenAI官方", gu.LEVEL_INFO)
if cfg['openai']['key'] is not None and cfg['openai']['key'] != [None]:
from model.provider.openai_official import ProviderOpenAIOfficial
from model.command.openai_official import CommandOpenAIOfficial
llm_instance[OPENAI_OFFICIAL] = ProviderOpenAIOfficial(cfg['openai'])
llm_command_instance[OPENAI_OFFICIAL] = CommandOpenAIOfficial(llm_instance[OPENAI_OFFICIAL], _global_object)
_global_object.llms.append(RegisteredLLM(llm_name=OPENAI_OFFICIAL, llm_instance=llm_instance[OPENAI_OFFICIAL], origin="internal"))
chosen_provider = OPENAI_OFFICIAL
# Check the preferred provider setting
p = cc.get("chosen_provider", None)
if p is not None and p in llm_instance:
chosen_provider = p
# Baidu content moderation
if 'baidu_aip' in cfg and 'enable' in cfg['baidu_aip'] and cfg['baidu_aip']['enable']:
try:
baidu_judge = BaiduJudge(cfg['baidu_aip'])
logger.log("百度内容审核初始化成功", gu.LEVEL_INFO)
except BaseException as e:
logger.log("百度内容审核初始化失败", gu.LEVEL_ERROR)
threading.Thread(target=upload, args=(_global_object, ), daemon=True).start()
# Read the rate limit configuration
if 'limit' in cfg:
if 'count' in cfg['limit']:
frequency_count = cfg['limit']['count']
if 'time' in cfg['limit']:
frequency_time = cfg['limit']['time']
try:
if 'uniqueSessionMode' in cfg and cfg['uniqueSessionMode']:
_global_object.unique_session = True
else:
_global_object.unique_session = False
except BaseException as e:
logger.log("独立会话配置错误: "+str(e), gu.LEVEL_ERROR)
nick_qq = cc.get("nick_qq", None)
if nick_qq == None:
nick_qq = ("ai","!","")
if isinstance(nick_qq, str):
nick_qq = (nick_qq,)
if isinstance(nick_qq, list):
nick_qq = tuple(nick_qq)
_global_object.nick = nick_qq
# LLM wake prefix
global llm_wake_prefix
llm_wake_prefix = cc.get("llm_wake_prefix", "")
logger.log("正在载入插件...", gu.LEVEL_INFO)
# Load plugins
_command = Command(None, _global_object)
ok, err = putil.plugin_reload(_global_object.cached_plugins)
if ok:
logger.log(f"成功载入 {len(_global_object.cached_plugins)} 个插件", gu.LEVEL_INFO)
else:
logger.log(err, gu.LEVEL_ERROR)
if chosen_provider is None:
llm_command_instance[NONE_LLM] = _command
chosen_provider = NONE_LLM
logger.log("正在载入机器人消息平台", gu.LEVEL_INFO)
# logger.log("提示:需要添加管理员 ID 才能使用 update/plugin 等指令),可在可视化面板添加。(如已添加可忽略)", gu.LEVEL_WARNING)
platform_str = ""
# GOCQ
if 'gocqbot' in cfg and cfg['gocqbot']['enable']:
logger.log("启用 QQ_GOCQ 机器人消息平台", gu.LEVEL_INFO)
threading.Thread(target=run_gocq_bot, args=(cfg, _global_object), daemon=True).start()
platform_str += "QQ_GOCQ,"
# QQ Channel
if 'qqbot' in cfg and cfg['qqbot']['enable'] and cfg['qqbot']['appid'] != None:
logger.log("启用 QQ_OFFICIAL 机器人消息平台", gu.LEVEL_INFO)
threading.Thread(target=run_qqchan_bot, args=(cfg, _global_object), daemon=True).start()
platform_str += "QQ_OFFICIAL,"
default_personality_str = cc.get("default_personality_str", "")
if default_personality_str == "":
_global_object.default_personality = None
else:
_global_object.default_personality = {
"name": "default",
"prompt": default_personality_str,
}
# Initialize the dashboard
_global_object.dashboard_data = DashBoardData(
stats={},
configs={},
logs={},
plugins=_global_object.cached_plugins,
)
dashboard_helper = DashBoardHelper(_global_object, config=cc.get_all())
dashboard_thread = threading.Thread(target=dashboard_helper.run, daemon=True)
dashboard_thread.start()
# Run the monitor
threading.Thread(target=run_monitor, args=(_global_object,), daemon=False).start()
logger.log("如果有任何问题, 请在 https://github.com/Soulter/AstrBot 上提交 issue 或加群 322154837。", gu.LEVEL_INFO)
logger.log("请给 https://github.com/Soulter/AstrBot 点个 star。", gu.LEVEL_INFO)
if platform_str == '':
platform_str = "(未启动任何平台,请前往面板添加)"
logger.log("🎉 项目启动完成")
dashboard_thread.join()
'''
Run the QQ_OFFICIAL bot
'''
def run_qqchan_bot(cfg: dict, global_object: GlobalObject):
try:
from model.platform.qq_official import QQOfficial
qqchannel_bot = QQOfficial(cfg=cfg, message_handler=oper_msg, global_object=global_object)
global_object.platforms.append(RegisteredPlatform(platform_name="qqchan", platform_instance=qqchannel_bot, origin="internal"))
qqchannel_bot.run()
except BaseException as e:
logger.log("启动QQ频道机器人时出现错误, 原因如下: " + str(e), gu.LEVEL_CRITICAL, tag="QQ频道")
logger.log(r"如果您是初次启动请前往可视化面板填写配置。详情请看https://astrbot.soulter.top/center/。" + str(e), gu.LEVEL_CRITICAL)
'''
Run the QQ_GOCQ bot
'''
def run_gocq_bot(cfg: dict, _global_object: GlobalObject):
from model.platform.qq_gocq import QQGOCQ
noticed = False
host = cc.get("gocq_host", "127.0.0.1")
port = cc.get("gocq_websocket_port", 6700)
http_port = cc.get("gocq_http_port", 5700)
logger.log(f"正在检查连接...host: {host}, ws port: {port}, http port: {http_port}", tag="QQ")
while True:
if not gu.port_checker(port=port, host=host) or not gu.port_checker(port=http_port, host=host):
if not noticed:
noticed = True
logger.log(f"连接到{host}:{port}(或{http_port})失败。程序会每隔 5s 自动重试。", gu.LEVEL_CRITICAL, tag="QQ")
time.sleep(5)
else:
logger.log("检查完毕,未发现问题。", tag="QQ")
break
try:
qq_gocq = QQGOCQ(cfg=cfg, message_handler=oper_msg, global_object=_global_object)
_global_object.platforms.append(RegisteredPlatform(platform_name="gocq", platform_instance=qq_gocq, origin="internal"))
qq_gocq.run()
except BaseException as e:
input("启动QQ机器人出现错误"+str(e))
'''
Check message frequency
'''
def check_frequency(id) -> bool:
ts = int(time.time())
if id in user_frequency:
if ts-user_frequency[id]['time'] > frequency_time:
user_frequency[id]['time'] = ts
user_frequency[id]['count'] = 1
return True
else:
if user_frequency[id]['count'] >= frequency_count:
return False
else:
user_frequency[id]['count']+=1
return True
else:
t = {'time':ts,'count':1}
user_frequency[id] = t
return True
async def record_message(platform: str, session_id: str):
# TODO: this is very resource-heavy, but sqlite3 does not support multithreading, so it stays this way for now.
curr_ts = int(time.time())
db_inst = dbConn()
db_inst.increment_stat_session(platform, session_id, 1)
db_inst.increment_stat_message(curr_ts, 1)
db_inst.increment_stat_platform(curr_ts, platform, 1)
_global_object.cnt_total += 1
async def oper_msg(message: Union[GroupMessage, FriendMessage, GuildMessage, NakuruGuildMessage],
session_id: str,
role: str = 'member',
platform: str = None,
) -> MessageResult:
"""
Handle a message.
message: the message object
session_id: unique identifier of the message source
role: member | admin
platform: str, the name of the registered platform. If not registered, an exception is raised.
"""
global chosen_provider, _global_object
message_str = ''
session_id = session_id
role = role
hit = False # whether a command was hit
command_result = () # result returned by the command call
# Get the platform instance
reg_platform: RegisteredPlatform = None
for p in _global_object.platforms:
if p.platform_name == platform:
reg_platform = p
break
if not reg_platform:
_global_object.logger.log(f"未找到平台 {platform} 的实例。", gu.LEVEL_ERROR)
raise Exception(f"未找到平台 {platform} 的实例。")
# Statistics, e.g. channel message volume
await record_message(platform, session_id)
for i in message.message:
if isinstance(i, Plain):
message_str += i.text.strip()
if message_str == "":
return MessageResult("Hi~")
# Check message frequency
if not check_frequency(message.user_id):
return MessageResult(f'你的发言超过频率限制(╯▔皿▔)╯。\n管理员设置{frequency_time}秒内只能提问{frequency_count}次。')
# Check whether this is a request to switch the language model
temp_switch = ""
if message_str.startswith('/gpt') or message_str.startswith('/revgpt'):
target = chosen_provider
if message_str.startswith('/gpt'):
target = OPENAI_OFFICIAL
elif message_str.startswith('/revgpt'):
target = REV_CHATGPT
l = message_str.split(' ')
if len(l) > 1 and l[1] != "":
# Temporary chat mode: remember the previous model and switch back after replying
temp_switch = chosen_provider
chosen_provider = target
message_str = l[1]
else:
chosen_provider = target
cc.put("chosen_provider", chosen_provider)
return MessageResult(f"已切换至【{chosen_provider}")
llm_result_str = ""
# check commands and plugins
hit, command_result = await llm_command_instance[chosen_provider].check_command(
message_str,
session_id,
role,
reg_platform,
message,
)
# No command was triggered
if not hit:
# Keyword blocking
for i in uw.unfit_words_q:
matches = re.match(i, message_str.strip(), re.I | re.M)
if matches:
return MessageResult(f"你的提问得到的回复未通过【默认关键词拦截】服务, 不予回复。")
if baidu_judge != None:
check, msg = await asyncio.to_thread(baidu_judge.judge, message_str)
if not check:
return MessageResult(f"你的提问得到的回复未通过【百度AI内容审核】服务, 不予回复。\n\n{msg}")
if chosen_provider == NONE_LLM:
logger.log("一条消息由于 Bot 未启动任何语言模型并且未触发指令而将被忽略。", gu.LEVEL_WARNING)
return
try:
if llm_wake_prefix != "" and not message_str.startswith(llm_wake_prefix):
return
# check image url
image_url = None
for comp in message.message:
if isinstance(comp, Image):
if comp.url is None:
image_url = comp.file
break
else:
image_url = comp.url
break
# web search keyword
web_sch_flag = False
if message_str.startswith("ws ") and message_str != "ws ":
message_str = message_str[3:]
web_sch_flag = True
else:
message_str += " " + cc.get("llm_env_prompt", "")
if chosen_provider == REV_CHATGPT or chosen_provider == OPENAI_OFFICIAL:
if _global_object.web_search or web_sch_flag:
official_fc = chosen_provider == OPENAI_OFFICIAL
llm_result_str = await gplugin.web_search(message_str, llm_instance[chosen_provider], session_id, official_fc)
else:
llm_result_str = await llm_instance[chosen_provider].text_chat(message_str, session_id, image_url, default_personality = _global_object.default_personality)
llm_result_str = _global_object.reply_prefix + llm_result_str
except BaseException as e:
logger.log(f"调用异常:{traceback.format_exc()}", gu.LEVEL_ERROR)
return MessageResult(f"调用语言模型例程时出现异常。原因: {str(e)}")
# Switch back to the original language model
if temp_switch != "":
chosen_provider = temp_switch
if hit:
# A command or plugin was triggered
# command_result is a tuple: (whether the call succeeded, the text result, the command type)
if command_result == None:
return
command = command_result[2]
if command == "update latest r":
def update_restart():
py = sys.executable
os.execl(py, py, *sys.argv)
return MessageResult(command_result[1] + "\n\n即将自动重启。", callback=update_restart)
if not command_result[0]:
return MessageResult(f"指令调用错误: \n{str(command_result[1])}")
# Draw command
if isinstance(command_result[1], list) and len(command_result) == 3 and command == 'draw':
for i in command_result[1]:
# Save locally
async with aiohttp.ClientSession() as session:
async with session.get(i) as resp:
if resp.status == 200:
image = PILImage.open(io.BytesIO(await resp.read()))
return MessageResult([Image.fromFileSystem(gu.save_temp_img(image))])
# Other commands
else:
try:
return MessageResult(command_result[1])
except BaseException as e:
return MessageResult(f"回复消息出错: {str(e)}")
return
# Sensitive content filtering
# Filter inappropriate words
for i in uw.unfit_words:
llm_result_str = re.sub(i, "***", llm_result_str)
# Second-pass review via the Baidu content moderation service
if baidu_judge != None:
check, msg = await asyncio.to_thread(baidu_judge.judge, llm_result_str)
if not check:
return MessageResult(f"你的提问得到的回复【百度内容审核】未通过,不予回复。\n\n{msg}")
# Send the message
try:
return MessageResult(llm_result_str)
except BaseException as e:
logger.log("回复消息错误: \n"+str(e), gu.LEVEL_ERROR)

View File

@@ -1,140 +0,0 @@
from model.platform.qq_official import NakuruGuildMessage
from model.provider.provider import Provider as LLMProvider
from model.platform._platfrom import Platform
from nakuru import (
GroupMessage,
FriendMessage,
GuildMessage,
)
from typing import Union, List, ClassVar
from types import ModuleType
from enum import Enum
from dataclasses import dataclass
class PluginType(Enum):
PLATFORM = 'platfrom' # platform plugins
LLM = 'llm' # large language model plugins
COMMON = 'common' # other plugins
@dataclass
class PluginMetadata:
'''
Metadata of a plugin.
'''
# required
plugin_name: str
plugin_type: PluginType
author: str # plugin author
desc: str # short description
version: str # plugin version
# optional
repo: str = None # plugin repository URL
def __str__(self) -> str:
return f"PluginMetadata({self.plugin_name}, {self.plugin_type}, {self.desc}, {self.version}, {self.repo})"
@dataclass
class RegisteredPlugin:
'''
A plugin registered in AstrBot.
'''
metadata: PluginMetadata
plugin_instance: object
module_path: str
module: ModuleType
root_dir_name: str
def __str__(self) -> str:
return f"RegisteredPlugin({self.metadata}, {self.module_path}, {self.root_dir_name})"
RegisteredPlugins = List[RegisteredPlugin]
@dataclass
class RegisteredPlatform:
'''
A platform registered in AstrBot. A platform should implement the Platform interface.
'''
platform_name: str
platform_instance: Platform
origin: str = None # registration origin
@dataclass
class RegisteredLLM:
'''
A large language model integration registered in AstrBot. It should implement the LLMProvider interface.
'''
llm_name: str
llm_instance: LLMProvider
origin: str = None # registration origin
class GlobalObject:
'''
Holds shared data passed between modules (e.g. core and command).
'''
version: str # bot version
nick: str # user-defined bot aliases
base_config: dict # configuration exported from config.json
cached_plugins: List[RegisteredPlugin] # loaded plugins
platforms: List[RegisteredPlatform]
llms: List[RegisteredLLM]
web_search: bool # whether web search is enabled
reply_prefix: str # reply prefix
unique_session: bool # whether unique-session mode is enabled
cnt_total: int # total message count
default_personality: dict
dashboard_data = None
logger: None
def __init__(self):
self.nick = None # gocq nickname
self.base_config = None # config.yaml
self.cached_plugins = [] # cached plugins
self.web_search = False # whether web search is enabled
self.reply_prefix = None
self.unique_session = False
self.cnt_total = 0
self.platforms = []
self.llms = []
self.default_personality = None
self.dashboard_data = None
self.stat = {}
class AstrMessageEvent():
'''
A message event.
'''
context: GlobalObject # shared data
message_str: str # plain message string
message_obj: Union[GroupMessage, FriendMessage, GuildMessage, NakuruGuildMessage] # message object
platform: RegisteredPlatform # source platform
role: str # basic role, `admin` or `member`
session_id: int # session id
def __init__(self,
message_str: str,
message_obj: Union[GroupMessage, FriendMessage, GuildMessage, NakuruGuildMessage],
platform: RegisteredPlatform,
role: str,
context: GlobalObject,
session_id: str = None):
self.context = context
self.message_str = message_str
self.message_obj = message_obj
self.platform = platform
self.role = role
self.session_id = session_id
class CommandResult():
'''
Used to return multiple values from a Command.
'''
def __init__(self, hit: bool, success: bool, message_chain: list, command_name: str = "unknown_command") -> None:
self.hit = hit
self.success = success
self.message_chain = message_chain
self.command_name = command_name
def _result_tuple(self):
return (self.success, self.message_chain, self.command_name)

11
dashboard/__init__.py Normal file
View File

@@ -0,0 +1,11 @@
from dataclasses import dataclass
class DashBoardData():
stats: dict = {}
configs: dict = {}
@dataclass
class Response():
status: str
message: str
data: dict

View File

@@ -1 +1 @@
import{x as i,o as l,c as _,w as s,a as e,f as a,J as m,V as c,b as t,t as u,ad as p,B as n,ae as o,j as f}from"./index-9075b0bb.js";const b={class:"text-h3"},h={class:"d-flex align-center"},g={class:"d-flex align-center"},V=i({__name:"BaseBreadcrumb",props:{title:String,breadcrumbs:Array,icon:String},setup(d){const r=d;return(x,B)=>(l(),_(c,{class:"page-breadcrumb mb-1 mt-1"},{default:s(()=>[e(a,{cols:"12",md:"12"},{default:s(()=>[e(m,{variant:"outlined",elevation:"0",class:"px-4 py-3 withbg"},{default:s(()=>[e(c,{"no-gutters":"",class:"align-center"},{default:s(()=>[e(a,{md:"5"},{default:s(()=>[t("h3",b,u(r.title),1)]),_:1}),e(a,{md:"7",sm:"12",cols:"12"},{default:s(()=>[e(p,{items:r.breadcrumbs,class:"text-h5 justify-md-end pa-1"},{divider:s(()=>[t("div",h,[e(n(o),{size:"17"})])]),prepend:s(()=>[e(f,{size:"small",icon:"mdi-home",class:"text-secondary mr-2"}),t("div",g,[e(n(o),{size:"17"})])]),_:1},8,["items"])]),_:1})]),_:1})]),_:1})]),_:1})]),_:1}))}});export{V as _};
import{x as i,o as l,c as _,w as s,a as e,f as a,J as m,V as c,b as t,t as u,ae as p,B as n,af as o,j as f}from"./index-5ac7c267.js";const b={class:"text-h3"},h={class:"d-flex align-center"},g={class:"d-flex align-center"},V=i({__name:"BaseBreadcrumb",props:{title:String,breadcrumbs:Array,icon:String},setup(d){const r=d;return(x,B)=>(l(),_(c,{class:"page-breadcrumb mb-1 mt-1"},{default:s(()=>[e(a,{cols:"12",md:"12"},{default:s(()=>[e(m,{variant:"outlined",elevation:"0",class:"px-4 py-3 withbg"},{default:s(()=>[e(c,{"no-gutters":"",class:"align-center"},{default:s(()=>[e(a,{md:"5"},{default:s(()=>[t("h3",b,u(r.title),1)]),_:1}),e(a,{md:"7",sm:"12",cols:"12"},{default:s(()=>[e(p,{items:r.breadcrumbs,class:"text-h5 justify-md-end pa-1"},{divider:s(()=>[t("div",h,[e(n(o),{size:"17"})])]),prepend:s(()=>[e(f,{size:"small",icon:"mdi-home",class:"text-secondary mr-2"}),t("div",g,[e(n(o),{size:"17"})])]),_:1},8,["items"])]),_:1})]),_:1})]),_:1})]),_:1})]),_:1}))}});export{V as _};

View File

@@ -1 +1 @@
import{x as e,o as a,c as t,w as o,a as s,B as n,Z as r,W as c}from"./index-9075b0bb.js";const f=e({__name:"BlankLayout",setup(p){return(u,_)=>(a(),t(c,null,{default:o(()=>[s(n(r))]),_:1}))}});export{f as default};
import{x as e,o as a,c as t,w as o,a as s,B as n,Z as r,W as c}from"./index-5ac7c267.js";const f=e({__name:"BlankLayout",setup(p){return(u,_)=>(a(),t(c,null,{default:o(()=>[s(n(r))]),_:1}))}});export{f as default};

View File

@@ -1 +1 @@
import{_ as m}from"./BaseBreadcrumb.vue_vue_type_style_index_0_lang-3bf6ea80.js";import{_}from"./UiParentCard.vue_vue_type_script_setup_true_lang-73bcbbd5.js";import{x as p,D as a,o as r,s,a as e,w as t,f as o,V as i,F as n,u as g,c as h,a0 as b,e as x,t as y}from"./index-9075b0bb.js";const P=p({__name:"ColorPage",setup(C){const c=a({title:"Colors Page"}),d=a([{title:"Utilities",disabled:!1,href:"#"},{title:"Colors",disabled:!0,href:"#"}]),u=a(["primary","lightprimary","secondary","lightsecondary","info","success","accent","warning","error","darkText","lightText","borderLight","inputBorder","containerBg"]);return(V,k)=>(r(),s(n,null,[e(m,{title:c.value.title,breadcrumbs:d.value},null,8,["title","breadcrumbs"]),e(i,null,{default:t(()=>[e(o,{cols:"12",md:"12"},{default:t(()=>[e(_,{title:"Color Palette"},{default:t(()=>[e(i,null,{default:t(()=>[(r(!0),s(n,null,g(u.value,(l,f)=>(r(),h(o,{md:"3",cols:"12",key:f},{default:t(()=>[e(b,{rounded:"md",class:"align-center justify-center d-flex",height:"100",width:"100%",color:l},{default:t(()=>[x("class: "+y(l),1)]),_:2},1032,["color"])]),_:2},1024))),128))]),_:1})]),_:1})]),_:1})]),_:1})],64))}});export{P as default};
import{_ as m}from"./BaseBreadcrumb.vue_vue_type_style_index_0_lang-1875d383.js";import{_}from"./UiParentCard.vue_vue_type_script_setup_true_lang-b40a2daa.js";import{x as p,D as a,o as r,s,a as e,w as t,f as o,V as i,F as n,u as g,c as h,a0 as b,e as x,t as y}from"./index-5ac7c267.js";const P=p({__name:"ColorPage",setup(C){const c=a({title:"Colors Page"}),d=a([{title:"Utilities",disabled:!1,href:"#"},{title:"Colors",disabled:!0,href:"#"}]),u=a(["primary","lightprimary","secondary","lightsecondary","info","success","accent","warning","error","darkText","lightText","borderLight","inputBorder","containerBg"]);return(V,k)=>(r(),s(n,null,[e(m,{title:c.value.title,breadcrumbs:d.value},null,8,["title","breadcrumbs"]),e(i,null,{default:t(()=>[e(o,{cols:"12",md:"12"},{default:t(()=>[e(_,{title:"Color Palette"},{default:t(()=>[e(i,null,{default:t(()=>[(r(!0),s(n,null,g(u.value,(l,f)=>(r(),h(o,{md:"3",cols:"12",key:f},{default:t(()=>[e(b,{rounded:"md",class:"align-center justify-center d-flex",height:"100",width:"100%",color:l},{default:t(()=>[x("class: "+y(l),1)]),_:2},1032,["color"])]),_:2},1024))),128))]),_:1})]),_:1})]),_:1})]),_:1})],64))}});export{P as default};

@@ -1 +1 @@
import{o as l,s as o,u as c,c as n,w as u,Q as g,b as s,R as k,F as t,ab as h,O as p,t as m,a as V,ac as f,i as C,q as x,k as v,A as U}from"./index-9075b0bb.js";import{_ as w}from"./UiParentCard.vue_vue_type_script_setup_true_lang-73bcbbd5.js";const S={__name:"ConfigDetailCard",props:{config:Array},setup(d){return(y,B)=>(l(!0),o(t,null,c(d.config,r=>(l(),n(w,{key:r.name,title:r.name,style:{"margin-bottom":"16px"}},{default:u(()=>[g(s("a",null,"No data",512),[[k,d.config.length===0]]),(l(!0),o(t,null,c(r.body,e=>(l(),o(t,null,[e.config_type==="item"?(l(),o(t,{key:0},[e.val_type==="bool"?(l(),n(h,{key:0,modelValue:e.value,"onUpdate:modelValue":a=>e.value=a,label:e.name,hint:e.description,color:"primary",inset:""},null,8,["modelValue","onUpdate:modelValue","label","hint"])):e.val_type==="str"?(l(),n(p,{key:1,modelValue:e.value,"onUpdate:modelValue":a=>e.value=a,label:e.name,hint:e.description,style:{"margin-bottom":"8px"},variant:"outlined"},null,8,["modelValue","onUpdate:modelValue","label","hint"])):e.val_type==="int"?(l(),n(p,{key:2,modelValue:e.value,"onUpdate:modelValue":a=>e.value=a,label:e.name,hint:e.description,style:{"margin-bottom":"8px"},variant:"outlined"},null,8,["modelValue","onUpdate:modelValue","label","hint"])):e.val_type==="list"?(l(),o(t,{key:3},[s("span",null,m(e.name),1),V(f,{modelValue:e.value,"onUpdate:modelValue":a=>e.value=a,chips:"",clearable:"",label:"请添加",multiple:"","prepend-icon":"mdi-tag-multiple-outline"},{selection:u(({attrs:a,item:i,select:b,selected:_})=>[V(C,x(a,{"model-value":_,closable:"",onClick:b,"onClick:close":D=>y.remove(i)}),{default:u(()=>[s("strong",null,m(i),1)]),_:2},1040,["model-value","onClick","onClick:close"])]),_:2},1032,["modelValue","onUpdate:modelValue"])],64)):v("",!0)],64)):e.config_type==="divider"?(l(),n(U,{key:1,style:{"margin-top":"8px","margin-bottom":"8px"}})):v("",!0)],64))),256))]),_:2},1032,["title"]))),128))}};export{S as _};
import{o as l,s as o,u as c,c as n,w as u,Q as g,b as d,R as k,F as t,ac as h,O as p,t as m,a as V,ad as f,i as C,q as x,k as v,A as U}from"./index-5ac7c267.js";import{_ as w}from"./UiParentCard.vue_vue_type_script_setup_true_lang-b40a2daa.js";const S={__name:"ConfigDetailCard",props:{config:Array},setup(s){return(y,B)=>(l(!0),o(t,null,c(s.config,r=>(l(),n(w,{key:r.name,title:r.name,style:{"margin-bottom":"16px"}},{default:u(()=>[g(d("a",null,"No data",512),[[k,s.config.length===0]]),(l(!0),o(t,null,c(r.body,e=>(l(),o(t,null,[e.config_type==="item"?(l(),o(t,{key:0},[e.val_type==="bool"?(l(),n(h,{key:0,modelValue:e.value,"onUpdate:modelValue":a=>e.value=a,label:e.name,hint:e.description,color:"primary",inset:""},null,8,["modelValue","onUpdate:modelValue","label","hint"])):e.val_type==="str"?(l(),n(p,{key:1,modelValue:e.value,"onUpdate:modelValue":a=>e.value=a,label:e.name,hint:e.description,style:{"margin-bottom":"8px"},variant:"outlined"},null,8,["modelValue","onUpdate:modelValue","label","hint"])):e.val_type==="int"?(l(),n(p,{key:2,modelValue:e.value,"onUpdate:modelValue":a=>e.value=a,label:e.name,hint:e.description,style:{"margin-bottom":"8px"},variant:"outlined"},null,8,["modelValue","onUpdate:modelValue","label","hint"])):e.val_type==="list"?(l(),o(t,{key:3},[d("span",null,m(e.name),1),V(f,{modelValue:e.value,"onUpdate:modelValue":a=>e.value=a,chips:"",clearable:"",label:"请添加",multiple:"","prepend-icon":"mdi-tag-multiple-outline"},{selection:u(({attrs:a,item:i,select:b,selected:_})=>[V(C,x(a,{"model-value":_,closable:"",onClick:b,"onClick:close":D=>y.remove(i)}),{default:u(()=>[d("strong",null,m(i),1)]),_:2},1040,["model-value","onClick","onClick:close"])]),_:2},1032,["modelValue","onUpdate:modelValue"])],64)):v("",!0)],64)):e.config_type==="divider"?(l(),n(U,{key:1,style:{"margin-top":"8px","margin-bottom":"8px"}})):v("",!0)],64))),256))]),_:2},1032,["title"]))),128))}};export{S as _};

@@ -1 +1 @@
import{_ as y}from"./UiParentCard.vue_vue_type_script_setup_true_lang-73bcbbd5.js";import{x as h,o,c as u,w as t,a,a8 as b,b as c,K as x,e as f,t as g,G as V,A as w,L as S,a9 as $,J as B,s as _,d as v,F as d,u as p,f as G,V as T,aa as j,T as l}from"./index-9075b0bb.js";import{_ as m}from"./ConfigDetailCard-d45b9ca7.js";const D={class:"d-sm-flex align-center justify-space-between"},C=h({__name:"ConfigGroupCard",props:{title:String},setup(e){const s=e;return(i,n)=>(o(),u(B,{variant:"outlined",elevation:"0",class:"withbg",style:{width:"50%"}},{default:t(()=>[a(b,{style:{padding:"10px 20px"}},{default:t(()=>[c("div",D,[a(x,null,{default:t(()=>[f(g(s.title),1)]),_:1}),a(V)])]),_:1}),a(w),a(S,null,{default:t(()=>[$(i.$slots,"default")]),_:3})]),_:3}))}}),I={style:{display:"flex","flex-direction":"row","justify-content":"space-between","align-items":"center","margin-bottom":"12px"}},N={style:{display:"flex","flex-direction":"row"}},R={style:{"margin-right":"10px",color:"black"}},F={style:{color:"#222"}},k=h({__name:"ConfigGroupItem",props:{title:String,desc:String,btnRoute:String,namespace:String},setup(e){const 
s=e;return(i,n)=>(o(),_("div",I,[c("div",N,[c("h3",R,g(s.title),1),c("p",F,g(s.desc),1)]),a(v,{to:s.btnRoute,color:"primary",class:"ml-2",style:{"border-radius":"10px"}},{default:t(()=>[f("配置")]),_:1},8,["to"])]))}}),L={style:{display:"flex","flex-direction":"row",padding:"16px",gap:"16px",width:"100%"}},P={name:"ConfigPage",components:{UiParentCard:y,ConfigGroupCard:C,ConfigGroupItem:k,ConfigDetailCard:m},data(){return{config_data:[],config_base:[],save_message_snack:!1,save_message:"",save_message_success:"",config_outline:[],namespace:""}},mounted(){this.getConfig()},methods:{switchConfig(e){l.get("/api/configs?namespace="+e).then(s=>{this.namespace=e,this.config_data=s.data.data,console.log(this.config_data)}).catch(s=>{save_message=s,save_message_snack=!0,save_message_success="error"})},getConfig(){l.get("/api/config_outline").then(e=>{this.config_outline=e.data.data,console.log(this.config_outline)}).catch(e=>{save_message=e,save_message_snack=!0,save_message_success="error"}),l.get("/api/configs").then(e=>{this.config_base=e.data.data,console.log(this.config_data)}).catch(e=>{save_message=e,save_message_snack=!0,save_message_success="error"})},updateConfig(){l.post("/api/configs",{base_config:this.config_base,config:this.config_data,namespace:this.namespace}).then(e=>{e.data.status==="success"?(this.save_message=e.data.message,this.save_message_snack=!0,this.save_message_success="success"):(this.save_message=e.data.message,this.save_message_snack=!0,this.save_message_success="error")}).catch(e=>{this.save_message=e,this.save_message_snack=!0,this.save_message_success="error"})}}},J=Object.assign(P,{setup(e){return(s,i)=>(o(),_(d,null,[a(T,null,{default:t(()=>[c("div",L,[(o(!0),_(d,null,p(s.config_outline,n=>(o(),u(C,{key:n.name,title:n.name},{default:t(()=>[(o(!0),_(d,null,p(n.body,r=>(o(),u(k,{title:r.title,desc:r.desc,namespace:r.namespace,onClick:U=>s.switchConfig(r.namespace)},null,8,["title","desc","namespace","onClick"]))),256))]),_:2},1032,["title"])))
,128))]),a(G,{cols:"12",md:"12"},{default:t(()=>[a(m,{config:s.config_data},null,8,["config"]),a(m,{config:s.config_base},null,8,["config"])]),_:1})]),_:1}),a(v,{icon:"mdi-content-save",size:"x-large",style:{position:"fixed",right:"52px",bottom:"52px"},color:"darkprimary",onClick:s.updateConfig},null,8,["onClick"]),a(j,{timeout:2e3,elevation:"24",color:s.save_message_success,modelValue:s.save_message_snack,"onUpdate:modelValue":i[0]||(i[0]=n=>s.save_message_snack=n)},{default:t(()=>[f(g(s.save_message),1)]),_:1},8,["color","modelValue"])],64))}});export{J as default};
import{_ as b}from"./UiParentCard.vue_vue_type_script_setup_true_lang-b40a2daa.js";import{x as h,o,c as u,w as t,a,a8 as y,b as c,K as x,e as f,t as g,G as V,A as w,L as S,a9 as $,J as B,s as _,d as v,F as d,u as p,f as G,V as T,ab as j,T as l}from"./index-5ac7c267.js";import{_ as m}from"./ConfigDetailCard-756c045d.js";const D={class:"d-sm-flex align-center justify-space-between"},C=h({__name:"ConfigGroupCard",props:{title:String},setup(e){const s=e;return(i,n)=>(o(),u(B,{variant:"outlined",elevation:"0",class:"withbg",style:{width:"50%"}},{default:t(()=>[a(y,{style:{padding:"10px 20px"}},{default:t(()=>[c("div",D,[a(x,null,{default:t(()=>[f(g(s.title),1)]),_:1}),a(V)])]),_:1}),a(w),a(S,null,{default:t(()=>[$(i.$slots,"default")]),_:3})]),_:3}))}}),I={style:{display:"flex","flex-direction":"row","justify-content":"space-between","align-items":"center","margin-bottom":"12px"}},N={style:{display:"flex","flex-direction":"row"}},R={style:{"margin-right":"10px",color:"black"}},F={style:{color:"#222"}},k=h({__name:"ConfigGroupItem",props:{title:String,desc:String,btnRoute:String,namespace:String},setup(e){const s=e;return(i,n)=>(o(),_("div",I,[c("div",N,[c("h3",R,g(s.title),1),c("p",F,g(s.desc),1)]),a(v,{to:s.btnRoute,color:"primary",class:"ml-2",style:{"border-radius":"10px"}},{default:t(()=>[f("配置")]),_:1},8,["to"])]))}}),L={style:{display:"flex","flex-direction":"row",padding:"16px",gap:"16px",width:"100%"}},P={name:"ConfigPage",components:{UiParentCard:b,ConfigGroupCard:C,ConfigGroupItem:k,ConfigDetailCard:m},data(){return{config_data:[],config_base:[],save_message_snack:!1,save_message:"",save_message_success:"",config_outline:[],namespace:""}},mounted(){this.getConfig()},methods:{switchConfig(e){l.get("/api/configs?namespace="+e).then(s=>{this.namespace=e,this.config_data=s.data.data,console.log(this.config_data)}).catch(s=>{save_message=s,save_message_snack=!0,save_message_success="error"})},getConfig(){l.get("/api/config_outline").then(e=>{this.config_outline=e.data.data,console.log(this.config_outline)}).catch(e=>{save_message=e,save_message_snack=!0,save_message_success="error"}),l.get("/api/configs").then(e=>{this.config_base=e.data.data,console.log(this.config_data)}).catch(e=>{save_message=e,save_message_snack=!0,save_message_success="error"})},updateConfig(){l.post("/api/configs",{base_config:this.config_base,config:this.config_data,namespace:this.namespace}).then(e=>{e.data.status==="success"?(this.save_message=e.data.message,this.save_message_snack=!0,this.save_message_success="success"):(this.save_message=e.data.message,this.save_message_snack=!0,this.save_message_success="error")}).catch(e=>{this.save_message=e,this.save_message_snack=!0,this.save_message_success="error"})}}},J=Object.assign(P,{setup(e){return(s,i)=>(o(),_(d,null,[a(T,null,{default:t(()=>[c("div",L,[(o(!0),_(d,null,p(s.config_outline,n=>(o(),u(C,{key:n.name,title:n.name},{default:t(()=>[(o(!0),_(d,null,p(n.body,r=>(o(),u(k,{title:r.title,desc:r.desc,namespace:r.namespace,onClick:U=>s.switchConfig(r.namespace)},null,8,["title","desc","namespace","onClick"]))),256))]),_:2},1032,["title"]))),128))]),a(G,{cols:"12",md:"12"},{default:t(()=>[a(m,{config:s.config_data},null,8,["config"]),a(m,{config:s.config_base},null,8,["config"])]),_:1})]),_:1}),a(v,{icon:"mdi-content-save",size:"x-large",style:{position:"fixed",right:"52px",bottom:"52px"},color:"darkprimary",onClick:s.updateConfig},null,8,["onClick"]),a(j,{timeout:2e3,elevation:"24",color:s.save_message_success,modelValue:s.save_message_snack,"onUpdate:modelValue":i[0]||(i[0]=n=>s.save_message_snack=n)},{default:t(()=>[f(g(s.save_message),1)]),_:1},8,["color","modelValue"])],64))}});export{J as default};

File diff suppressed because one or more lines are too long

@@ -0,0 +1 @@
import{_ as a}from"./_plugin-vue_export-helper-c27b6911.js";import{o,c,w as s,V as i,a as t,b as e,d as l,e as r,f as d}from"./index-5ac7c267.js";const n="/assets/img-error-bg-41f65efa.svg",_="/assets/img-error-blue-f50c8e77.svg",m="/assets/img-error-text-630dc36d.svg",g="/assets/img-error-purple-b97a483b.svg";const p={},u={class:"text-center"},f=e("div",{class:"CardMediaWrapper"},[e("img",{src:n,alt:"grid",class:"w-100"}),e("img",{src:_,alt:"grid",class:"CardMediaParts"}),e("img",{src:m,alt:"build",class:"CardMediaBuild"}),e("img",{src:g,alt:"build",class:"CardMediaBuild"})],-1),h=e("h1",{class:"text-h1"},"Something is wrong",-1),v=e("p",null,[e("small",null,[r("The page you are looking was moved, removed, "),e("br"),r("renamed, or might never exist! ")])],-1);function x(b,V){return o(),c(i,{"no-gutters":"",class:"h-100vh"},{default:s(()=>[t(d,{class:"d-flex align-center justify-center"},{default:s(()=>[e("div",u,[f,h,v,t(l,{variant:"flat",color:"primary",class:"mt-4",to:"/","prepend-icon":"mdi-home"},{default:s(()=>[r(" Home")]),_:1})])]),_:1})]),_:1})}const C=a(p,[["render",x]]);export{C as default};

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

@@ -1,4 +1,4 @@
import{_ as _t}from"./LogoDark.vue_vue_type_script_setup_true_lang-b3521e6c.js";import{x as ke,af as we,r as Ot,ag as Vt,D as A,ah as Ne,a2 as P,B as I,ai as Q,aj as St,I as Be,ak as Ie,al as Et,am as jt,an as At,ao as G,y as wt,o as Re,c as tt,w as C,a as j,O as He,b as ge,ap as Ft,d as Pt,e as Ge,s as Ct,aq as Tt,t as Nt,k as Bt,U as It,f as Fe,N as Rt,V as Pe,J as Ye,L as kt}from"./index-9075b0bb.js";import{a as Mt}from"./md5-48eb25f3.js";/**
import{_ as _t}from"./LogoDark.vue_vue_type_script_setup_true_lang-d555e5be.js";import{x as ke,ag as we,r as Ot,ah as Vt,D as A,ai as Ne,a2 as P,B as I,aj as Q,ak as St,I as Be,al as Ie,am as Et,an as jt,ao as At,ap as G,y as wt,o as Re,c as tt,w as C,a as j,O as He,b as ge,aq as Ft,d as Pt,e as Ge,s as Ct,ar as Tt,t as Nt,k as Bt,U as It,f as Fe,N as Rt,V as Pe,J as Ye,L as kt}from"./index-5ac7c267.js";import{a as Mt}from"./md5-086248bf.js";/**
* vee-validate v4.11.3
* (c) 2023 Abdelrahman Awad
* @license MIT

@@ -1 +1 @@
import{av as _,x as d,D as n,o as c,s as m,a as f,w as p,Q as r,b as a,R as o,B as t,aw as h}from"./index-9075b0bb.js";const s={Sidebar_drawer:!0,Customizer_drawer:!1,mini_sidebar:!1,fontTheme:"Roboto",inputBg:!1},l=_({id:"customizer",state:()=>({Sidebar_drawer:s.Sidebar_drawer,Customizer_drawer:s.Customizer_drawer,mini_sidebar:s.mini_sidebar,fontTheme:"Poppins",inputBg:s.inputBg}),getters:{},actions:{SET_SIDEBAR_DRAWER(){this.Sidebar_drawer=!this.Sidebar_drawer},SET_MINI_SIDEBAR(e){this.mini_sidebar=e},SET_FONT(e){this.fontTheme=e}}}),u={class:"logo",style:{display:"flex","align-items":"center"}},b={style:{"font-size":"24px","font-weight":"1000"}},w={style:{"font-size":"20px","font-weight":"1000"}},S={style:{"font-size":"20px"}},z=d({__name:"LogoDark",setup(e){n("rgb(var(--v-theme-primary))"),n("rgb(var(--v-theme-secondary))");const i=l();return(g,B)=>(c(),m("div",u,[f(t(h),{to:"/",style:{"text-decoration":"none",color:"black"}},{default:p(()=>[r(a("span",b,"AstrBot 仪表盘",512),[[o,!t(i).mini_sidebar]]),r(a("span",w,"Astr",512),[[o,t(i).mini_sidebar]]),r(a("span",S,"Bot",512),[[o,t(i).mini_sidebar]])]),_:1})]))}});export{z as _,l as u};
import{aw as _,x as d,D as n,o as c,s as m,a as f,w as p,Q as r,b as a,R as o,B as t,ax as h}from"./index-5ac7c267.js";const s={Sidebar_drawer:!0,Customizer_drawer:!1,mini_sidebar:!1,fontTheme:"Roboto",inputBg:!1},l=_({id:"customizer",state:()=>({Sidebar_drawer:s.Sidebar_drawer,Customizer_drawer:s.Customizer_drawer,mini_sidebar:s.mini_sidebar,fontTheme:"Poppins",inputBg:s.inputBg}),getters:{},actions:{SET_SIDEBAR_DRAWER(){this.Sidebar_drawer=!this.Sidebar_drawer},SET_MINI_SIDEBAR(e){this.mini_sidebar=e},SET_FONT(e){this.fontTheme=e}}}),u={class:"logo",style:{display:"flex","align-items":"center"}},b={style:{"font-size":"24px","font-weight":"1000"}},w={style:{"font-size":"20px","font-weight":"1000"}},S={style:{"font-size":"20px"}},z=d({__name:"LogoDark",setup(e){n("rgb(var(--v-theme-primary))"),n("rgb(var(--v-theme-secondary))");const i=l();return(g,B)=>(c(),m("div",u,[f(t(h),{to:"/",style:{"text-decoration":"none",color:"black"}},{default:p(()=>[r(a("span",b,"AstrBot 仪表盘",512),[[o,!t(i).mini_sidebar]]),r(a("span",w,"Astr",512),[[o,t(i).mini_sidebar]]),r(a("span",S,"Bot",512),[[o,t(i).mini_sidebar]])]),_:1})]))}});export{z as _,l as u};
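A small inconsistency survives both sides of this customizer-store diff: the defaults object `s` declares `fontTheme:"Roboto"`, but the store's initial state hardcodes `fontTheme:"Poppins"`, silently ignoring the default while every other field is taken from `s`. A sketch of deriving the whole initial state from the defaults so the two cannot drift (plain objects here, outside Pinia; `initialState` is a hypothetical helper):

```javascript
// Defaults copied from the bundled customizer store above.
const defaults = {
  Sidebar_drawer: true,
  Customizer_drawer: false,
  mini_sidebar: false,
  fontTheme: "Roboto",
  inputBg: false,
};

// Build initial state by spreading the defaults; any deviation
// (such as the Poppins override) must then be written explicitly.
function initialState(overrides = {}) {
  return { ...defaults, ...overrides };
}
```

With this shape, `initialState()` yields `"Roboto"` and the Poppins choice would have to appear as an explicit `initialState({ fontTheme: "Poppins" })`, making the override visible rather than accidental.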

@@ -1 +1 @@
import{_ as o}from"./BaseBreadcrumb.vue_vue_type_style_index_0_lang-3bf6ea80.js";import{_ as i}from"./UiParentCard.vue_vue_type_script_setup_true_lang-73bcbbd5.js";import{x as n,D as a,o as c,s as m,a as e,w as t,f as d,b as f,V as _,F as u}from"./index-9075b0bb.js";const p=["innerHTML"],v=n({__name:"MaterialIcons",setup(b){const s=a({title:"Material Icons"}),r=a('<iframe src="https://materialdesignicons.com/" frameborder="0" width="100%" height="1000"></iframe>'),l=a([{title:"Icons",disabled:!1,href:"#"},{title:"Material Icons",disabled:!0,href:"#"}]);return(h,M)=>(c(),m(u,null,[e(o,{title:s.value.title,breadcrumbs:l.value},null,8,["title","breadcrumbs"]),e(_,null,{default:t(()=>[e(d,{cols:"12",md:"12"},{default:t(()=>[e(i,{title:"Material Icons"},{default:t(()=>[f("div",{innerHTML:r.value},null,8,p)]),_:1})]),_:1})]),_:1})],64))}});export{v as default};
import{_ as o}from"./BaseBreadcrumb.vue_vue_type_style_index_0_lang-1875d383.js";import{_ as i}from"./UiParentCard.vue_vue_type_script_setup_true_lang-b40a2daa.js";import{x as n,D as a,o as c,s as m,a as e,w as t,f as d,b as f,V as _,F as u}from"./index-5ac7c267.js";const p=["innerHTML"],v=n({__name:"MaterialIcons",setup(b){const s=a({title:"Material Icons"}),r=a('<iframe src="https://materialdesignicons.com/" frameborder="0" width="100%" height="1000"></iframe>'),l=a([{title:"Icons",disabled:!1,href:"#"},{title:"Material Icons",disabled:!0,href:"#"}]);return(h,M)=>(c(),m(u,null,[e(o,{title:s.value.title,breadcrumbs:l.value},null,8,["title","breadcrumbs"]),e(_,null,{default:t(()=>[e(d,{cols:"12",md:"12"},{default:t(()=>[e(i,{title:"Material Icons"},{default:t(()=>[f("div",{innerHTML:r.value},null,8,p)]),_:1})]),_:1})]),_:1})],64))}});export{v as default};

@@ -0,0 +1 @@
import{_ as B}from"./LogoDark.vue_vue_type_script_setup_true_lang-d555e5be.js";import{x as y,D as o,o as b,s as U,a as e,w as a,b as n,B as $,d as u,f as d,A as _,e as f,V as r,O as m,aq as q,av as A,F as E,c as F,N as T,J as V,L as P}from"./index-5ac7c267.js";const z="/assets/social-google-9b2fa67a.svg",N=["src"],S=n("span",{class:"ml-2"},"Sign up with Google",-1),D=n("h5",{class:"text-h5 text-center my-4 mb-8"},"Sign up with Email address",-1),G={class:"d-sm-inline-flex align-center mt-2 mb-7 mb-sm-0 font-weight-bold"},L=n("a",{href:"#",class:"ml-1 text-lightText"},"Terms and Condition",-1),O={class:"mt-5 text-right"},j=y({__name:"AuthRegister",setup(w){const c=o(!1),i=o(!1),p=o(""),v=o(""),g=o(),h=o(""),x=o(""),k=o([s=>!!s||"Password is required",s=>s&&s.length<=10||"Password must be less than 10 characters"]),C=o([s=>!!s||"E-mail is required",s=>/.+@.+\..+/.test(s)||"E-mail must be valid"]);function R(){g.value.validate()}return(s,l)=>(b(),U(E,null,[e(u,{block:"",color:"primary",variant:"outlined",class:"text-lightText googleBtn"},{default:a(()=>[n("img",{src:$(z),alt:"google"},null,8,N),S]),_:1}),e(r,null,{default:a(()=>[e(d,{class:"d-flex align-center"},{default:a(()=>[e(_,{class:"custom-devider"}),e(u,{variant:"outlined",class:"orbtn",rounded:"md",size:"small"},{default:a(()=>[f("OR")]),_:1}),e(_,{class:"custom-devider"})]),_:1})]),_:1}),D,e(A,{ref_key:"Regform",ref:g,"lazy-validation":"",action:"/dashboards/analytical",class:"mt-7 loginForm"},{default:a(()=>[e(r,null,{default:a(()=>[e(d,{cols:"12",sm:"6"},{default:a(()=>[e(m,{modelValue:h.value,"onUpdate:modelValue":l[0]||(l[0]=t=>h.value=t),density:"comfortable","hide-details":"auto",variant:"outlined",color:"primary",label:"Firstname"},null,8,["modelValue"])]),_:1}),e(d,{cols:"12",sm:"6"},{default:a(()=>[e(m,{modelValue:x.value,"onUpdate:modelValue":l[1]||(l[1]=t=>x.value=t),density:"comfortable","hide-details":"auto",variant:"outlined",color:"primary",label:"Lastname"},null,8,["modelValue"])]),_:1})]),_:1}),e(m,{modelValue:v.value,"onUpdate:modelValue":l[2]||(l[2]=t=>v.value=t),rules:C.value,label:"Email Address / Username",class:"mt-4 mb-4",required:"",density:"comfortable","hide-details":"auto",variant:"outlined",color:"primary"},null,8,["modelValue","rules"]),e(m,{modelValue:p.value,"onUpdate:modelValue":l[3]||(l[3]=t=>p.value=t),rules:k.value,label:"Password",required:"",density:"comfortable",variant:"outlined",color:"primary","hide-details":"auto","append-icon":i.value?"mdi-eye":"mdi-eye-off",type:i.value?"text":"password","onClick:append":l[4]||(l[4]=t=>i.value=!i.value),class:"pwdInput"},null,8,["modelValue","rules","append-icon","type"]),n("div",G,[e(q,{modelValue:c.value,"onUpdate:modelValue":l[5]||(l[5]=t=>c.value=t),rules:[t=>!!t||"You must agree to continue!"],label:"Agree with?",required:"",color:"primary",class:"ms-n2","hide-details":""},null,8,["modelValue","rules"]),L]),e(u,{color:"secondary",block:"",class:"mt-2",variant:"flat",size:"large",onClick:l[6]||(l[6]=t=>R())},{default:a(()=>[f("Sign Up")]),_:1})]),_:1},512),n("div",O,[e(_),e(u,{variant:"plain",to:"/auth/login",class:"mt-2 text-capitalize mr-n2"},{default:a(()=>[f("Already have an account?")]),_:1})])],64))}});const I={class:"pa-7 pa-sm-12"},J=n("h2",{class:"text-secondary text-h2 mt-8"},"Sign up",-1),Y=n("h4",{class:"text-disabled text-h4 mt-3"},"Enter credentials to continue",-1),M=y({__name:"RegisterPage",setup(w){return(c,i)=>(b(),F(r,{class:"h-100vh","no-gutters":""},{default:a(()=>[e(d,{cols:"12",class:"d-flex align-center bg-lightprimary"},{default:a(()=>[e(T,null,{default:a(()=>[n("div",I,[e(r,{justify:"center"},{default:a(()=>[e(d,{cols:"12",lg:"10",xl:"6",md:"7"},{default:a(()=>[e(V,{elevation:"0",class:"loginBox"},{default:a(()=>[e(V,{variant:"outlined"},{default:a(()=>[e(P,{class:"pa-9"},{default:a(()=>[e(r,null,{default:a(()=>[e(d,{cols:"12",class:"text-center"},{default:a(()=>[e(B),J,Y]),_:1})]),_:1}),e(j)]),_:1})]),_:1})]),_:1})]),_:1})]),_:1})])]),_:1})]),_:1})]),_:1}))}});export{M as default};
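A nit in this template code: the password rule `s=>s&&s.length<=10||"Password must be less than 10 characters"` accepts a password of exactly 10 characters while its message says "less than 10", so the predicate and the text disagree. A sketch of a rule factory whose message is derived from the same bound, in the Vuetify convention of returning `true` when valid and an error string otherwise (`maxLen` is a hypothetical name):

```javascript
// Vuetify-style validation rule: true when valid, otherwise the message.
// Deriving the message from the same `n` keeps predicate and text in sync.
const maxLen = (n) => (s) =>
  (typeof s === "string" && s.length <= n) || `Must be at most ${n} characters`;

const rule = maxLen(10);
// rule("secret")       -> true
// rule("0123456789")   -> true (exactly 10 characters is allowed, as stated)
// rule("01234567890")  -> "Must be at most 10 characters"
```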

@@ -1 +1 @@
import{_ as c}from"./BaseBreadcrumb.vue_vue_type_style_index_0_lang-3bf6ea80.js";import{_ as f}from"./UiParentCard.vue_vue_type_script_setup_true_lang-73bcbbd5.js";import{x as m,D as s,o as l,s as r,a as e,w as a,f as i,V as o,F as d,u as _,J as p,X as b,b as h,t as g}from"./index-9075b0bb.js";const v=m({__name:"ShadowPage",setup(w){const n=s({title:"Shadow Page"}),u=s([{title:"Utilities",disabled:!1,href:"#"},{title:"Shadow",disabled:!0,href:"#"}]);return(V,x)=>(l(),r(d,null,[e(c,{title:n.value.title,breadcrumbs:u.value},null,8,["title","breadcrumbs"]),e(o,null,{default:a(()=>[e(i,{cols:"12",md:"12"},{default:a(()=>[e(f,{title:"Basic Shadow"},{default:a(()=>[e(o,{justify:"center"},{default:a(()=>[(l(),r(d,null,_(25,t=>e(i,{key:t,cols:"auto"},{default:a(()=>[e(p,{height:"100",width:"100",class:b(["mb-5",["d-flex justify-center align-center bg-primary",`elevation-${t}`]])},{default:a(()=>[h("div",null,g(t-1),1)]),_:2},1032,["class"])]),_:2},1024)),64))]),_:1})]),_:1})]),_:1})]),_:1})],64))}});export{v as default};
import{_ as c}from"./BaseBreadcrumb.vue_vue_type_style_index_0_lang-1875d383.js";import{_ as f}from"./UiParentCard.vue_vue_type_script_setup_true_lang-b40a2daa.js";import{x as m,D as s,o as l,s as r,a as e,w as a,f as i,V as o,F as d,u as _,J as p,X as b,b as h,t as g}from"./index-5ac7c267.js";const v=m({__name:"ShadowPage",setup(w){const n=s({title:"Shadow Page"}),u=s([{title:"Utilities",disabled:!1,href:"#"},{title:"Shadow",disabled:!0,href:"#"}]);return(V,x)=>(l(),r(d,null,[e(c,{title:n.value.title,breadcrumbs:u.value},null,8,["title","breadcrumbs"]),e(o,null,{default:a(()=>[e(i,{cols:"12",md:"12"},{default:a(()=>[e(f,{title:"Basic Shadow"},{default:a(()=>[e(o,{justify:"center"},{default:a(()=>[(l(),r(d,null,_(25,t=>e(i,{key:t,cols:"auto"},{default:a(()=>[e(p,{height:"100",width:"100",class:b(["mb-5",["d-flex justify-center align-center bg-primary",`elevation-${t}`]])},{default:a(()=>[h("div",null,g(t-1),1)]),_:2},1032,["class"])]),_:2},1024)),64))]),_:1})]),_:1})]),_:1})]),_:1})],64))}});export{v as default};

@@ -1 +1 @@
import{_ as o}from"./BaseBreadcrumb.vue_vue_type_style_index_0_lang-3bf6ea80.js";import{_ as n}from"./UiParentCard.vue_vue_type_script_setup_true_lang-73bcbbd5.js";import{x as c,D as a,o as i,s as m,a as e,w as t,f as d,b as f,V as _,F as u}from"./index-9075b0bb.js";const b=["innerHTML"],w=c({__name:"TablerIcons",setup(p){const s=a({title:"Tabler Icons"}),r=a('<iframe src="https://tablericons.com/" frameborder="0" width="100%" height="600"></iframe>'),l=a([{title:"Icons",disabled:!1,href:"#"},{title:"Tabler Icons",disabled:!0,href:"#"}]);return(h,T)=>(i(),m(u,null,[e(o,{title:s.value.title,breadcrumbs:l.value},null,8,["title","breadcrumbs"]),e(_,null,{default:t(()=>[e(d,{cols:"12",md:"12"},{default:t(()=>[e(n,{title:"Tabler Icons"},{default:t(()=>[f("div",{innerHTML:r.value},null,8,b)]),_:1})]),_:1})]),_:1})],64))}});export{w as default};
import{_ as o}from"./BaseBreadcrumb.vue_vue_type_style_index_0_lang-1875d383.js";import{_ as n}from"./UiParentCard.vue_vue_type_script_setup_true_lang-b40a2daa.js";import{x as c,D as a,o as i,s as m,a as e,w as t,f as d,b as f,V as _,F as u}from"./index-5ac7c267.js";const b=["innerHTML"],w=c({__name:"TablerIcons",setup(p){const s=a({title:"Tabler Icons"}),r=a('<iframe src="https://tablericons.com/" frameborder="0" width="100%" height="600"></iframe>'),l=a([{title:"Icons",disabled:!1,href:"#"},{title:"Tabler Icons",disabled:!0,href:"#"}]);return(h,T)=>(i(),m(u,null,[e(o,{title:s.value.title,breadcrumbs:l.value},null,8,["title","breadcrumbs"]),e(_,null,{default:t(()=>[e(d,{cols:"12",md:"12"},{default:t(()=>[e(n,{title:"Tabler Icons"},{default:t(()=>[f("div",{innerHTML:r.value},null,8,b)]),_:1})]),_:1})]),_:1})],64))}});export{w as default};

@@ -1 +1 @@
import{_ as m}from"./BaseBreadcrumb.vue_vue_type_style_index_0_lang-3bf6ea80.js";import{_ as v}from"./UiParentCard.vue_vue_type_script_setup_true_lang-73bcbbd5.js";import{x as f,o as i,c as g,w as e,a,a8 as y,K as b,e as w,t as d,A as C,L as V,a9 as L,J as _,D as o,s as h,f as k,b as t,F as x,u as B,X as H,V as T}from"./index-9075b0bb.js";const s=f({__name:"UiChildCard",props:{title:String},setup(r){const l=r;return(n,c)=>(i(),g(_,{variant:"outlined"},{default:e(()=>[a(y,{class:"py-3"},{default:e(()=>[a(b,{class:"text-h5"},{default:e(()=>[w(d(l.title),1)]),_:1})]),_:1}),a(C),a(V,null,{default:e(()=>[L(n.$slots,"default")]),_:3})]),_:3}))}}),D={class:"d-flex flex-column gap-1"},S={class:"text-caption pa-2 bg-lightprimary"},z=t("div",{class:"text-grey"},"Class",-1),N={class:"font-weight-medium"},$=t("div",null,[t("p",{class:"text-left"},"Left aligned on all viewport sizes."),t("p",{class:"text-center"},"Center aligned on all viewport sizes."),t("p",{class:"text-right"},"Right aligned on all viewport sizes."),t("p",{class:"text-sm-left"},"Left aligned on viewports SM (small) or wider."),t("p",{class:"text-right text-md-left"},"Left aligned on viewports MD (medium) or wider."),t("p",{class:"text-right text-lg-left"},"Left aligned on viewports LG (large) or wider."),t("p",{class:"text-right text-xl-left"},"Left aligned on viewports XL (extra-large) or wider.")],-1),M=t("div",{class:"d-flex justify-space-between flex-row"},[t("a",{href:"#",class:"text-decoration-none"},"Non-underlined link"),t("div",{class:"text-decoration-line-through"},"Line-through text"),t("div",{class:"text-decoration-overline"},"Overline text"),t("div",{class:"text-decoration-underline"},"Underline text")],-1),O=t("div",null,[t("p",{class:"text-high-emphasis"},"High-emphasis has an opacity of 87% in light theme and 100% in dark."),t("p",{class:"text-medium-emphasis"},"Medium-emphasis text and hint text have opacities of 60% in light theme and 70% in dark."),t("p",{class:"text-disabled"},"Disabled text has an opacity of 38% in light theme and 50% in dark.")],-1),j=f({__name:"TypographyPage",setup(r){const l=o({title:"Typography Page"}),n=o([["Heading 1","text-h1"],["Heading 2","text-h2"],["Heading 3","text-h3"],["Heading 4","text-h4"],["Heading 5","text-h5"],["Heading 6","text-h6"],["Subtitle 1","text-subtitle-1"],["Subtitle 2","text-subtitle-2"],["Body 1","text-body-1"],["Body 2","text-body-2"],["Button","text-button"],["Caption","text-caption"],["Overline","text-overline"]]),c=o([{title:"Utilities",disabled:!1,href:"#"},{title:"Typography",disabled:!0,href:"#"}]);return(U,F)=>(i(),h(x,null,[a(m,{title:l.value.title,breadcrumbs:c.value},null,8,["title","breadcrumbs"]),a(T,null,{default:e(()=>[a(k,{cols:"12",md:"12"},{default:e(()=>[a(v,{title:"Basic Typography"},{default:e(()=>[a(s,{title:"Heading"},{default:e(()=>[t("div",D,[(i(!0),h(x,null,B(n.value,([p,u])=>(i(),g(_,{variant:"outlined",key:p,class:"my-4"},{default:e(()=>[t("div",{class:H([u,"pa-2"])},d(p),3),t("div",S,[z,t("div",N,d(u),1)])]),_:2},1024))),128))])]),_:1}),a(s,{title:"Text-alignment",class:"mt-8"},{default:e(()=>[$]),_:1}),a(s,{title:"Decoration",class:"mt-8"},{default:e(()=>[M]),_:1}),a(s,{title:"Opacity",class:"mt-8"},{default:e(()=>[O]),_:1})]),_:1})]),_:1})]),_:1})],64))}});export{j as default};
import{_ as m}from"./BaseBreadcrumb.vue_vue_type_style_index_0_lang-1875d383.js";import{_ as v}from"./UiParentCard.vue_vue_type_script_setup_true_lang-b40a2daa.js";import{x as f,o as i,c as g,w as e,a,a8 as y,K as b,e as w,t as d,A as C,L as V,a9 as L,J as _,D as o,s as h,f as k,b as t,F as x,u as B,X as H,V as T}from"./index-5ac7c267.js";const s=f({__name:"UiChildCard",props:{title:String},setup(r){const l=r;return(n,c)=>(i(),g(_,{variant:"outlined"},{default:e(()=>[a(y,{class:"py-3"},{default:e(()=>[a(b,{class:"text-h5"},{default:e(()=>[w(d(l.title),1)]),_:1})]),_:1}),a(C),a(V,null,{default:e(()=>[L(n.$slots,"default")]),_:3})]),_:3}))}}),D={class:"d-flex flex-column gap-1"},S={class:"text-caption pa-2 bg-lightprimary"},z=t("div",{class:"text-grey"},"Class",-1),N={class:"font-weight-medium"},$=t("div",null,[t("p",{class:"text-left"},"Left aligned on all viewport sizes."),t("p",{class:"text-center"},"Center aligned on all viewport sizes."),t("p",{class:"text-right"},"Right aligned on all viewport sizes."),t("p",{class:"text-sm-left"},"Left aligned on viewports SM (small) or wider."),t("p",{class:"text-right text-md-left"},"Left aligned on viewports MD (medium) or wider."),t("p",{class:"text-right text-lg-left"},"Left aligned on viewports LG (large) or wider."),t("p",{class:"text-right text-xl-left"},"Left aligned on viewports XL (extra-large) or wider.")],-1),M=t("div",{class:"d-flex justify-space-between flex-row"},[t("a",{href:"#",class:"text-decoration-none"},"Non-underlined link"),t("div",{class:"text-decoration-line-through"},"Line-through text"),t("div",{class:"text-decoration-overline"},"Overline text"),t("div",{class:"text-decoration-underline"},"Underline text")],-1),O=t("div",null,[t("p",{class:"text-high-emphasis"},"High-emphasis has an opacity of 87% in light theme and 100% in dark."),t("p",{class:"text-medium-emphasis"},"Medium-emphasis text and hint text have opacities of 60% in light theme and 70% in dark."),t("p",{class:"text-disabled"},"Disabled text has an opacity of 38% in light theme and 50% in dark.")],-1),j=f({__name:"TypographyPage",setup(r){const l=o({title:"Typography Page"}),n=o([["Heading 1","text-h1"],["Heading 2","text-h2"],["Heading 3","text-h3"],["Heading 4","text-h4"],["Heading 5","text-h5"],["Heading 6","text-h6"],["Subtitle 1","text-subtitle-1"],["Subtitle 2","text-subtitle-2"],["Body 1","text-body-1"],["Body 2","text-body-2"],["Button","text-button"],["Caption","text-caption"],["Overline","text-overline"]]),c=o([{title:"Utilities",disabled:!1,href:"#"},{title:"Typography",disabled:!0,href:"#"}]);return(U,F)=>(i(),h(x,null,[a(m,{title:l.value.title,breadcrumbs:c.value},null,8,["title","breadcrumbs"]),a(T,null,{default:e(()=>[a(k,{cols:"12",md:"12"},{default:e(()=>[a(v,{title:"Basic Typography"},{default:e(()=>[a(s,{title:"Heading"},{default:e(()=>[t("div",D,[(i(!0),h(x,null,B(n.value,([p,u])=>(i(),g(_,{variant:"outlined",key:p,class:"my-4"},{default:e(()=>[t("div",{class:H([u,"pa-2"])},d(p),3),t("div",S,[z,t("div",N,d(u),1)])]),_:2},1024))),128))])]),_:1}),a(s,{title:"Text-alignment",class:"mt-8"},{default:e(()=>[$]),_:1}),a(s,{title:"Decoration",class:"mt-8"},{default:e(()=>[M]),_:1}),a(s,{title:"Opacity",class:"mt-8"},{default:e(()=>[O]),_:1})]),_:1})]),_:1})]),_:1})],64))}});export{j as default};

View File

@@ -1 +1 @@
import{x as n,o,c as i,w as e,a,a8 as d,b as c,K as u,e as p,t as _,a9 as s,A as f,L as V,J as m}from"./index-9075b0bb.js";const C={class:"d-sm-flex align-center justify-space-between"},h=n({__name:"UiParentCard",props:{title:String},setup(l){const r=l;return(t,x)=>(o(),i(m,{variant:"outlined",elevation:"0",class:"withbg"},{default:e(()=>[a(d,null,{default:e(()=>[c("div",C,[a(u,null,{default:e(()=>[p(_(r.title),1)]),_:1}),s(t.$slots,"action")])]),_:3}),a(f),a(V,null,{default:e(()=>[s(t.$slots,"default")]),_:3})]),_:3}))}});export{h as _};
import{x as n,o,c as i,w as e,a,a8 as d,b as c,K as u,e as p,t as _,a9 as s,A as f,L as V,J as m}from"./index-5ac7c267.js";const C={class:"d-sm-flex align-center justify-space-between"},h=n({__name:"UiParentCard",props:{title:String},setup(l){const r=l;return(t,x)=>(o(),i(m,{variant:"outlined",elevation:"0",class:"withbg"},{default:e(()=>[a(d,null,{default:e(()=>[c("div",C,[a(u,null,{default:e(()=>[p(_(r.title),1)]),_:1}),s(t.$slots,"action")])]),_:3}),a(f),a(V,null,{default:e(()=>[s(t.$slots,"default")]),_:3})]),_:3}))}});export{h as _};

View File

[binary image changed — 3.9 KiB before and after]

View File

[binary image changed — 5.5 KiB before and after]

View File

[binary image changed — 3.3 KiB before and after]

View File

[binary image changed — 2.9 KiB before and after]

File diff suppressed because one or more lines are too long

View File

@@ -1,4 +1,4 @@
import{ar as K,as as Y,at as V}from"./index-9075b0bb.js";var C={exports:{}};const $={},k=Object.freeze(Object.defineProperty({__proto__:null,default:$},Symbol.toStringTag,{value:"Module"})),z=K(k);/**
import{as as K,at as Y,au as V}from"./index-5ac7c267.js";var C={exports:{}};const $={},k=Object.freeze(Object.defineProperty({__proto__:null,default:$},Symbol.toStringTag,{value:"Module"})),z=K(k);/**
* [js-md5]{@link https://github.com/emn178/js-md5}
*
* @namespace md5

View File

[binary image changed — 1.2 KiB before and after]

View File

[binary image changed — 1.5 KiB before and after]

View File

@@ -11,7 +11,7 @@
href="https://fonts.googleapis.com/css2?family=Inter:wght@400;500;600;700&family=Poppins:wght@400;500;600;700&family=Roboto:wght@400;500;700&display=swap"
/>
<title>AstrBot - 仪表盘</title>
<script type="module" crossorigin src="/assets/index-9075b0bb.js"></script>
<script type="module" crossorigin src="/assets/index-5ac7c267.js"></script>
<link rel="stylesheet" href="/assets/index-0f1523f3.css">
</head>
<body>

View File

@@ -1,63 +1,44 @@
from addons.dashboard.server import AstrBotDashBoard, DashBoardData
from pydantic import BaseModel
import threading
import asyncio
from . import DashBoardData
from typing import Union, Optional
import uuid
from util import general_utils as gu
from util.cmd_config import CmdConfig
from dataclasses import dataclass
import sys
import os
import threading
import time
import asyncio
from util.plugin_dev.api.v1.config import update_config
from SparkleLogging.utils.core import LogManager
from logging import Logger
from type.types import Context
logger: Logger = LogManager.GetLogger(log_name='astrbot')
@dataclass
class DashBoardConfig():
config_type: str
name: Optional[str] = None
description: Optional[str] = None
path: Optional[str] = None # 仅 item 才需要
body: Optional[list['DashBoardConfig']] = None # 仅 group 才需要
value: Optional[Union[list, dict, str, int, bool]] = None # 仅 item 才需要
val_type: Optional[str] = None # 仅 item 才需要
path: Optional[str] = None # 仅 item 才需要
body: Optional[list['DashBoardConfig']] = None # 仅 group 才需要
value: Optional[Union[list, dict, str, int, bool]] = None # 仅 item 才需要
val_type: Optional[str] = None # 仅 item 才需要
class DashBoardHelper():
def __init__(self, global_object, config: dict):
self.loop = asyncio.new_event_loop()
asyncio.set_event_loop(self.loop)
self.logger = global_object.logger
dashboard_data = global_object.dashboard_data
def __init__(self, context: Context, dashboard_data: DashBoardData):
dashboard_data.configs = {
"data": []
}
self.parse_default_config(dashboard_data, config)
self.dashboard_data: DashBoardData = dashboard_data
self.dashboard = AstrBotDashBoard(global_object)
self.key_map = {} # key: uuid, value: config key name
self.cc = CmdConfig()
@self.dashboard.register("post_configs")
def on_post_configs(post_configs: dict):
try:
# self.logger.log(f"收到配置更新请求", gu.LEVEL_INFO, tag="可视化面板")
if 'base_config' in post_configs:
self.save_config(post_configs['base_config'], namespace='') # 基础配置
self.save_config(post_configs['config'], namespace=post_configs['namespace']) # 选定配置
self.parse_default_config(self.dashboard_data, self.cc.get_all())
# 重启
threading.Thread(target=self.dashboard.shutdown_bot, args=(2,), daemon=True).start()
except Exception as e:
# self.logger.log(f"在保存配置时发生错误:{e}", gu.LEVEL_ERROR, tag="可视化面板")
raise e
self.context = context
self.parse_default_config(dashboard_data, context.base_config)
# 将 config.yaml、 中的配置解析到 dashboard_data.configs 中
def parse_default_config(self, dashboard_data: DashBoardData, config: dict):
try:
qq_official_platform_group = DashBoardConfig(
config_type="group",
name="QQ_OFFICIAL 平台配置",
name="QQ(官方)",
description="",
body=[
DashBoardConfig(
@@ -100,18 +81,26 @@ class DashBoardHelper():
value=config['direct_message_mode'],
path="direct_message_mode",
),
DashBoardConfig(
config_type="item",
val_type="bool",
name="是否接收QQ群消息",
description="需要机器人有相应的群消息接收权限。在 q.qq.com 上查看。",
value=config['qqofficial_enable_group_message'],
path="qqofficial_enable_group_message",
),
]
)
qq_gocq_platform_group = DashBoardConfig(
config_type="group",
name="OneBot协议平台配置",
name="QQ(nakuru)",
description="",
body=[
DashBoardConfig(
config_type="item",
val_type="bool",
name="启用",
description="支持cq-http、shamrock等目前仅支持QQ平台",
description="",
value=config['gocqbot']['enable'],
path="gocqbot.enable",
),
@@ -182,6 +171,38 @@ class DashBoardHelper():
]
)
qq_aiocqhttp_platform_group = DashBoardConfig(
config_type="group",
name="QQ(aiocqhttp)",
description="",
body=[
DashBoardConfig(
config_type="item",
val_type="bool",
name="启用",
description="",
value=config['aiocqhttp']['enable'],
path="aiocqhttp.enable",
),
DashBoardConfig(
config_type="item",
val_type="str",
name="WebSocket 反向连接 host",
description="",
value=config['aiocqhttp']['ws_reverse_host'],
path="aiocqhttp.ws_reverse_host",
),
DashBoardConfig(
config_type="item",
val_type="int",
name="WebSocket 反向连接 port",
description="",
value=config['aiocqhttp']['ws_reverse_port'],
path="aiocqhttp.ws_reverse_port",
),
]
)
general_platform_detail_group = DashBoardConfig(
config_type="group",
name="通用平台配置",
@@ -373,48 +394,7 @@ class DashBoardHelper():
),
]
)
rev_chatgpt_accounts = config['rev_ChatGPT']['account']
new_accs = []
for i in rev_chatgpt_accounts:
if isinstance(i, dict) and 'access_token' in i:
new_accs.append(i['access_token'])
elif isinstance(i, str):
new_accs.append(i)
config['rev_ChatGPT']['account'] = new_accs
rev_chatgpt_llm_group = DashBoardConfig(
config_type="group",
name="逆向语言模型服务设置",
description="",
body=[
DashBoardConfig(
config_type="item",
val_type="bool",
name="启用逆向语言模型服务",
description="",
value=config['rev_ChatGPT']['enable'],
path="rev_ChatGPT.enable",
),
DashBoardConfig(
config_type="item",
val_type="str",
name="终结点Endpoint地址",
description="逆向服务的终结点服务器的地址。",
value=config['CHATGPT_BASE_URL'],
path="CHATGPT_BASE_URL",
),
DashBoardConfig(
config_type="item",
val_type="list",
name="assess_token",
description="assess_token",
value=config['rev_ChatGPT']['account'],
path="rev_ChatGPT.account",
),
]
)
baidu_aip_group = DashBoardConfig(
config_type="group",
name="百度内容审核",
@@ -428,9 +408,6 @@ class DashBoardHelper():
value=config['baidu_aip']['enable'],
path="baidu_aip.enable"
),
# "app_id": null,
# "api_key": null,
# "secret_key": null
DashBoardConfig(
config_type="item",
val_type="str",
@@ -489,27 +466,26 @@ class DashBoardHelper():
),
]
)
dashboard_data.configs['data'] = [
qq_official_platform_group,
qq_gocq_platform_group,
general_platform_detail_group,
openai_official_llm_group,
rev_chatgpt_llm_group,
other_group,
baidu_aip_group
baidu_aip_group,
qq_aiocqhttp_platform_group
]
except Exception as e:
self.logger.log(f"配置文件解析错误:{e}", gu.LEVEL_ERROR)
logger.error(f"配置文件解析错误:{e}")
raise e
def save_config(self, post_config: list, namespace: str):
'''
根据 path 解析并保存配置
'''
queue = post_config
while len(queue) > 0:
config = queue.pop(0)
@@ -519,23 +495,27 @@ class DashBoardHelper():
elif config['config_type'] == "item":
if config['path'] is None or config['path'] == "":
continue
path = config['path'].split('.')
if len(path) == 0:
continue
if config['val_type'] == "bool":
self._write_config(namespace, config['path'], config['value'])
self._write_config(
namespace, config['path'], config['value'])
elif config['val_type'] == "str":
self._write_config(namespace, config['path'], config['value'])
self._write_config(
namespace, config['path'], config['value'])
elif config['val_type'] == "int":
try:
self._write_config(namespace, config['path'], int(config['value']))
self._write_config(
namespace, config['path'], int(config['value']))
except:
raise ValueError(f"配置项 {config['name']} 的值必须是整数")
elif config['val_type'] == "float":
try:
self._write_config(namespace, config['path'], float(config['value']))
self._write_config(
namespace, config['path'], float(config['value']))
except:
raise ValueError(f"配置项 {config['name']} 的值必须是浮点数")
elif config['val_type'] == "list":
@@ -543,16 +523,15 @@ class DashBoardHelper():
self._write_config(namespace, config['path'], [])
elif not isinstance(config['value'], list):
raise ValueError(f"配置项 {config['name']} 的值必须是列表")
self._write_config(namespace, config['path'], config['value'])
self._write_config(
namespace, config['path'], config['value'])
else:
raise NotImplementedError(f"未知或者未实现的配置项类型:{config['val_type']}")
raise NotImplementedError(
f"未知或者未实现的配置项类型:{config['val_type']}")
def _write_config(self, namespace: str, key: str, value):
if namespace == "" or namespace.startswith("internal_"):
# 机器人自带配置,存到 config.yaml
self.cc.put_by_dot_str(key, value)
self.context.config_helper.put_by_dot_str(key, value)
else:
update_config(namespace, key, value)
def run(self):
self.dashboard.run()

View File

@@ -1,59 +1,71 @@
from flask import Flask, request
from flask.logging import default_handler
from werkzeug.serving import make_server
from util import general_utils as gu
from dataclasses import dataclass
import logging
from cores.database.conn import dbConn
from util.cmd_config import CmdConfig
from util.updator import check_update, update_project, request_release_info
from cores.qqbot.types import *
import util.plugin_util as putil
import websockets
import json
import threading
import asyncio
import os, sys
import time
import os
import uuid
import logging
import traceback
from . import DashBoardData, Response
from flask import Flask, request
from werkzeug.serving import make_server
from astrbot.persist.helper import dbConn
from type.types import Context
from typing import List
from SparkleLogging.utils.core import LogManager
from logging import Logger
from dashboard.helper import DashBoardHelper
from util.io import get_local_ip_addresses
from model.plugin.manager import PluginManager
from util.updator.astrbot_updator import AstrBotUpdator
logger: Logger = LogManager.GetLogger(log_name='astrbot')
@dataclass
class DashBoardData():
stats: dict
configs: dict
logs: dict
plugins: List[RegisteredPlugin]
@dataclass
class Response():
status: str
message: str
data: dict
class AstrBotDashBoard():
def __init__(self, global_object: 'gu.GlobalObject'):
self.global_object = global_object
self.loop = asyncio.get_event_loop()
asyncio.set_event_loop(self.loop)
self.dashboard_data: DashBoardData = global_object.dashboard_data
self.dashboard_be = Flask(__name__, static_folder="dist", static_url_path="/")
log = logging.getLogger('werkzeug')
log.setLevel(logging.ERROR)
self.funcs = {}
self.cc = CmdConfig()
self.logger = global_object.logger
self.ws_clients = {} # remote_ip: ws
# 启动 websocket 服务器
self.ws_server = websockets.serve(self.__handle_msg, "0.0.0.0", 6186)
def __init__(self, context: Context, plugin_manager: PluginManager, astrbot_updator: AstrBotUpdator):
self.context = context
self.plugin_manager = plugin_manager
self.astrbot_updator = astrbot_updator
self.dashboard_data = DashBoardData()
self.dashboard_helper = DashBoardHelper(self.context, self.dashboard_data)
self.dashboard_be = Flask(__name__, static_folder="dist", static_url_path="/")
logging.getLogger('werkzeug').setLevel(logging.ERROR)
self.dashboard_be.logger.setLevel(logging.ERROR)
self.ws_clients = {} # remote_ip: ws
self.loop = asyncio.get_event_loop()
self.http_server_thread: threading.Thread = None
@self.dashboard_be.get("/")
def index():
# 返回页面
return self.dashboard_be.send_static_file("index.html")
@self.dashboard_be.get("/config")
def rt_config():
return self.dashboard_be.send_static_file("index.html")
@self.dashboard_be.get("/logs")
def rt_logs():
return self.dashboard_be.send_static_file("index.html")
@self.dashboard_be.get("/extension")
def rt_extension():
return self.dashboard_be.send_static_file("index.html")
@self.dashboard_be.get("/dashboard/default")
def rt_dashboard():
return self.dashboard_be.send_static_file("index.html")
@self.dashboard_be.post("/api/authenticate")
def authenticate():
username = self.cc.get("dashboard_username", "")
password = self.cc.get("dashboard_password", "")
username = self.context.base_config.get("dashboard_username", "")
password = self.context.base_config.get("dashboard_password", "")
# 获得请求体
post_data = request.json
if post_data["username"] == username and post_data["password"] == password:
@@ -71,14 +83,15 @@ class AstrBotDashBoard():
message="用户名或密码错误。",
data=None
).__dict__
@self.dashboard_be.post("/api/change_password")
def change_password():
password = self.cc.get("dashboard_password", "")
password = self.context.base_config("dashboard_password", "")
# 获得请求体
post_data = request.json
if post_data["password"] == password:
self.cc.put("dashboard_password", post_data["new_password"])
self.context.config_helper.put("dashboard_password", post_data["new_password"])
self.context.base_config['dashboard_password'] = post_data["new_password"]
return Response(
status="success",
message="修改成功。",
@@ -99,9 +112,11 @@ class AstrBotDashBoard():
# last_24_platform = db_inst.get_last_24h_stat_platform()
platforms = db_inst.get_platform_cnt_total()
self.dashboard_data.stats["session"] = []
self.dashboard_data.stats["session_total"] = db_inst.get_session_cnt_total()
self.dashboard_data.stats["session_total"] = db_inst.get_session_cnt_total(
)
self.dashboard_data.stats["message"] = last_24_message
self.dashboard_data.stats["message_total"] = db_inst.get_message_cnt_total()
self.dashboard_data.stats["message_total"] = db_inst.get_message_cnt_total(
)
self.dashboard_data.stats["platform"] = platforms
return Response(
@@ -109,7 +124,7 @@ class AstrBotDashBoard():
message="",
data=self.dashboard_data.stats
).__dict__
@self.dashboard_be.get("/api/configs")
def get_configs():
# 如果params中有namespace则返回该namespace下的配置
@@ -121,7 +136,7 @@ class AstrBotDashBoard():
message="",
data=conf
).__dict__
@self.dashboard_be.get("/api/config_outline")
def get_config_outline():
outline = self._generate_outline()
@@ -130,12 +145,12 @@ class AstrBotDashBoard():
message="",
data=outline
).__dict__
@self.dashboard_be.post("/api/configs")
def post_configs():
post_configs = request.json
try:
self.funcs["post_configs"](post_configs)
self.on_post_configs(post_configs)
return Response(
status="success",
message="保存成功~ 机器人将在 2 秒内重启以应用新的配置。",
@@ -147,11 +162,11 @@ class AstrBotDashBoard():
message=e.__str__(),
data=self.dashboard_data.configs
).__dict__
@self.dashboard_be.get("/api/extensions")
def get_plugins():
_plugin_resp = []
for plugin in self.dashboard_data.plugins:
for plugin in self.context.cached_plugins:
_p = plugin.metadata
_t = {
"name": _p.plugin_name,
@@ -166,94 +181,123 @@ class AstrBotDashBoard():
message="",
data=_plugin_resp
).__dict__
@self.dashboard_be.post("/api/extensions/install")
def install_plugin():
post_data = request.json
repo_url = post_data["url"]
try:
self.logger.log(f"正在安装插件 {repo_url}", tag="可视化面板")
putil.install_plugin(repo_url, self.dashboard_data.plugins)
self.logger.log(f"安装插件 {repo_url} 成功", tag="可视化面板")
logger.info(f"正在安装插件 {repo_url}")
self.plugin_manager.install_plugin(repo_url)
logger.info(f"安装插件 {repo_url} 成功")
return Response(
status="success",
message="安装成功~",
data=None
).__dict__
except Exception as e:
logger.error(f"/api/extensions/install: {traceback.format_exc()}")
return Response(
status="error",
message=e.__str__(),
data=None
).__dict__
@self.dashboard_be.post("/api/extensions/upload-install")
def upload_install_plugin():
try:
file = request.files['file']
print(file.filename)
logger.info(f"正在安装用户上传的插件 {file.filename}")
# save file to temp/
file_path = f"temp/{uuid.uuid4()}.zip"
file.save(file_path)
self.plugin_manager.install_plugin_from_file(file_path)
logger.info(f"安装插件 {file.filename} 成功")
return Response(
status="success",
message="安装成功~",
data=None
).__dict__
except Exception as e:
logger.error(f"/api/extensions/upload-install: {traceback.format_exc()}")
return Response(
status="error",
message=e.__str__(),
data=None
).__dict__
@self.dashboard_be.post("/api/extensions/uninstall")
def uninstall_plugin():
post_data = request.json
plugin_name = post_data["name"]
try:
self.logger.log(f"正在卸载插件 {plugin_name}", tag="可视化面板")
putil.uninstall_plugin(plugin_name, self.dashboard_data.plugins)
self.logger.log(f"卸载插件 {plugin_name} 成功", tag="可视化面板")
logger.info(f"正在卸载插件 {plugin_name}")
self.plugin_manager.uninstall_plugin(plugin_name)
logger.info(f"卸载插件 {plugin_name} 成功")
return Response(
status="success",
message="卸载成功~",
data=None
).__dict__
except Exception as e:
logger.error(f"/api/extensions/uninstall: {traceback.format_exc()}")
return Response(
status="error",
message=e.__str__(),
data=None
).__dict__
@self.dashboard_be.post("/api/extensions/update")
def update_plugin():
post_data = request.json
plugin_name = post_data["name"]
try:
self.logger.log(f"正在更新插件 {plugin_name}", tag="可视化面板")
putil.update_plugin(plugin_name, self.dashboard_data.plugins)
self.logger.log(f"更新插件 {plugin_name} 成功", tag="可视化面板")
logger.info(f"正在更新插件 {plugin_name}")
self.plugin_manager.update_plugin(plugin_name)
logger.info(f"更新插件 {plugin_name} 成功")
return Response(
status="success",
message="更新成功~",
data=None
).__dict__
except Exception as e:
logger.error(f"/api/extensions/update: {traceback.format_exc()}")
return Response(
status="error",
message=e.__str__(),
data=None
).__dict__
@self.dashboard_be.post("/api/log")
def log():
for item in self.ws_clients:
try:
asyncio.run_coroutine_threadsafe(self.ws_clients[item].send(request.data.decode()), self.loop)
asyncio.run_coroutine_threadsafe(
self.ws_clients[item].send(request.data.decode()), self.loop).result()
except Exception as e:
pass
return 'ok'
@self.dashboard_be.get("/api/check_update")
def get_update_info():
try:
ret = check_update()
ret = self.astrbot_updator.check_update(None, None)
return Response(
status="success",
message=ret,
message=str(ret) if ret is not None else "已经是最新版本了。",
data={
"has_new_version": ret != "当前已经是最新版本。" # 先这样吧,累了=.=
"has_new_version": ret is not None
}
).__dict__
except Exception as e:
logger.error(f"/api/check_update: {traceback.format_exc()}")
return Response(
status="error",
message=e.__str__(),
data=None
).__dict__
@self.dashboard_be.post("/api/update_project")
def update_project_api():
version = request.json['version']
@@ -262,41 +306,89 @@ class AstrBotDashBoard():
version = ''
else:
latest = False
version = request.json["version"]
try:
update_project(request_release_info(), latest=latest, version=version)
threading.Thread(target=self.shutdown_bot, args=(3,)).start()
self.astrbot_updator.update(latest=latest, version=version)
threading.Thread(target=self.astrbot_updator._reboot, args=(3, )).start()
return Response(
status="success",
message="更新成功,机器人将在 3 秒内重启。",
data=None
).__dict__
except Exception as e:
logger.error(f"/api/update_project: {traceback.format_exc()}")
return Response(
status="error",
message=e.__str__(),
data=None
).__dict__
def shutdown_bot(self, delay_s: int):
time.sleep(delay_s)
py = sys.executable
os.execl(py, py, *sys.argv)
@self.dashboard_be.get("/api/llm/list")
def llm_list():
ret = []
for llm in self.context.llms:
ret.append(llm.llm_name)
return Response(
status="success",
message="",
data=ret
).__dict__
@self.dashboard_be.get("/api/llm")
def llm():
text = request.args["text"]
llm = request.args["llm"]
for llm_ in self.context.llms:
if llm_.llm_name == llm:
try:
ret = asyncio.run_coroutine_threadsafe(
llm_.llm_instance.text_chat(text), self.loop).result()
return Response(
status="success",
message="",
data=ret
).__dict__
except Exception as e:
return Response(
status="error",
message=e.__str__(),
data=None
).__dict__
return Response(
status="error",
message="LLM not found.",
data=None
).__dict__
def on_post_configs(self, post_configs: dict):
try:
if 'base_config' in post_configs:
self.dashboard_helper.save_config(
post_configs['base_config'], namespace='') # 基础配置
self.dashboard_helper.save_config(
post_configs['config'], namespace=post_configs['namespace']) # 选定配置
self.dashboard_helper.parse_default_config(
self.dashboard_data, self.context.config_helper.get_all())
# 重启
threading.Thread(target=self.astrbot_updator._reboot,
args=(2, ), daemon=True).start()
except Exception as e:
raise e
def _get_configs(self, namespace: str):
if namespace == "":
ret = [self.dashboard_data.configs['data'][5],
self.dashboard_data.configs['data'][6],]
ret = [self.dashboard_data.configs['data'][4],
self.dashboard_data.configs['data'][5],]
elif namespace == "internal_platform_qq_official":
ret = [self.dashboard_data.configs['data'][0],]
elif namespace == "internal_platform_qq_gocq":
ret = [self.dashboard_data.configs['data'][1],]
elif namespace == "internal_platform_general": # 全局平台配置
elif namespace == "internal_platform_general": # 全局平台配置
ret = [self.dashboard_data.configs['data'][2],]
elif namespace == "internal_llm_openai_official":
ret = [self.dashboard_data.configs['data'][3],]
elif namespace == "internal_llm_rev_chatgpt":
ret = [self.dashboard_data.configs['data'][4],]
elif namespace == "internal_platform_qq_aiocqhttp":
ret = [self.dashboard_data.configs['data'][6],]
else:
path = f"data/config/{namespace}.json"
if not os.path.exists(path):
@@ -317,28 +409,34 @@ class AstrBotDashBoard():
'''
outline = [
{
"type": "platform",
"name": "配置通用消息平台",
"body": [
{
"title": "通用",
"desc": "通用平台配置",
"namespace": "internal_platform_general",
"tag": ""
},
{
"title": "QQ_OFFICIAL",
"desc": "QQ官方API,仅支持频道",
"namespace": "internal_platform_qq_official",
"tag": ""
},
{
"title": "OneBot协议",
"desc": "支持cq-http、shamrock等目前仅支持QQ平台",
"namespace": "internal_platform_qq_gocq",
"tag": ""
}
]
"type": "platform",
"name": "配置通用消息平台",
"body": [
{
"title": "通用",
"desc": "通用平台配置",
"namespace": "internal_platform_general",
"tag": ""
},
{
"title": "QQ(官方)",
"desc": "QQ官方API支持频道、群、私聊(需获得群权限)",
"namespace": "internal_platform_qq_official",
"tag": ""
},
{
"title": "QQ(nakuru)",
"desc": "适用于 go-cqhttp",
"namespace": "internal_platform_qq_gocq",
"tag": ""
},
{
"title": "QQ(aiocqhttp)",
"desc": "适用于 Lagrange, LLBot, Shamrock 等支持反向WS的协议实现。",
"namespace": "internal_platform_qq_aiocqhttp",
"tag": ""
}
]
},
{
"type": "llm",
@@ -349,17 +447,11 @@ class AstrBotDashBoard():
"desc": "也支持使用官方接口的中转服务",
"namespace": "internal_llm_openai_official",
"tag": ""
},
{
"title": "Rev ChatGPT",
"desc": "早期的逆向ChatGPT不推荐",
"namespace": "internal_llm_rev_chatgpt",
"tag": ""
}
]
}
]
for plugin in self.global_object.cached_plugins:
for plugin in self.context.cached_plugins:
for item in outline:
if item['type'] == plugin.metadata.plugin_type:
item['body'].append({
@@ -370,41 +462,44 @@ class AstrBotDashBoard():
})
return outline
def register(self, name: str):
def decorator(func):
self.funcs[name] = func
return func
return decorator
async def get_log_history(self):
try:
with open("logs/astrbot/astrbot.log", "r", encoding="utf-8") as f:
return f.readlines()[-100:]
except Exception as e:
logger.warning(f"读取日志历史失败: {e.__str__()}")
return []
async def __handle_msg(self, websocket, path):
address = websocket.remote_address
# self.logger.log(f"和 {address} 建立了 websocket 连接", tag="可视化面板")
self.ws_clients[address] = websocket
data = ''.join(self.logger.history).replace('\n', '\r\n')
data = await self.get_log_history()
data = ''.join(data).replace('\n', '\r\n')
await websocket.send(data)
while True:
try:
msg = await websocket.recv()
except websockets.exceptions.ConnectionClosedError:
# self.logger.log(f"和 {address} 的 websocket 连接已断开", tag="可视化面板")
# logger.info(f"和 {address} 的 websocket 连接已断开")
del self.ws_clients[address]
break
except Exception as e:
# self.logger.log(f"和 {path} 的 websocket 连接发生了错误: {e.__str__()}", tag="可视化面板")
# logger.info(f"和 {path} 的 websocket 连接发生了错误: {e.__str__()}")
del self.ws_clients[address]
break
def run_ws_server(self, loop):
asyncio.set_event_loop(loop)
loop.run_until_complete(self.ws_server)
loop.run_forever()
def run(self):
threading.Thread(target=self.run_ws_server, args=(self.loop,)).start()
self.logger.log("已启动 websocket 服务器", tag="可视化面板")
ip_address = gu.get_local_ip_addresses()
ip_str = f"http://{ip_address}:6185\n\thttp://localhost:6185"
self.logger.log(f"\n==================\n您可访问:\n\n\t{ip_str}\n\n来登录可视化面板,默认账号密码为空。\n注意: 所有配置项现已全量迁移至 cmd_config.json 文件下,可登录可视化面板在线修改配置。\n==================\n", tag="可视化面板")
http_server = make_server('0.0.0.0', 6185, self.dashboard_be, threaded=True)
async def ws_server(self):
ws_server = websockets.serve(self.__handle_msg, "0.0.0.0", 6186)
logger.info("WebSocket 服务器已启动。")
await ws_server
def http_server(self):
http_server = make_server(
'0.0.0.0', 6185, self.dashboard_be, threaded=True)
http_server.serve_forever()
def run_http_server(self):
self.http_server_thread = threading.Thread(target=self.http_server, daemon=True).start()
ip_address = get_local_ip_addresses()
ip_str = f"http://{ip_address}:6185"
logger.info(f"HTTP 服务器已启动,可访问: {ip_str} 等来登录可视化面板。")

View File

@@ -1,26 +0,0 @@
{
"llms_claude_cookie": {
"config_type": "item",
"name": "llms_claude_cookie",
"description": "Claude 的 Cookie",
"path": "llms_claude_cookie",
"value": "hihi",
"val_type": "str"
},
"llms_huggingchat_email": {
"config_type": "item",
"name": "llms_huggingchat_email",
"description": "HuggingChat 的邮箱",
"path": "llms_huggingchat_email",
"value": "",
"val_type": "str"
},
"llms_huggingchat_psw": {
"config_type": "item",
"name": "llms_huggingchat_psw",
"description": "HuggingChat 的密码",
"path": "llms_huggingchat_psw",
"value": "",
"val_type": "str"
}
}

main.py (125 lines changed)
View File

@@ -1,96 +1,55 @@
import os, sys
from pip._internal import main as pipmain
import os
import asyncio
import sys
import warnings
import traceback
import threading
from astrbot.bootstrap import AstrBotBootstrap
from SparkleLogging.utils.core import LogManager
from logging import Formatter
warnings.filterwarnings("ignore")
abs_path = os.path.dirname(os.path.realpath(sys.argv[0])) + '/'
logo_tmpl = """
___ _______.___________..______ .______ ______ .___________.
/ \ / | || _ \ | _ \ / __ \ | |
/ ^ \ | (----`---| |----`| |_) | | |_) | | | | | `---| |----`
/ /_\ \ \ \ | | | / | _ < | | | | | |
/ _____ \ .----) | | | | |\ \----.| |_) | | `--' | | |
/__/ \__\ |_______/ |__| | _| `._____||______/ \______/ |__|
"""
def main():
# config.yaml 配置文件加载和环境确认
global logger
try:
import cores.qqbot.core as qqBot
import yaml
import util.general_utils as gu
ymlfile = open(abs_path+"configs/config.yaml", 'r', encoding='utf-8')
cfg = yaml.safe_load(ymlfile)
except ImportError as import_error:
traceback.print_exc()
print(import_error)
input("第三方库未完全安装完毕,请退出程序重试")
except FileNotFoundError as file_not_found:
print(file_not_found)
input("配置文件不存在,请检查是否已经下载配置文件。")
import botpy, logging
# delete qqbotpy's logger
for handler in logging.root.handlers[:]:
logging.root.removeHandler(handler)
bootstrap = AstrBotBootstrap()
asyncio.run(bootstrap.run())
except KeyboardInterrupt:
logger.info("AstrBot 已退出")
except BaseException as e:
raise e
# 设置代理
if 'http_proxy' in cfg and cfg['http_proxy'] != '':
os.environ['HTTP_PROXY'] = cfg['http_proxy']
if 'https_proxy' in cfg and cfg['https_proxy'] != '':
os.environ['HTTPS_PROXY'] = cfg['https_proxy']
os.environ['NO_PROXY'] = 'https://api.sgroup.qq.com'
logger.error(traceback.format_exc())
# 检查并创建 temp 文件夹
if not os.path.exists(abs_path + "temp"):
os.mkdir(abs_path+"temp")
if not os.path.exists(abs_path + "data"):
os.mkdir(abs_path+"data")
if not os.path.exists(abs_path + "data/config"):
os.mkdir(abs_path+"data/config")
# 启动主程序cores/qqbot/core.py
qqBot.init(cfg)
def check_env(ch_mirror=False):
def check_env():
if not (sys.version_info.major == 3 and sys.version_info.minor >= 9):
print("请使用Python3.9+运行本项目")
input("按任意键退出...")
logger.error("请使用 Python3.9+ 运行本项目")
exit()
os.makedirs("data/config", exist_ok=True)
os.makedirs("temp", exist_ok=True)
if os.path.exists('requirements.txt'):
pth = 'requirements.txt'
else:
pth = 'QQChannelChatGPT'+ os.sep +'requirements.txt'
print("正在检查或下载第三方库,请耐心等待...")
try:
if ch_mirror:
print("使用阿里云镜像")
pipmain(['install', '-r', pth, '-i', 'https://mirrors.aliyun.com/pypi/simple/'])
else:
pipmain(['install', '-r', pth])
except BaseException as e:
print(e)
while True:
res = input("安装失败。\n如报错ValueError: check_hostname requires server_hostname请尝试先关闭代理后重试。\n1.输入y回车重试\n2. 输入c回车使用国内镜像源下载\n3. 输入其他按键回车继续往下执行。")
if res == "y":
try:
pipmain(['install', '-r', pth])
break
except BaseException as e:
print(e)
continue
elif res == "c":
try:
pipmain(['install', '-r', pth, '-i', 'https://mirrors.aliyun.com/pypi/simple/'])
break
except BaseException as e:
print(e)
continue
else:
break
print("第三方库检查完毕。")
if __name__ == "__main__":
args = sys.argv
if '-cn' in args:
check_env(True)
else:
check_env()
check_env()
t = threading.Thread(target=main, daemon=False)
t.start()
t.join()
logger = LogManager.GetLogger(
log_name='astrbot',
out_to_console=True,
custom_formatter=Formatter('[%(asctime)s| %(name)s - %(levelname)s|%(filename)s:%(lineno)d]: %(message)s', datefmt="%H:%M:%S")
)
logger.info(logo_tmpl)
main()

View File

@@ -1,310 +0,0 @@
import json
import inspect
import aiohttp
import asyncio
import util.plugin_util as putil
import util.updator
from nakuru.entities.components import (
    Image
)
from util import general_utils as gu
from model.provider.provider import Provider
from util.cmd_config import CmdConfig as cc
from util.general_utils import Logger
from cores.qqbot.types import (
    GlobalObject,
    AstrMessageEvent,
    PluginType,
    CommandResult,
    RegisteredPlugin,
    RegisteredPlatform
)
from typing import List, Tuple

PLATFORM_QQCHAN = 'qqchan'
PLATFORM_GOCQ = 'gocq'


# Base class for command handling. General commands (shared across all LLM
# providers) are implemented here.
class Command:
    def __init__(self, provider: Provider, global_object: GlobalObject = None):
        self.provider = provider
        self.global_object = global_object
        self.logger: Logger = global_object.logger

    async def check_command(self,
                            message,
                            session_id: str,
                            role: str,
                            platform: RegisteredPlatform,
                            message_obj):
        self.platform = platform
        # Plugins
        cached_plugins = self.global_object.cached_plugins
        # Wrap the message into an AstrMessageEvent object
        ame = AstrMessageEvent(
            message_str=message,
            message_obj=message_obj,
            platform=platform,
            role=role,
            context=self.global_object,
            session_id=session_id
        )
        # Check the loaded plugins for a matching command
        for plugin in cached_plugins:
            # Skip platform-type plugins
            if plugin.metadata.plugin_type == PluginType.PLATFORM:
                continue
            try:
                if inspect.iscoroutinefunction(plugin.plugin_instance.run):
                    result = await plugin.plugin_instance.run(ame)
                else:
                    result = await asyncio.to_thread(plugin.plugin_instance.run, ame)
                if isinstance(result, CommandResult):
                    hit = result.hit
                    res = result._result_tuple()
                elif isinstance(result, tuple):
                    hit = result[0]
                    res = result[1]
                else:
                    raise TypeError("插件返回值格式错误。")
                if hit:
                    return True, res
            except TypeError as e:
                # Signature mismatch; fall back to the legacy argument scheme
                try:
                    if inspect.iscoroutinefunction(plugin.plugin_instance.run):
                        hit, res = await plugin.plugin_instance.run(message, role, platform, message_obj, self.global_object.platform_qq)
                    else:
                        hit, res = await asyncio.to_thread(plugin.plugin_instance.run, message, role, platform, message_obj, self.global_object.platform_qq)
                    if hit:
                        return True, res
                except BaseException as e:
                    self.logger.log(f"{plugin.metadata.plugin_name} 插件异常,原因: {str(e)}\n如果你没有安装相关插件的想法, 请直接忽略此报错, 不影响其他功能的运行。", level=gu.LEVEL_WARNING)
            except BaseException as e:
                self.logger.log(f"{plugin.metadata.plugin_name} 插件异常,原因: {str(e)}\n如果你没有安装相关插件的想法, 请直接忽略此报错, 不影响其他功能的运行。", level=gu.LEVEL_WARNING)

        if self.command_start_with(message, "nick"):
            return True, self.set_nick(message, platform, role)
        if self.command_start_with(message, "plugin"):
            return True, self.plugin_oper(message, role, cached_plugins, platform)
        if self.command_start_with(message, "myid") or self.command_start_with(message, "!myid"):
            return True, self.get_my_id(message_obj, platform)
        if self.command_start_with(message, "web"):  # web search
            return True, self.web_search(message)
        if not self.provider and self.command_start_with(message, "help"):
            return True, await self.help()
        return False, None

    def web_search(self, message):
        l = message.split(' ')
        if len(l) == 1:
            return True, f"网页搜索功能当前状态: {self.global_object.web_search}", "web"
        elif l[1] == 'on':
            self.global_object.web_search = True
            return True, "已开启网页搜索", "web"
        elif l[1] == 'off':
            self.global_object.web_search = False
            return True, "已关闭网页搜索", "web"

    def get_my_id(self, message_obj, platform):
        try:
            user_id = str(message_obj.user_id)
            return True, f"你在此平台上的ID: {user_id}", "plugin"
        except BaseException as e:
            return False, f"在{platform}上获取你的ID失败,原因: {str(e)}", "plugin"

    def get_new_conf(self, message, role):
        if role != "admin":
            return False, f"你的身份组{role}没有权限使用此指令。", "newconf"
        l = message.split(" ")
        if len(l) <= 1:
            obj = cc.get_all()
            p = gu.create_text_image("【cmd_config.json】", json.dumps(obj, indent=4, ensure_ascii=False))
            return True, [Image.fromFileSystem(p)], "newconf"

    '''
    Plugin commands
    '''
    def plugin_oper(self, message: str, role: str, cached_plugins: List[RegisteredPlugin], platform: str):
        l = message.split(" ")
        if len(l) < 2:
            p = gu.create_text_image("【插件指令面板】", "安装插件: \nplugin i 插件Github地址\n卸载插件: \nplugin d 插件名 \n重载插件: \nplugin reload\n查看插件列表:\nplugin l\n更新插件: plugin u 插件名\n")
            return True, [Image.fromFileSystem(p)], "plugin"
        else:
            if l[1] == "i":
                if role != "admin":
                    return False, f"你的身份组{role}没有权限安装插件", "plugin"
                try:
                    putil.install_plugin(l[2], cached_plugins)
                    return True, "插件拉取并载入成功~", "plugin"
                except BaseException as e:
                    return False, f"拉取插件失败,原因: {str(e)}", "plugin"
            elif l[1] == "d":
                if role != "admin":
                    return False, f"你的身份组{role}没有权限删除插件", "plugin"
                try:
                    putil.uninstall_plugin(l[2], cached_plugins)
                    return True, "插件卸载成功~", "plugin"
                except BaseException as e:
                    return False, f"卸载插件失败,原因: {str(e)}", "plugin"
            elif l[1] == "u":
                try:
                    putil.update_plugin(l[2], cached_plugins)
                    return True, "\n更新插件成功!!", "plugin"
                except BaseException as e:
                    return False, f"更新插件失败,原因: {str(e)}\n建议: 使用 plugin i 指令进行覆盖安装(插件数据可能会丢失)", "plugin"
            elif l[1] == "l":
                try:
                    plugin_list_info = ""
                    for plugin in cached_plugins:
                        plugin_list_info += f"{plugin.metadata.plugin_name}: \n名称: {plugin.metadata.plugin_name}\n简介: {plugin.metadata.plugin_desc}\n版本: {plugin.metadata.version}\n作者: {plugin.metadata.author}\n"
                    p = gu.create_text_image("【已激活插件列表】", plugin_list_info + "\n使用 plugin v 插件名 查看插件帮助\n")
                    return True, [Image.fromFileSystem(p)], "plugin"
                except BaseException as e:
                    return False, f"获取插件列表失败,原因: {str(e)}", "plugin"
            elif l[1] == "v":
                try:
                    info = None
                    for i in cached_plugins:
                        if i.metadata.plugin_name == l[2]:
                            info = i.metadata
                            break
                    if info:
                        p = gu.create_text_image("【插件信息】", f"名称: {info['name']}\n{info['desc']}\n版本: {info['version']}\n作者: {info['author']}\n\n帮助:\n{info['help']}")
                        return True, [Image.fromFileSystem(p)], "plugin"
                    else:
                        return False, "未找到该插件", "plugin"
                except BaseException as e:
                    return False, f"获取插件信息失败,原因: {str(e)}", "plugin"

    '''
    nick: set the bot's nickname(s)
    '''
    def set_nick(self, message: str, platform: str, role: str = "member"):
        if role != "admin":
            return True, "你无权使用该指令 :P", "nick"
        if platform == PLATFORM_GOCQ:
            l = message.split(" ")
            if len(l) == 1:
                return True, "【设置机器人昵称】示例:\n支持多昵称\nnick 昵称1 昵称2 昵称3", "nick"
            nick = l[1:]
            cc.put("nick_qq", nick)
            self.global_object.nick = tuple(nick)
            return True, "设置成功!现在你可以叫我这些昵称来提问我啦~", "nick"
        elif platform == PLATFORM_QQCHAN:
            return False, "QQ频道平台不支持为机器人设置昵称。", "nick"

    def general_commands(self):
        return {
            "help": "帮助",
            "keyword": "设置关键词/关键指令回复",
            "update": "更新项目",
            "nick": "设置机器人昵称",
            "plugin": "插件安装、卸载和重载",
            "web on/off": "LLM 网页搜索能力",
            "reset": "重置 LLM 对话",
            "/gpt": "切换到 OpenAI 官方接口",
            "/revgpt": "切换到网页版ChatGPT",
        }

    async def help_messager(self, commands: dict, platform: str, cached_plugins: List[RegisteredPlugin] = None):
        try:
            async with aiohttp.ClientSession() as session:
                async with session.get("https://soulter.top/channelbot/notice.json") as resp:
                    notice = (await resp.json())["notice"]
        except BaseException as e:
            notice = ""
        msg = "# Help Center\n## 指令列表\n"
        for key, value in commands.items():
            msg += f"`{key}` - {value}\n"
        # Plugin commands
        if cached_plugins is not None:
            plugin_list_info = ""
            for plugin in cached_plugins:
                plugin_list_info += f"`{plugin.metadata.plugin_name}` {plugin.metadata.desc}\n"
            if plugin_list_info.strip() != "":
                msg += "\n## 插件列表\n> 使用 plugin v 插件名 查看插件帮助\n"
                msg += plugin_list_info
        msg += notice
        try:
            p = gu.create_markdown_image(msg)
            return [Image.fromFileSystem(p)]
        except BaseException as e:
            self.logger.log(str(e))
            return msg

    def command_start_with(self, message: str, *args):
        '''
        Return True when the message starts with any of the given commands,
        with or without a leading '/'.
        '''
        for arg in args:
            if message.startswith(arg) or message.startswith('/' + arg):
                return True
        return False

    def update(self, message: str, role: str):
        if role != "admin":
            return True, "你没有权限使用该指令", "update"
        l = message.split(" ")
        if len(l) == 1:
            try:
                update_info = util.updator.check_update()
                update_info += "\nTips:\n输入「update latest」更新到最新版本\n输入「update <版本号如v3.1.3>」切换到指定版本\n输入「update r」重启机器人\n"
                return True, update_info, "update"
            except BaseException as e:
                return False, "检查更新失败: " + str(e), "update"
        else:
            if l[1] == "latest":
                try:
                    release_data = util.updator.request_release_info()
                    util.updator.update_project(release_data)
                    return True, "更新成功,重启生效。可输入「update r」重启", "update"
                except BaseException as e:
                    return False, "更新失败: " + str(e), "update"
            elif l[1] == "r":
                util.updator._reboot()
            else:
                if l[1].lower().startswith('v'):
                    try:
                        release_data = util.updator.request_release_info(latest=False)
                        util.updator.update_project(release_data, latest=False, version=l[1])
                        return True, "更新成功,重启生效。可输入「update r」重启", "update"
                    except BaseException as e:
                        return False, "更新失败: " + str(e), "update"
                else:
                    return False, "版本号格式错误", "update"

    def reset(self):
        return False

    def set(self):
        return False

    def unset(self):
        return False

    def key(self):
        return False

    async def help(self):
        ret = await self.help_messager(self.general_commands(), self.platform, self.global_object.cached_plugins)
        return True, ret, "help"

    def status(self):
        return False

    def token(self):
        return False

    def his(self):
        return False

    def draw(self):
        return False
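The prefix matching above means every command also responds to a slash-prefixed form. A minimal standalone sketch of that behavior (a hypothetical free-function copy of `Command.command_start_with`, for illustration only):

```python
# Mirrors the matching rule in Command.command_start_with above:
# a message triggers a command either bare or with a leading '/'.
def command_start_with(message: str, *args) -> bool:
    for arg in args:
        if message.startswith(arg) or message.startswith('/' + arg):
            return True
    return False

print(command_start_with("/help me", "help"))  # matches via the '/help' form
print(command_start_with("web on", "web"))     # matches the bare prefix
print(command_start_with("hello", "help"))     # no match
```

Note that this is pure prefix matching: a message like "helpless" would also trigger "help", which is why the refactor below moves to a tokenizing `CommandParser`.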


@@ -0,0 +1,262 @@
import aiohttp
from model.command.manager import CommandManager
from model.plugin.manager import PluginManager
from type.message_event import AstrMessageEvent
from type.command import CommandResult
from type.types import Context
from type.config import VERSION
from SparkleLogging.utils.core import LogManager
from logging import Logger
from nakuru.entities.components import Image

logger: Logger = LogManager.GetLogger(log_name='astrbot')


class InternalCommandHandler:
    def __init__(self, manager: CommandManager, plugin_manager: PluginManager) -> None:
        self.manager = manager
        self.plugin_manager = plugin_manager

        self.manager.register("help", "查看帮助", 10, self.help)
        self.manager.register("wake", "设置机器人唤醒词", 10, self.set_nick)
        self.manager.register("update", "更新 AstrBot", 10, self.update)
        self.manager.register("plugin", "插件管理", 10, self.plugin)
        self.manager.register("reboot", "重启 AstrBot", 10, self.reboot)
        self.manager.register("websearch", "网页搜索开关", 10, self.web_search)
        self.manager.register("t2i", "文本转图片开关", 10, self.t2i_toggle)
        self.manager.register("myid", "获取你在此平台上的ID", 10, self.myid)
        self.manager.register("provider", "查看和切换当前使用的 LLM 资源来源", 10, self.provider)

    def provider(self, message: AstrMessageEvent, context: Context):
        if len(context.llms) == 0:
            return CommandResult().message("当前没有加载任何 LLM 资源。")
        tokens = self.manager.command_parser.parse(message.message_str)
        if tokens.len == 1:
            ret = "## 当前载入的 LLM 资源\n"
            for idx, llm in enumerate(context.llms):
                ret += f"{idx}. {llm.llm_name}"
                if llm.origin:
                    ret += f" (来源: {llm.origin})"
                if context.message_handler.provider == llm.llm_instance:
                    ret += " (当前使用)"
                ret += "\n"
            ret += "\n使用 provider <序号> 切换 LLM 资源。"
            return CommandResult().message(ret)
        else:
            try:
                idx = int(tokens.get(1))
                if idx >= len(context.llms):
                    return CommandResult().message("provider: 无效的序号。")
                context.message_handler.set_provider(context.llms[idx].llm_instance)
                return CommandResult().message(f"已经成功切换到 LLM 资源 {context.llms[idx].llm_name}")
            except BaseException as e:
                return CommandResult().message("provider: 参数错误。")

    def set_nick(self, message: AstrMessageEvent, context: Context):
        message_str = message.message_str
        if message.role != "admin":
            return CommandResult().message("你没有权限使用该指令。")
        l = message_str.split(" ")
        if len(l) == 1:
            return CommandResult().message("设置机器人唤醒词,支持多唤醒词。以唤醒词开头的消息会唤醒机器人处理,起到 @ 的效果。\n示例:wake 昵称1 昵称2 昵称3")
        nick = l[1:]
        context.config_helper.put("nick_qq", nick)
        context.nick = tuple(nick)
        return CommandResult(
            hit=True,
            success=True,
            message_chain=f"已经成功将唤醒词设定为 {nick}",
        )

    def update(self, message: AstrMessageEvent, context: Context):
        tokens = self.manager.command_parser.parse(message.message_str)
        if message.role != "admin":
            return CommandResult(
                hit=True,
                success=False,
                message_chain="你没有权限使用该指令",
            )
        update_info = context.updator.check_update(None, None)
        if tokens.len == 1:
            ret = ""
            if not update_info:
                ret = f"当前已经是最新版本 v{VERSION}"
            else:
                ret = f"发现新版本 {update_info.version},更新内容如下:\n---\n{update_info.body}\n---\n- 使用 /update latest 更新到最新版本。\n- 使用 /update vX.X.X 更新到指定版本。"
            return CommandResult(
                hit=True,
                success=False,
                message_chain=ret,
            )
        else:
            if tokens.get(1) == "latest":
                try:
                    context.updator.update()
                    return CommandResult().message(f"已经成功更新到最新版本 v{update_info.version}。要应用更新,请重启 AstrBot。输入 /reboot 即可重启")
                except BaseException as e:
                    return CommandResult().message(f"更新失败。原因:{str(e)}")
            elif tokens.get(1).startswith("v"):
                try:
                    context.updator.update(version=tokens.get(1))
                    return CommandResult().message(f"已经成功更新到版本 v{tokens.get(1)}。要应用更新,请重启 AstrBot。输入 /reboot 即可重启")
                except BaseException as e:
                    return CommandResult().message(f"更新失败。原因:{str(e)}")
            else:
                return CommandResult().message("update: 参数错误。")

    def reboot(self, message: AstrMessageEvent, context: Context):
        if message.role != "admin":
            return CommandResult(
                hit=True,
                success=False,
                message_chain="你没有权限使用该指令",
            )
        context.updator._reboot(5)
        return CommandResult(
            hit=True,
            success=True,
            message_chain="AstrBot 将在 5s 后重启。",
        )

    def plugin(self, message: AstrMessageEvent, context: Context):
        tokens = self.manager.command_parser.parse(message.message_str)
        if tokens.len == 1:
            ret = "# 插件指令面板 \n- 安装插件: `plugin i 插件Github地址`\n- 卸载插件: `plugin d 插件名`\n- 查看插件列表:`plugin l`\n- 更新插件: `plugin u 插件名`\n"
            return CommandResult().message(ret)
        if tokens.get(1) == "l":
            plugin_list_info = ""
            for plugin in context.cached_plugins:
                plugin_list_info += f"- `{plugin.metadata.plugin_name}` By {plugin.metadata.author}: {plugin.metadata.desc}\n"
            if plugin_list_info.strip() == "":
                return CommandResult().message("plugin l: 没有找到插件。")
            return CommandResult().message(plugin_list_info)
        elif tokens.get(1) == "d":
            if message.role != "admin":
                return CommandResult().message("plugin d: 你没有权限使用该指令。")
            if tokens.len == 2:
                return CommandResult().message("plugin d: 请指定要卸载的插件名。")
            plugin_name = tokens.get(2)
            try:
                self.plugin_manager.uninstall_plugin(plugin_name)
            except BaseException as e:
                return CommandResult().message(f"plugin d: 卸载插件失败。原因:{str(e)}")
            return CommandResult().message(f"plugin d: 已经成功卸载插件 {plugin_name}")
        elif tokens.get(1) == "i":
            if message.role != "admin":
                return CommandResult().message("plugin i: 你没有权限使用该指令。")
            if tokens.len == 2:
                return CommandResult().message("plugin i: 请指定要安装的插件的 Github 地址,或者前往可视化面板安装。")
            plugin_url = tokens.get(2)
            try:
                self.plugin_manager.install_plugin(plugin_url)
            except BaseException as e:
                return CommandResult().message(f"plugin i: 安装插件失败。原因:{str(e)}")
            return CommandResult().message("plugin i: 已经成功安装插件。")
        elif tokens.get(1) == "u":
            if message.role != "admin":
                return CommandResult().message("plugin u: 你没有权限使用该指令。")
            if tokens.len == 2:
                return CommandResult().message("plugin u: 请指定要更新的插件名。")
            plugin_name = tokens.get(2)
            try:
                self.plugin_manager.update_plugin(plugin_name)
            except BaseException as e:
                return CommandResult().message(f"plugin u: 更新插件失败。原因:{str(e)}")
            return CommandResult().message(f"plugin u: 已经成功更新插件 {plugin_name}")
        return CommandResult().message("plugin: 参数错误。")

    async def help(self, message: AstrMessageEvent, context: Context):
        notice = ""
        try:
            async with aiohttp.ClientSession() as session:
                async with session.get("https://soulter.top/channelbot/notice.json") as resp:
                    notice = (await resp.json())["notice"]
        except BaseException as e:
            logger.warning("An error occurred while fetching astrbot notice. Never mind, it's not important.")
        msg = "# Help Center\n## 指令列表\n"
        for key, value in self.manager.commands_handler.items():
            if value.plugin_metadata:
                msg += f"- `{key}` ({value.plugin_metadata.plugin_name}): {value.description}\n"
            else:
                msg += f"- `{key}`: {value.description}\n"
        # Plugin list
        if context.cached_plugins is not None:
            plugin_list_info = ""
            for plugin in context.cached_plugins:
                plugin_list_info += f"- `{plugin.metadata.plugin_name}` {plugin.metadata.desc}\n"
            if plugin_list_info.strip() != "":
                msg += "\n## 插件列表\n> 使用 plugin v 插件名 查看插件帮助\n"
                msg += plugin_list_info
        msg += notice
        return CommandResult().message(msg)

    def web_search(self, message: AstrMessageEvent, context: Context):
        l = message.message_str.split(' ')
        if len(l) == 1:
            return CommandResult(
                hit=True,
                success=True,
                message_chain=f"网页搜索功能当前状态: {context.web_search}",
            )
        elif l[1] == 'on':
            context.web_search = True
            return CommandResult(
                hit=True,
                success=True,
                message_chain="已开启网页搜索",
            )
        elif l[1] == 'off':
            context.web_search = False
            return CommandResult(
                hit=True,
                success=True,
                message_chain="已关闭网页搜索",
            )
        else:
            return CommandResult(
                hit=True,
                success=False,
                message_chain="参数错误",
            )

    def t2i_toggle(self, message: AstrMessageEvent, context: Context):
        p = context.config_helper.get("qq_pic_mode", True)
        if p:
            context.config_helper.put("qq_pic_mode", False)
            return CommandResult(
                hit=True,
                success=True,
                message_chain="已关闭文本转图片模式。",
            )
        context.config_helper.put("qq_pic_mode", True)
        return CommandResult(
            hit=True,
            success=True,
            message_chain="已开启文本转图片模式。",
        )

    def myid(self, message: AstrMessageEvent, context: Context):
        try:
            user_id = str(message.message_obj.sender.user_id)
            return CommandResult(
                hit=True,
                success=True,
                message_chain=f"你在此平台上的ID: {user_id}",
            )
        except BaseException as e:
            return CommandResult(
                hit=True,
                success=False,
                message_chain=f"在 {message.platform} 上获取你的ID失败,原因: {str(e)}",
            )

model/command/manager.py Normal file

@@ -0,0 +1,107 @@
import heapq
import inspect
import traceback
from typing import Dict
from type.types import Context
from type.plugin import PluginMetadata
from type.message_event import AstrMessageEvent
from type.command import CommandResult
from type.register import RegisteredPlugins
from model.command.parser import CommandParser
from model.plugin.command import PluginCommandBridge
from SparkleLogging.utils.core import LogManager
from logging import Logger
from dataclasses import dataclass

logger: Logger = LogManager.GetLogger(log_name='astrbot')


@dataclass
class CommandMetadata():
    inner_command: bool
    plugin_metadata: PluginMetadata
    handler: callable
    description: str


class CommandManager():
    def __init__(self):
        self.commands = []
        self.commands_handler: Dict[str, CommandMetadata] = {}
        self.command_parser = CommandParser()

    def register(self,
                 command: str,
                 description: str,
                 priority: int,
                 handler: callable,
                 plugin_metadata: PluginMetadata = None,
                 ):
        '''
        Commands with higher priority are handled first.
        '''
        if command in self.commands_handler:
            raise ValueError(f"Command {command} already exists.")
        if not handler:
            raise ValueError(f"Handler of {command} is None.")
        heapq.heappush(self.commands, (-priority, command))
        self.commands_handler[command] = CommandMetadata(
            inner_command=plugin_metadata is None,
            plugin_metadata=plugin_metadata,
            handler=handler,
            description=description
        )
        if plugin_metadata:
            logger.info(f"已注册 {plugin_metadata.author}/{plugin_metadata.plugin_name} 的指令 {command}")
        else:
            logger.info(f"已注册指令 {command}")

    def register_from_pcb(self, pcb: PluginCommandBridge):
        for request in pcb.plugin_commands_waitlist:
            plugin = None
            for registered_plugin in pcb.cached_plugins:
                if registered_plugin.metadata.plugin_name == request.plugin_name:
                    plugin = registered_plugin
                    break
            if not plugin:
                logger.warning(f"插件 {request.plugin_name} 未找到,无法注册指令 {request.command_name}")
                continue  # skip this request instead of dereferencing a missing plugin
            self.register(request.command_name, request.description, request.priority, request.handler, plugin.metadata)
        pcb.plugin_commands_waitlist = []

    async def scan_command(self, message_event: AstrMessageEvent, context: Context) -> CommandResult:
        message_str = message_event.message_str
        for _, command in self.commands:
            if message_str.startswith(command):
                logger.info(f"触发 {command} 指令。")
                command_result = await self.execute_handler(command, message_event, context)
                if command_result.hit:
                    return command_result

    async def execute_handler(self,
                              command: str,
                              message_event: AstrMessageEvent,
                              context: Context) -> CommandResult:
        command_metadata = self.commands_handler[command]
        handler = command_metadata.handler
        # Call the handler, awaiting it if it is a coroutine function
        try:
            if inspect.iscoroutinefunction(handler):
                command_result = await handler(message_event, context)
            else:
                command_result = handler(message_event, context)
            if not isinstance(command_result, CommandResult):
                raise ValueError(f"Command {command} handler should return CommandResult.")
            context.metrics_uploader.command_stats[command] += 1
            return command_result
        except BaseException as e:
            logger.error(traceback.format_exc())
            if not command_metadata.inner_command:
                text = f"执行 {command} ({command_metadata.plugin_metadata.plugin_name} By {command_metadata.plugin_metadata.author}) 指令时发生了异常。{e}"
            else:
                text = f"执行 {command} 指令时发生了异常。{e}"
            logger.error(text)
            return CommandResult().message(text)
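The priority scheme in `CommandManager.register` stores `(-priority, command)` tuples in a heap, so the smallest tuple (the highest priority) sorts first. A self-contained sketch of that ordering, using illustrative command names:

```python
import heapq

# Push commands the way register() does: negated priority as the sort key.
commands = []
for command, priority in [("help", 10), ("plugin", 10), ("reset", 20)]:
    heapq.heappush(commands, (-priority, command))

# Sorting the heap entries yields commands from highest to lowest priority;
# equal priorities fall back to lexicographic order on the command name.
ordered = [cmd for _, cmd in sorted(commands)]
print(ordered)
```

Note that `scan_command` above iterates the raw heap list, which only guarantees the first element is the highest-priority command; sorting (as here) or repeated `heappop` calls are needed for a fully ordered traversal.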


@@ -1,273 +0,0 @@
from model.command.command import Command
from model.provider.openai_official import ProviderOpenAIOfficial
from cores.qqbot.personality import personalities
from cores.qqbot.types import GlobalObject


class CommandOpenAIOfficial(Command):
    def __init__(self, provider: ProviderOpenAIOfficial, global_object: GlobalObject):
        self.provider = provider
        self.global_object = global_object
        self.personality_str = ""
        super().__init__(provider, global_object)

    async def check_command(self,
                            message: str,
                            session_id: str,
                            role: str,
                            platform: str,
                            message_obj):
        self.platform = platform
        # Check the shared base commands first
        hit, res = await super().check_command(
            message,
            session_id,
            role,
            platform,
            message_obj
        )
        if hit:
            return True, res
        # Commands specific to this LLM provider
        if self.command_start_with(message, "reset", "重置"):
            return True, await self.reset(session_id, message)
        elif self.command_start_with(message, "his", "历史"):
            return True, self.his(message, session_id)
        elif self.command_start_with(message, "token"):
            return True, self.token(session_id)
        elif self.command_start_with(message, "gpt"):
            return True, self.gpt()
        elif self.command_start_with(message, "status"):
            return True, self.status()
        elif self.command_start_with(message, "help", "帮助"):
            return True, await self.help()
        elif self.command_start_with(message, "unset"):
            return True, self.unset(session_id)
        elif self.command_start_with(message, "set"):
            return True, self.set(message, session_id)
        elif self.command_start_with(message, "update"):
            return True, self.update(message, role)
        elif self.command_start_with(message, "画", "draw"):
            return True, await self.draw(message)
        elif self.command_start_with(message, "key"):
            return True, self.key(message)
        elif self.command_start_with(message, "switch"):
            return True, await self.switch(message)
        return False, None

    async def help(self):
        commands = super().general_commands()
        commands['画'] = '画画'
        commands['key'] = '添加OpenAI key'
        commands['set'] = '人格设置面板'
        commands['gpt'] = '查看gpt配置信息'
        commands['status'] = '查看key使用状态'
        commands['token'] = '查看本轮会话token'
        return True, await super().help_messager(commands, self.platform, self.global_object.cached_plugins), "help"

    async def reset(self, session_id: str, message: str = "reset"):
        if self.provider is None:
            return False, "未启用 OpenAI 官方 API", "reset"
        l = message.split(" ")
        if len(l) == 1:
            await self.provider.forget(session_id)
            return True, "重置成功", "reset"
        if len(l) == 2 and l[1] == "p":
            await self.provider.forget(session_id)
            if self.personality_str != "":
                self.set(self.personality_str, session_id)  # reapply the saved personality
            return True, "重置成功", "reset"

    def his(self, message: str, session_id: str):
        if self.provider is None:
            return False, "未启用 OpenAI 官方 API", "his"
        # Paginated history, a few records per page
        msg = ''
        size_per_page = 3
        page = 1
        if message[4:]:
            page = int(message[4:])
        # Check whether this session has any history
        if session_id not in self.provider.session_dict:
            msg = "历史记录为空"
            return True, msg, "his"
        l = self.provider.session_dict[session_id]
        max_page = len(l) // size_per_page + 1 if len(l) % size_per_page != 0 else len(l) // size_per_page
        p = self.provider.get_prompts_by_cache_list(self.provider.session_dict[session_id], divide=True, paging=True, size=size_per_page, page=page)
        return True, f"历史记录如下:\n{p}\n第{page}页 | 共{max_page}页\n*输入/his 2 跳转到第2页", "his"

    def token(self, session_id: str):
        if self.provider is None:
            return False, "未启用 OpenAI 官方 API", "token"
        return True, f"会话的token数: {self.provider.get_user_usage_tokens(self.provider.session_dict[session_id])}\n系统最大缓存token数: {self.provider.max_tokens}", "token"

    def gpt(self):
        if self.provider is None:
            return False, "未启用 OpenAI 官方 API", "gpt"
        return True, f"OpenAI GPT配置:\n {self.provider.chatGPT_configs}", "gpt"

    def status(self):
        if self.provider is None:
            return False, "未启用 OpenAI 官方 API", "status"
        chatgpt_cfg_str = ""
        key_stat = self.provider.get_key_stat()
        index = 1
        max_usage = 9000000
        gg_count = 0
        total = 0
        tag = ''
        for key in key_stat.keys():
            sponsor = ''
            total += key_stat[key]['used']
            if key_stat[key]['exceed']:
                gg_count += 1
                continue
            if 'sponsor' in key_stat[key]:
                sponsor = key_stat[key]['sponsor']
            chatgpt_cfg_str += f" |-{index}: {key[-8:]} {key_stat[key]['used']}/{max_usage} {sponsor}{tag}\n"
            index += 1
        return True, f"⭐使用情况({str(gg_count)}个已用):\n{chatgpt_cfg_str}", "status"

    def key(self, message: str):
        if self.provider is None:
            return False, "未启用 OpenAI 官方 API", "key"
        l = message.split(" ")
        if len(l) == 1:
            msg = "感谢您赞助 key。key 为官方 API 使用,请以以下格式赞助:\n/key xxxxx"
            return True, msg, "key"
        key = l[1]
        if self.provider.check_key(key):
            self.provider.append_key(key)
            return True, "*★,°*:.☆( ̄▽ ̄)/$:*.°★* 。\n该Key被验证为有效。感谢你的赞助~", "key"
        else:
            return True, "该Key被验证为无效。也许是输入错误了,可以重试。", "key"

    async def switch(self, message: str):
        '''
        Switch the active API key/account
        '''
        l = message.split(" ")
        if len(l) == 1:
            _, ret, _ = self.status()
            curr_ = self.provider.get_curr_key()
            if curr_ is None:
                ret += "当前您未选择账号。输入/switch <账号序号>切换账号。"
            else:
                ret += f"当前您选择的账号为:{curr_[-8:]}。输入/switch <账号序号>切换账号。"
            return True, ret, "switch"
        elif len(l) == 2:
            try:
                key_stat = self.provider.get_key_stat()
                index = int(l[1])
                if index > len(key_stat) or index < 1:
                    return True, "账号序号不合法。", "switch"
                else:
                    try:
                        new_key = list(key_stat.keys())[index - 1]
                        ret = await self.provider.check_key(new_key)
                        self.provider.set_key(new_key)
                    except BaseException as e:
                        return True, "账号切换失败,原因: " + str(e), "switch"
                    return True, "账号切换成功。", "switch"
            except BaseException as e:
                return True, "未知错误: " + str(e), "switch"
        else:
            return True, "参数过多。", "switch"

    def unset(self, session_id: str):
        if self.provider is None:
            return False, "未启用 OpenAI 官方 API", "unset"
        self.provider.curr_personality = {}
        self.provider.forget(session_id)
        return True, "已清除人格并重置历史记录。", "unset"

    def set(self, message: str, session_id: str):
        if self.provider is None:
            return False, "未启用 OpenAI 官方 API", "set"
        l = message.split(" ")
        if len(l) == 1:
            return True, f"【人格文本由PlexPt开源项目awesome-chatgpt-prompts-zh提供】\n设置人格: \n/set 人格名。例如/set 编剧\n人格列表: /set list\n人格详细信息: /set view 人格名\n自定义人格: /set 人格文本\n重置会话(清除人格): /reset\n重置会话(保留人格): /reset p\n【当前人格】: {str(self.provider.curr_personality)}", "set"
        elif l[1] == "list":
            msg = "人格列表:\n"
            for key in personalities.keys():
                msg += f" |-{key}\n"
            msg += '\n\n*输入/set view 人格名 查看人格详细信息'
            msg += '\n*不定时更新人格库,请及时更新本项目。'
            return True, msg, "set"
        elif l[1] == "view":
            if len(l) == 2:
                return True, "请输入/set view 人格名", "set"
            ps = l[2].strip()
            if ps in personalities:
                msg = f"人格{ps}的详细信息:\n"
                msg += f"{personalities[ps]}\n"
            else:
                msg = f"人格{ps}不存在"
            return True, msg, "set"
        else:
            ps = l[1].strip()
            if ps in personalities:
                self.provider.curr_personality = {
                    'name': ps,
                    'prompt': personalities[ps]
                }
                self.provider.session_dict[session_id] = []
                new_record = {
                    "user": {
                        "role": "user",
                        "content": personalities[ps],
                    },
                    "AI": {
                        "role": "assistant",
                        "content": "好的,接下来我会扮演这个角色。"
                    },
                    'type': "personality",
                    'usage_tokens': 0,
                    'single-tokens': 0
                }
                self.provider.session_dict[session_id].append(new_record)
                self.personality_str = message
                return True, f"人格{ps}已设置。", "set"
            else:
                self.provider.curr_personality = {
                    'name': '自定义人格',
                    'prompt': ps
                }
                new_record = {
                    "user": {
                        "role": "user",
                        "content": ps,
                    },
                    "AI": {
                        "role": "assistant",
                        "content": "好的,接下来我会扮演这个角色。"
                    },
                    'type': "personality",
                    'usage_tokens': 0,
                    'single-tokens': 0
                }
                self.provider.session_dict[session_id] = []
                self.provider.session_dict[session_id].append(new_record)
                self.personality_str = message
                return True, f"自定义人格已设置。 \n人格信息: {ps}", "set"

    async def draw(self, message):
        if self.provider is None:
            return False, "未启用 OpenAI 官方 API", "draw"
        if message.startswith("/画"):
            message = message[2:]
        elif message.startswith("画"):
            message = message[1:]
        try:
            # Image mode returns the URL of the generated picture
            img_url = await self.provider.image_chat(message)
            return True, img_url, "draw"
        except Exception as e:
            if 'exceeded' in str(e):
                return False, f"OpenAI API错误。原因:\n{str(e)}\n超额了。可自己搭建一个机器人(Github仓库QQChannelChatGPT)", "draw"
            return False, f"图片生成失败: {e}", "draw"


@@ -0,0 +1,186 @@
from model.command.manager import CommandManager
from type.message_event import AstrMessageEvent
from type.command import CommandResult
from type.types import Context
from SparkleLogging.utils.core import LogManager
from logging import Logger
from nakuru.entities.components import Image
from model.provider.openai_official import ProviderOpenAIOfficial, MODELS
from util.personality import personalities
from util.io import download_image_by_url
logger: Logger = LogManager.GetLogger(log_name='astrbot')
class OpenAIOfficialCommandHandler():
def __init__(self, manager: CommandManager) -> None:
self.manager = manager
self.provider = None
self.manager.register("reset", "重置会话", 10, self.reset)
self.manager.register("his", "查看历史记录", 10, self.his)
self.manager.register("status", "查看当前状态", 10, self.status)
self.manager.register("switch", "切换账号", 10, self.switch)
self.manager.register("unset", "清除个性化人格设置", 10, self.unset)
self.manager.register("set", "设置个性化人格", 10, self.set)
self.manager.register("draw", "调用 DallE 模型画图", 10, self.draw)
self.manager.register("model", "切换模型", 10, self.model)
self.manager.register("", "调用 DallE 模型画图", 10, self.draw)
def set_provider(self, provider):
self.provider = provider
async def reset(self, message: AstrMessageEvent, context: Context):
tokens = self.manager.command_parser.parse(message.message_str)
if tokens.len == 1:
await self.provider.forget(message.session_id, keep_system_prompt=True)
return CommandResult().message("重置成功")
elif tokens.get(1) == 'p':
await self.provider.forget(message.session_id)
async def model(self, message: AstrMessageEvent, context: Context):
tokens = self.manager.command_parser.parse(message.message_str)
if tokens.len == 1:
ret = await self._print_models()
return CommandResult().message(ret)
model = tokens.get(1)
if model.isdigit():
try:
models = await self.provider.get_models()
except BaseException as e:
logger.error(f"获取模型列表失败: {str(e)}")
return CommandResult().message("获取模型列表失败,无法使用编号切换模型。可以尝试直接输入模型名来切换,如 gpt-4o。")
models = list(models)
if int(model) <= len(models) and int(model) >= 1:
model = models[int(model)-1]
self.provider.set_model(model.id)
return CommandResult().message(f"模型已设置为 {model.id}")
else:
self.provider.set_model(model)
return CommandResult().message(f"模型已设置为 {model} (自定义)")
async def _print_models(self):
try:
models = await self.provider.get_models()
except BaseException as e:
return "获取模型列表失败: " + str(e)
i = 1
ret = "OpenAI GPT 类可用模型"
for model in models:
ret += f"\n{i}. {model.id}"
i += 1
ret += "\nTips: 使用 /model 模型名/编号,即可实时更换模型。如目标模型不存在于上表,请输入模型名。"
logger.debug(ret)
return ret
def his(self, message: AstrMessageEvent, context: Context):
tokens = self.manager.command_parser.parse(message.message_str)
size_per_page = 3
page = 1
if tokens.len == 2:
try:
page = int(tokens.get(1))
except BaseException as e:
return CommandResult().message("页码格式错误")
contexts, total_num = self.provider.dump_contexts_page(message.session_id, size_per_page, page=page)
t_pages = total_num // size_per_page + 1
return CommandResult().message(f"历史记录如下:\n{contexts}\n{page} 页 | 共 {t_pages}\n*输入 /his 2 跳转到第 2 页")
def status(self, message: AstrMessageEvent, context: Context):
keys_data = self.provider.get_keys_data()
ret = "OpenAI Key"
for k in keys_data:
status = "🟢" if keys_data[k] else "🔴"
ret += "\n|- " + k[:8] + " " + status
conf = self.provider.get_configs()
ret += "\n当前模型: " + conf['model']
if conf['model'] in MODELS:
ret += "\n最大上下文窗口: " + str(MODELS[conf['model']]) + " tokens"
if message.session_id in self.provider.session_memory and len(self.provider.session_memory[message.session_id]):
ret += "\n你的会话上下文: " + str(self.provider.session_memory[message.session_id][-1]['usage_tokens']) + " tokens"
return CommandResult().message(ret)
async def switch(self, message: AstrMessageEvent, context: Context):
'''
切换账号
'''
tokens = self.manager.command_parser.parse(message.message_str)
if tokens.len == 1:
_, ret, _ = self.status()
curr_ = self.provider.get_curr_key()
if curr_ is None:
ret += "当前您未选择账号。输入/switch <账号序号>切换账号。"
else:
ret += f"当前您选择的账号为:{curr_[-8:]}。输入/switch <账号序号>切换账号。"
return CommandResult().message(ret)
elif tokens.len == 2:
try:
key_stat = self.provider.get_keys_data()
index = int(tokens.get(1))
if index > len(key_stat) or index < 1:
return CommandResult().message("账号序号错误。")
else:
try:
new_key = list(key_stat.keys())[index-1]
self.provider.set_key(new_key)
except BaseException as e:
return CommandResult().message("切换账号未知错误: "+str(e))
return CommandResult().message("切换账号成功。")
except BaseException as e:
return CommandResult().message("切换账号错误。")
else:
return CommandResult().message("参数过多。")
def unset(self, message: AstrMessageEvent, context: Context):
self.provider.curr_personality = {}
self.provider.forget(message.session_id)
return CommandResult().message("已清除个性化设置。")
def set(self, message: AstrMessageEvent, context: Context):
l = message.message_str.split(" ")
if len(l) == 1:
return CommandResult().message("- 设置人格: \nset 人格名。例如 set 编剧\n- 人格列表: set list\n- 人格详细信息: set view 人格名\n- 自定义人格: set 人格文本\n- 重置会话(清除人格): reset\n- 重置会话(保留人格): reset p\n\n【当前人格】: " + str(self.provider.curr_personality['prompt']))
elif l[1] == "list":
msg = "人格列表:\n"
for key in personalities.keys():
msg += f"- {key}\n"
msg += '\n\n*输入 set view 人格名 查看人格详细信息'
return CommandResult().message(msg)
elif l[1] == "view":
if len(l) == 2:
return CommandResult().message("请输入人格名")
ps = l[2].strip()
if ps in personalities:
msg = f"人格{ps}的详细信息:\n"
msg += f"{personalities[ps]}\n"
else:
msg = f"人格{ps}不存在"
return CommandResult().message(msg)
else:
ps = " ".join(l[1:]).strip()
if ps in personalities:
self.provider.curr_personality = {
'name': ps,
'prompt': personalities[ps]
}
self.provider.personality_set(self.provider.curr_personality, message.session_id)
return CommandResult().message(f"人格已设置。 \n人格信息: {ps}")
else:
self.provider.curr_personality = {
'name': '自定义人格',
'prompt': ps
}
self.provider.personality_set(self.provider.curr_personality, message.session_id)
return CommandResult().message(f"人格已设置。 \n人格信息: {ps}")
async def draw(self, message: AstrMessageEvent, context: Context):
prompt = message.message_str
img_url = await self.provider.image_generate(prompt)
return CommandResult(
message_chain=[Image.fromURL(img_url)],
)

model/command/parser.py Normal file

@@ -0,0 +1,19 @@
class CommandTokens():
def __init__(self) -> None:
self.tokens = []
self.len = 0
def get(self, idx: int):
if idx >= self.len:
return None
return self.tokens[idx].strip()
class CommandParser():
def __init__(self):
pass
def parse(self, message: str):
cmd_tokens = CommandTokens()
cmd_tokens.tokens = message.split(" ")
cmd_tokens.len = len(cmd_tokens.tokens)
return cmd_tokens
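The new `CommandParser` simply whitespace-splits the message and records the token count. A self-contained sketch of how a command like `switch 2` is tokenized (classes reproduced inline so the snippet runs on its own):

```python
class CommandTokens:
    def __init__(self) -> None:
        self.tokens = []
        self.len = 0

    def get(self, idx: int):
        # Out-of-range access returns None instead of raising
        if idx >= self.len:
            return None
        return self.tokens[idx].strip()

class CommandParser:
    def parse(self, message: str) -> CommandTokens:
        cmd_tokens = CommandTokens()
        cmd_tokens.tokens = message.split(" ")
        cmd_tokens.len = len(cmd_tokens.tokens)
        return cmd_tokens

tokens = CommandParser().parse("switch 2")
assert tokens.len == 2
assert tokens.get(0) == "switch"
assert tokens.get(1) == "2"   # the account index used by /switch
assert tokens.get(5) is None
```

Note that a bare `str.split(" ")` keeps empty tokens for consecutive spaces, so `tokens.len` counts separators rather than words.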


@@ -1,132 +0,0 @@
from model.command.command import Command
from model.provider.rev_chatgpt import ProviderRevChatGPT
from cores.qqbot.personality import personalities
from cores.qqbot.types import GlobalObject
class CommandRevChatGPT(Command):
def __init__(self, provider: ProviderRevChatGPT, global_object: GlobalObject):
self.provider = provider
self.global_object = global_object
self.personality_str = ""
super().__init__(provider, global_object)
async def check_command(self,
message: str,
session_id: str,
role: str,
platform: str,
message_obj):
self.platform = platform
hit, res = await super().check_command(
message,
session_id,
role,
platform,
message_obj
)
if hit:
return True, res
if self.command_start_with(message, "help", "帮助"):
return True, await self.help()
elif self.command_start_with(message, "reset"):
return True, self.reset(session_id, message)
elif self.command_start_with(message, "update"):
return True, self.update(message, role)
elif self.command_start_with(message, "set"):
return True, self.set(message, session_id)
elif self.command_start_with(message, "switch"):
return True, self.switch(message, session_id)
return False, None
def reset(self, session_id, message: str):
l = message.split(" ")
if len(l) == 1:
self.provider.forget(session_id)
return True, "重置完毕。", "reset"
if len(l) == 2 and l[1] == "p":
self.provider.forget(session_id)
ret = self.provider.text_chat(self.personality_str)
return True, f"重置完毕(保留人格)。\n\n{ret}", "reset"
def set(self, message: str, session_id: str):
l = message.split(" ")
if len(l) == 1:
return True, f"设置人格: \n/set 人格名或人格文本。例如/set 编剧\n人格列表: /set list\n人格详细信息: \
/set view 人格名\n重置会话(清除人格): /reset\n重置会话(保留人格): /reset p", "set"
elif l[1] == "list":
msg = "人格列表:\n"
for key in personalities.keys():
msg += f" |-{key}\n"
msg += '\n\n*输入/set view 人格名查看人格详细信息'
msg += '\n*不定时更新人格库,请及时更新本项目。'
return True, msg, "set"
elif l[1] == "view":
if len(l) == 2:
return True, "请输入/set view 人格名", "set"
ps = l[2].strip()
if ps in personalities:
msg = f"人格【{ps}】详细信息:\n"
msg += f"{personalities[ps]}\n"
else:
msg = f"人格【{ps}】不存在。"
return True, msg, "set"
else:
ps = l[1].strip()
if ps in personalities:
self.reset(session_id, "reset")
self.personality_str = personalities[ps]
ret = self.provider.text_chat(self.personality_str, session_id)
return True, f"人格【{ps}】已设置。\n\n{ret}", "set"
else:
self.reset(session_id, "reset")
self.personality_str = ps
ret = self.provider.text_chat(ps, session_id)
return True, f"人格信息已设置。\n\n{ret}", "set"
def switch(self, message: str, session_id: str):
'''
切换账号
'''
l = message.split(" ")
rev_chatgpt = self.provider.get_revchatgpt()
if len(l) == 1:
ret = "当前账号:\n"
index = 0
curr_ = None
for revstat in rev_chatgpt:
index += 1
ret += f"[{index}]. {revstat['id']}\n"
# if session_id in revstat['user']:
# curr_ = revstat['id']
for user in revstat['user']:
if session_id == user['id']:
curr_ = revstat['id']
break
if curr_ is None:
ret += "当前您未选择账号。输入/switch <账号序号>切换账号。"
else:
ret += f"当前您选择的账号为:{curr_}。输入/switch <账号序号>切换账号。"
return True, ret, "switch"
elif len(l) == 2:
try:
index = int(l[1])
if index > len(self.provider.rev_chatgpt) or index < 1:
return True, "账号序号不合法。", "switch"
else:
# pop
for revstat in self.provider.rev_chatgpt:
if session_id in revstat['user']:
revstat['user'].remove(session_id)
# append
self.provider.rev_chatgpt[index - 1]['user'].append(session_id)
return True, f"切换账号成功。当前账号为:{self.provider.rev_chatgpt[index - 1]['id']}", "switch"
except BaseException:
return True, "账号序号不合法。", "switch"
else:
return True, "参数过多。", "switch"
async def help(self):
commands = super().general_commands()
commands['set'] = '设置人格'
return True, await super().help_messager(commands, self.platform, self.global_object.cached_plugins), "help"


@@ -0,0 +1,74 @@
import abc
from typing import Union, Any, List
from nakuru.entities.components import Plain, At, Image, BaseMessageComponent
from type.astrbot_message import AstrBotMessage
from type.command import CommandResult
class Platform():
def __init__(self) -> None:
pass
@abc.abstractmethod
async def handle_msg(self, message: AstrBotMessage):
'''
处理到来的消息
'''
pass
@abc.abstractmethod
async def reply_msg(self, message: AstrBotMessage,
result_message: List[BaseMessageComponent]):
'''
回复用户唤醒机器人的消息。(被动回复)
'''
pass
@abc.abstractmethod
async def send_msg(self, target: Any, result_message: CommandResult):
'''
发送消息(主动)
'''
pass
def parse_message_outline(self, message: AstrBotMessage) -> str:
'''
将消息解析成大纲消息形式,如: xxxxx[图片]xxxxx。用于输出日志等。
'''
if isinstance(message, str):
return message
ret = ''
parsed = message if isinstance(message, list) else message.message
try:
for node in parsed:
if isinstance(node, Plain):
ret += node.text.replace('\n', ' ')
elif isinstance(node, At):
ret += f'[At: {node.name}/{node.qq}]'
elif isinstance(node, Image):
ret += '[图片]'
except Exception as e:
pass
return ret[:100] if len(ret) > 100 else ret
def check_nick(self, message_str: str) -> bool:
if self.context.nick:
for nick in self.context.nick:
if nick and message_str.strip().startswith(nick):
return True
return False
async def convert_to_t2i_chain(self, message_result: list) -> list:
plain_str = ""
rendered_images = []
for i in message_result:
if isinstance(i, Plain):
plain_str += i.text
if plain_str and len(plain_str) > 50:
p = await self.context.image_renderer.render(plain_str, return_url=True)
if p.startswith('http'):
rendered_images.append(Image.fromURL(p))
else:
rendered_images.append(Image.fromFileSystem(p))
return rendered_images
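The outline logic in `parse_message_outline` can be exercised with minimal stand-ins for nakuru's `Plain`/`At`/`Image` components (hypothetical constructors here, for illustration only):

```python
# Minimal stand-ins for nakuru's message components (illustrative, not the real API)
class Plain:
    def __init__(self, text: str):
        self.text = text

class At:
    def __init__(self, qq: int, name: str):
        self.qq, self.name = qq, name

class Image:
    pass

def parse_message_outline(message) -> str:
    # Same flattening/truncation logic as Platform.parse_message_outline
    if isinstance(message, str):
        return message
    ret = ''
    for node in message:
        if isinstance(node, Plain):
            ret += node.text.replace('\n', ' ')
        elif isinstance(node, At):
            ret += f'[At: {node.name}/{node.qq}]'
        elif isinstance(node, Image):
            ret += '[图片]'
    return ret[:100]

outline = parse_message_outline([Plain("look at\nthis"), Image(), At(10001, "bot")])
assert outline == "look at this[图片][At: bot/10001]"
```

The 100-character cap keeps log lines short; newlines are folded into spaces so one message stays on one log line.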


@@ -1,8 +0,0 @@
from dataclasses import dataclass
from typing import Union, Optional
@dataclass
class MessageResult():
result_message: Union[str, list]
is_command_call: Optional[bool] = False
callback: Optional[callable] = None


@@ -1,77 +0,0 @@
from nakuru.entities.components import Plain, At, Image
from botpy.message import Message, DirectMessage
class NakuruGuildMember():
tiny_id: int # 发送者识别号
user_id: int # 发送者识别号
title: str
nickname: str # 昵称
role: int # 角色
icon_url: str # 头像url
class NakuruGuildMessage():
type: str = "GuildMessage"
self_id: int # bot的qq号
self_tiny_id: int # bot的qq号
sub_type: str # 消息类型
message_id: str # 消息id
guild_id: int # 频道号
channel_id: int # 子频道号
user_id: int # 发送者qq号
message: list # 消息内容
sender: NakuruGuildMember # 发送者信息
raw_message: Message
def __str__(self) -> str:
return str(self.__dict__)
# gocq-频道SDK兼容层
def gocq_compatible_send(gocq_message_chain: list):
plain_text = ""
image_path = None # only one img supported
for i in gocq_message_chain:
if isinstance(i, Plain):
plain_text += i.text
elif isinstance(i, Image) and image_path == None:
if i.path is not None:
image_path = i.path
else:
image_path = i.file
return plain_text, image_path
# gocq-频道SDK兼容层
def gocq_compatible_receive(message: Message) -> NakuruGuildMessage:
ngm = NakuruGuildMessage()
try:
ngm.self_id = message.mentions[0].id
ngm.self_tiny_id = message.mentions[0].id
except:
ngm.self_id = 0
ngm.self_tiny_id = 0
ngm.sub_type = "normal"
ngm.message_id = message.id
ngm.guild_id = int(message.guild_id)
ngm.channel_id = int(message.channel_id)
ngm.user_id = int(message.author.id)
msg = []
plain_content = message.content.replace("<@!"+str(ngm.self_id)+">", "").strip()
msg.append(Plain(plain_content))
if message.attachments:
for i in message.attachments:
if i.content_type.startswith("image"):
url = i.url
if not url.startswith("http"):
url = "https://"+url
img = Image.fromURL(url)
msg.append(img)
ngm.message = msg
ngm.sender = NakuruGuildMember()
ngm.sender.tiny_id = int(message.author.id)
ngm.sender.user_id = int(message.author.id)
ngm.sender.title = ""
ngm.sender.nickname = message.author.username
ngm.sender.role = 0
ngm.sender.icon_url = message.author.avatar
ngm.raw_message = message
return ngm


@@ -1,70 +0,0 @@
import abc
from typing import Union
from nakuru import (
GuildMessage,
GroupMessage,
FriendMessage,
)
from ._nakuru_translation_layer import (
NakuruGuildMessage,
)
from nakuru.entities.components import Plain, At, Image
class Platform():
def __init__(self, message_handler: callable) -> None:
'''
初始化平台的各种接口
'''
self.message_handler = message_handler
pass
@abc.abstractmethod
async def handle_msg():
'''
处理到来的消息
'''
pass
@abc.abstractmethod
async def reply_msg():
'''
回复消息(被动发送)
'''
pass
@abc.abstractmethod
async def send_msg(target: Union[GuildMessage, GroupMessage, FriendMessage, str], message: Union[str, list]):
'''
发送消息(主动发送)
'''
pass
@abc.abstractmethod
async def send(target: Union[GuildMessage, GroupMessage, FriendMessage, str], message: Union[str, list]):
'''
发送消息(主动发送)同 send_msg()
'''
pass
def parse_message_outline(self, message: Union[GuildMessage, GroupMessage, FriendMessage, str, list]) -> str:
'''
将消息解析成大纲消息形式。
如: xxxxx[图片]xxxxx
'''
if isinstance(message, str):
return message
ret = ''
ls_to_parse = message if isinstance(message, list) else message.message
try:
for node in ls_to_parse:
if isinstance(node, Plain):
ret += node.text
elif isinstance(node, At):
ret += f'[At: {node.name}/{node.qq}]'
elif isinstance(node, Image):
ret += '[图片]'
except Exception as e:
pass
ret.replace('\n', '')
return ret

model/platform/manager.py Normal file

@@ -0,0 +1,87 @@
import asyncio
from util.io import port_checker
from type.register import RegisteredPlatform
from type.types import Context
from SparkleLogging.utils.core import LogManager
from logging import Logger
from astrbot.message.handler import MessageHandler
logger: Logger = LogManager.GetLogger(log_name='astrbot')
class PlatformManager():
def __init__(self, context: Context, message_handler: MessageHandler) -> None:
self.context = context
self.config = context.base_config
self.msg_handler = message_handler
def load_platforms(self):
tasks = []
if 'gocqbot' in self.config and self.config['gocqbot']['enable']:
logger.info("启用 QQ(nakuru 适配器)")
tasks.append(asyncio.create_task(self.gocq_bot(), name="nakuru-adapter"))
if 'aiocqhttp' in self.config and self.config['aiocqhttp']['enable']:
logger.info("启用 QQ(aiocqhttp 适配器)")
tasks.append(asyncio.create_task(self.aiocq_bot(), name="aiocqhttp-adapter"))
# QQ频道
if 'qqbot' in self.config and self.config['qqbot']['enable'] and self.config['qqbot']['appid'] is not None:
logger.info("启用 QQ(官方 API) 机器人消息平台")
tasks.append(asyncio.create_task(self.qqchan_bot(), name="qqofficial-adapter"))
return tasks
async def gocq_bot(self):
'''
运行 QQ(nakuru 适配器)
'''
from model.platform.qq_nakuru import QQGOCQ
noticed = False
host = self.config.get("gocq_host", "127.0.0.1")
port = self.config.get("gocq_websocket_port", 6700)
http_port = self.config.get("gocq_http_port", 5700)
logger.info(
f"正在检查连接...host: {host}, ws port: {port}, http port: {http_port}")
while True:
if not port_checker(port=port, host=host) or not port_checker(port=http_port, host=host):
if not noticed:
noticed = True
logger.warning(
f"连接到{host}:{port}(或{http_port})失败。程序会每隔 5s 自动重试。")
await asyncio.sleep(5)
else:
logger.info("nakuru 适配器已连接。")
break
try:
qq_gocq = QQGOCQ(self.context, self.msg_handler)
self.context.platforms.append(RegisteredPlatform(
platform_name="gocq", platform_instance=qq_gocq, origin="internal"))
await qq_gocq.run()
except BaseException as e:
logger.error("启动 nakuru 适配器时出现错误: " + str(e))
def aiocq_bot(self):
'''
运行 QQ(aiocqhttp 适配器)
'''
from model.platform.qq_aiocqhttp import AIOCQHTTP
qq_aiocqhttp = AIOCQHTTP(self.context, self.msg_handler)
self.context.platforms.append(RegisteredPlatform(
platform_name="aiocqhttp", platform_instance=qq_aiocqhttp, origin="internal"))
return qq_aiocqhttp.run_aiocqhttp()
def qqchan_bot(self):
'''
运行 QQ 官方机器人适配器
'''
try:
from model.platform.qq_official import QQOfficial
qqchannel_bot = QQOfficial(self.context, self.msg_handler)
self.context.platforms.append(RegisteredPlatform(
platform_name="qqchan", platform_instance=qqchannel_bot, origin="internal"))
return qqchannel_bot.run()
except BaseException as e:
logger.error("启动 QQ官方机器人适配器时出现错误: " + str(e))
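`load_platforms` returns a list of `asyncio` tasks that the caller is expected to await. The general pattern looks like this (generic stand-in coroutines, not the project's real adapters):

```python
import asyncio

async def adapter(name: str) -> str:
    # Stand-in for a long-running platform adapter coroutine
    await asyncio.sleep(0)
    return f"{name} started"

async def main():
    # Mirrors PlatformManager.load_platforms: create one named task per enabled platform
    tasks = [asyncio.create_task(adapter(n), name=f"{n}-adapter")
             for n in ("nakuru", "aiocqhttp", "qqofficial")]
    # The caller gathers the tasks so all adapters run concurrently
    return await asyncio.gather(*tasks)

results = asyncio.run(main())
assert results == ["nakuru started", "aiocqhttp started", "qqofficial started"]
```

Naming each task (`name="…-adapter"`) makes it easy to identify which adapter failed when inspecting pending tasks or tracebacks.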


@@ -0,0 +1,220 @@
import time
import asyncio
import traceback
import logging
from aiocqhttp import CQHttp, Event
from aiocqhttp.exceptions import ActionFailed
from . import Platform
from type.astrbot_message import *
from type.message_event import *
from type.command import *
from typing import Union, List, Dict
from nakuru.entities.components import *
from SparkleLogging.utils.core import LogManager
from logging import Logger
from astrbot.message.handler import MessageHandler
logger: Logger = LogManager.GetLogger(log_name='astrbot')
class AIOCQHTTP(Platform):
def __init__(self, context: Context, message_handler: MessageHandler) -> None:
self.message_handler = message_handler
self.waiting = {}
self.context = context
self.unique_session = self.context.unique_session
self.announcement = self.context.base_config.get("announcement", "欢迎新人!")
self.host = self.context.base_config['aiocqhttp']['ws_reverse_host']
self.port = self.context.base_config['aiocqhttp']['ws_reverse_port']
def convert_message(self, event: Event) -> AstrBotMessage:
abm = AstrBotMessage()
abm.self_id = str(event.self_id)
abm.tag = "aiocqhttp"
abm.sender = MessageMember(str(event.sender['user_id']), event.sender['nickname'])
if event['message_type'] == 'group':
abm.type = MessageType.GROUP_MESSAGE
elif event['message_type'] == 'private':
abm.type = MessageType.FRIEND_MESSAGE
if self.unique_session:
abm.session_id = abm.sender.user_id
else:
abm.session_id = str(event.group_id) if abm.type == MessageType.GROUP_MESSAGE else abm.sender.user_id
abm.message_id = str(event.message_id)
abm.message = []
message_str = ""
for m in event.message:
t = m['type']
a = None
if t == 'at':
a = At(**m['data'])
abm.message.append(a)
if t == 'text':
a = Plain(text=m['data']['text'])
message_str += m['data']['text'].strip()
abm.message.append(a)
if t == 'image':
a = Image(file=m['data']['file'])
abm.message.append(a)
abm.timestamp = int(time.time())
abm.message_str = message_str
abm.raw_message = event
return abm
def run_aiocqhttp(self):
if not self.host or not self.port:
return
self.bot = CQHttp(use_ws_reverse=True, import_name='aiocqhttp')
@self.bot.on_message('group')
async def group(event: Event):
abm = self.convert_message(event)
if abm:
await self.handle_msg(abm)
# return {'reply': event.message}
@self.bot.on_message('private')
async def private(event: Event):
abm = self.convert_message(event)
if abm:
await self.handle_msg(abm)
# return {'reply': event.message}
coro = self.bot.run_task(host=self.host, port=int(self.port), shutdown_trigger=self.shutdown_trigger_placeholder)
for handler in logging.root.handlers[:]:
logging.root.removeHandler(handler)
logging.getLogger('aiocqhttp').setLevel(logging.ERROR)
return coro
async def shutdown_trigger_placeholder(self):
while True:
await asyncio.sleep(1)
def pre_check(self, message: AstrBotMessage) -> bool:
# if message chain contains Plain components or At components which points to self_id, return True
if message.type == MessageType.FRIEND_MESSAGE:
return True
for comp in message.message:
if isinstance(comp, At) and str(comp.qq) == message.self_id:
return True
# check nicks
if self.check_nick(message.message_str):
return True
return False
async def handle_msg(self, message: AstrBotMessage):
logger.info(
f"{message.sender.nickname}/{message.sender.user_id} -> {self.parse_message_outline(message)}")
if not self.pre_check(message):
return
# 解析 role
sender_id = str(message.sender.user_id)
if sender_id == self.context.config_helper.get('admin_qq', '') or \
sender_id in self.context.config_helper.get('other_admins', []):
role = 'admin'
else:
role = 'member'
# construct astrbot message event
ame = AstrMessageEvent.from_astrbot_message(message, self.context, "aiocqhttp", message.session_id, role)
# transfer control to message handler
message_result = await self.message_handler.handle(ame)
if not message_result: return
await self.reply_msg(message, message_result.result_message)
if message_result.callback:
message_result.callback()
# 如果是等待回复的消息
if message.session_id in self.waiting and self.waiting[message.session_id] == '':
self.waiting[message.session_id] = message
async def reply_msg(self,
message: AstrBotMessage,
result_message: list):
"""
回复用户唤醒机器人的消息。(被动回复)
"""
logger.info(
f"{message.sender.user_id} <- {self.parse_message_outline(message)}")
res = result_message
if isinstance(res, str):
res = [Plain(text=res), ]
# if image mode, put all Plain texts into a new picture.
if self.context.config_helper.get("qq_pic_mode", False) and isinstance(res, list):
rendered_images = await self.convert_to_t2i_chain(res)
if rendered_images:
try:
await self._reply(message, rendered_images)
return
except BaseException as e:
logger.warning(traceback.format_exc())
logger.warning(f"以文本转图片的形式回复消息时发生错误: {e},将尝试默认方式。")
await self._reply(message, res)
async def _reply(self, message: Union[AstrBotMessage, Dict], message_chain: List[BaseMessageComponent]):
if isinstance(message_chain, str):
message_chain = [Plain(text=message_chain), ]
ret = []
image_idx = []
for idx, segment in enumerate(message_chain):
d = segment.toDict()
if isinstance(segment, Plain):
d['type'] = 'text'
if isinstance(segment, Image):
image_idx.append(idx)
ret.append(d)
try:
if isinstance(message, AstrBotMessage):
await self.bot.send(message.raw_message, ret)
if isinstance(message, dict):
if 'group_id' in message:
await self.bot.send_group_msg(group_id=message['group_id'], message=ret)
elif 'user_id' in message:
await self.bot.send_private_msg(user_id=message['user_id'], message=ret)
else:
raise Exception("aiocqhttp: 无法识别的消息来源。仅支持 group_id 和 user_id。")
except ActionFailed as e:
logger.error(traceback.format_exc())
logger.error(f"回复消息失败: {e}")
if e.retcode == 1200:
# ENOENT
if not image_idx:
raise e
logger.info("检测到失败原因为文件未找到,猜测用户的协议端与 AstrBot 位于不同的文件系统上。尝试采用上传图片的方式发图。")
for idx in image_idx:
if ret[idx]['data']['file'].startswith('file://'):
logger.info(f"正在上传图片: {ret[idx]['data']['path']}")
image_url = await self.context.image_uploader.upload_image(ret[idx]['data']['path'])
logger.info("上传成功。")
ret[idx]['data']['file'] = image_url
ret[idx]['data']['path'] = image_url
await self.bot.send(message.raw_message, ret)
async def send_msg(self, target: Dict[str, int], result_message: CommandResult):
'''
以主动的方式给QQ用户、QQ群发送一条消息。
`target` 接收一个 dict 类型的值引用。
- 要发给 QQ 下的某个用户,请添加 key `user_id`,值为 int 类型的 qq 号;
- 要发给某个群聊,请添加 key `group_id`,值为 int 类型的 qq 群号;
'''
await self._reply(target, result_message.message_chain)
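The key-based routing that `_reply` applies to dict targets can be sketched in isolation (with hypothetical `send_group`/`send_private` callables standing in for the aiocqhttp API):

```python
import asyncio

async def dispatch(target: dict, payload, send_group, send_private):
    # Route by key, mirroring the dict branch of AIOCQHTTP._reply
    if 'group_id' in target:
        return await send_group(group_id=target['group_id'], message=payload)
    elif 'user_id' in target:
        return await send_private(user_id=target['user_id'], message=payload)
    raise Exception("aiocqhttp: 无法识别的消息来源。仅支持 group_id 和 user_id。")

calls = []
async def fake_group(**kwargs):
    calls.append(('group', kwargs))
async def fake_private(**kwargs):
    calls.append(('private', kwargs))

segments = [{'type': 'text', 'data': {'text': 'hi'}}]
asyncio.run(dispatch({'group_id': 123456}, segments, fake_group, fake_private))
asyncio.run(dispatch({'user_id': 10001}, segments, fake_group, fake_private))
assert [c[0] for c in calls] == ['group', 'private']
```

`group_id` is checked first, so a target carrying both keys is delivered to the group.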


@@ -1,300 +0,0 @@
from nakuru.entities.components import Plain, At, Image, Node
from util import general_utils as gu
from util.cmd_config import CmdConfig
import asyncio
from nakuru import (
CQHTTP,
GuildMessage,
GroupMessage,
FriendMessage,
GroupMemberIncrease,
Notify
)
from typing import Union
import time
from ._platfrom import Platform
class FakeSource:
def __init__(self, type, group_id):
self.type = type
self.group_id = group_id
class QQGOCQ(Platform):
def __init__(self, cfg: dict, message_handler: callable, global_object) -> None:
super().__init__(message_handler)
self.loop = asyncio.new_event_loop()
asyncio.set_event_loop(self.loop)
self.waiting = {}
self.cc = CmdConfig()
self.cfg = cfg
self.logger: gu.Logger = global_object.logger
try:
self.nick_qq = cfg['nick_qq']
except:
self.nick_qq = ["ai","!",""]
nick_qq = self.nick_qq
if isinstance(nick_qq, str):
nick_qq = [nick_qq]
self.unique_session = cfg['uniqueSessionMode']
self.pic_mode = cfg['qq_pic_mode']
self.client = CQHTTP(
host=self.cc.get("gocq_host", "127.0.0.1"),
port=self.cc.get("gocq_websocket_port", 6700),
http_port=self.cc.get("gocq_http_port", 5700),
)
gocq_app = self.client
self.announcement = self.cc.get("announcement", "欢迎新人!")
@gocq_app.receiver("GroupMessage")
async def _(app: CQHTTP, source: GroupMessage):
if self.cc.get("gocq_react_group", True):
if isinstance(source.message[0], Plain):
await self.handle_msg(source, True)
elif isinstance(source.message[0], At):
if source.message[0].qq == source.self_id:
await self.handle_msg(source, True)
else:
return
@gocq_app.receiver("FriendMessage")
async def _(app: CQHTTP, source: FriendMessage):
if self.cc.get("gocq_react_friend", True):
if isinstance(source.message[0], Plain):
await self.handle_msg(source, False)
else:
return
@gocq_app.receiver("GroupMemberIncrease")
async def _(app: CQHTTP, source: GroupMemberIncrease):
if self.cc.get("gocq_react_group_increase", True):
await app.sendGroupMessage(source.group_id, [
Plain(text = self.announcement)
])
@gocq_app.receiver("Notify")
async def _(app: CQHTTP, source: Notify):
print(source)
if source.sub_type == "poke" and source.target_id == source.self_id:
await self.handle_msg(source, False)
@gocq_app.receiver("GuildMessage")
async def _(app: CQHTTP, source: GuildMessage):
if self.cc.get("gocq_react_guild", True):
if isinstance(source.message[0], Plain):
await self.handle_msg(source, True)
elif isinstance(source.message[0], At):
if source.message[0].qq == source.self_tiny_id:
await self.handle_msg(source, True)
else:
return
def run(self):
self.client.run()
async def handle_msg(self, message: Union[GroupMessage, FriendMessage, GuildMessage, Notify], is_group: bool):
self.logger.log(f"{message.user_id} -> {self.parse_message_outline(message)}", tag="QQ_GOCQ")
# 判断是否响应消息
resp = False
if not is_group:
resp = True
else:
for i in message.message:
if isinstance(i, At):
if message.type == "GuildMessage":
if i.qq == message.user_id or i.qq == message.self_tiny_id:
resp = True
if message.type == "FriendMessage":
if i.qq == message.self_id:
resp = True
if message.type == "GroupMessage":
if i.qq == message.self_id:
resp = True
elif isinstance(i, Plain):
for nick in self.nick_qq:
if nick != '' and i.text.strip().startswith(nick):
resp = True
break
if not resp: return
# 解析 session_id
if self.unique_session or not is_group:
session_id = message.user_id
elif message.type == "GroupMessage":
session_id = message.group_id
elif message.type == "GuildMessage":
session_id = message.channel_id
else:
session_id = message.user_id
# 解析 role
sender_id = str(message.user_id)
if sender_id == self.cc.get('admin_qq', '') or \
sender_id in self.cc.get('other_admins', []):
role = 'admin'
else:
role = 'member'
message_result = await self.message_handler(
message=message,
session_id=session_id,
role=role,
platform='gocq'
)
if message_result is None:
return
await self.reply_msg(message, message_result.result_message)
if message_result.callback is not None:
message_result.callback()
# 如果是等待回复的消息
if session_id in self.waiting and self.waiting[session_id] == '':
self.waiting[session_id] = message
async def reply_msg(self,
message: Union[GroupMessage, FriendMessage, GuildMessage, Notify],
result_message: list):
"""
插件开发者请使用send方法, 可以不用直接调用这个方法。
"""
source = message
res = result_message
self.logger.log(f"{source.user_id} <- {self.parse_message_outline(res)}", tag="QQ_GOCQ")
if isinstance(source, int):
source = FakeSource("GroupMessage", source)
# str convert to CQ Message Chain
if isinstance(res, str):
res_str = res
res = []
if source.type == "GroupMessage" and not isinstance(source, FakeSource):
res.append(At(qq=source.user_id))
res.append(Plain(text=res_str))
# if image mode, put all Plain texts into a new picture.
if self.pic_mode and isinstance(res, list):
plains = []
news = []
for i in res:
if isinstance(i, Plain):
plains.append(i.text)
else:
news.append(i)
plains_str = "".join(plains).strip()
if plains_str != "" and len(plains_str) > 50:
p = gu.create_markdown_image("".join(plains))
news.append(Image.fromFileSystem(p))
res = news
# 回复消息链
if isinstance(res, list) and len(res) > 0:
if source.type == "GuildMessage":
await self.client.sendGuildChannelMessage(source.guild_id, source.channel_id, res)
return
elif source.type == "FriendMessage":
await self.client.sendFriendMessage(source.user_id, res)
return
elif source.type == "GroupMessage":
# 过长时forward发送
plain_text_len = 0
image_num = 0
for i in res:
if isinstance(i, Plain):
plain_text_len += len(i.text)
elif isinstance(i, Image):
image_num += 1
if plain_text_len > self.cc.get('qq_forward_threshold', 200):
# 删除At
for i in res:
if isinstance(i, At):
res.remove(i)
node = Node(res)
# node.content = res
node.uin = 123456
node.name = f"bot"
node.time = int(time.time())
# print(node)
nodes=[node]
await self.client.sendGroupForwardMessage(source.group_id, nodes)
return
await self.client.sendGroupMessage(source.group_id, res)
return
async def send_msg(self, message: Union[GroupMessage, FriendMessage, GuildMessage, Notify], result_message: list):
'''
提供给插件的发送QQ消息接口。
参数说明第一个参数可以是消息对象也可以是QQ群号。第二个参数是消息内容消息内容可以是消息链列表也可以是纯文字信息
'''
try:
await self.reply_msg(message, result_message)
except BaseException as e:
raise e
async def send(self,
to,
res):
'''
同 send_msg()
'''
await self.reply_msg(to, res)
def create_text_image(title: str, text: str, max_width=30, font_size=20):
'''
文本转图片。
title: 标题
text: 文本内容
max_width: 文本宽度最大值默认30
font_size: 字体大小默认20
返回:文件路径
'''
try:
img = gu.word2img(title, text, max_width, font_size)
p = gu.save_temp_img(img)
return p
except Exception as e:
raise e
def wait_for_message(self, group_id) -> Union[GroupMessage, FriendMessage, GuildMessage]:
'''
等待下一条消息,超时 300s 后抛出异常
'''
self.waiting[group_id] = ''
cnt = 0
while True:
if group_id in self.waiting and self.waiting[group_id] != '':
# 去掉
ret = self.waiting[group_id]
del self.waiting[group_id]
return ret
cnt += 1
if cnt > 300:
raise Exception("等待消息超时。")
time.sleep(1)
def get_client(self):
return self.client
async def nakuru_method_invoker(self, func, *args, **kwargs):
"""
返回一个方法调用器可以用来立即调用nakuru的方法。
"""
try:
ret = func(*args, **kwargs)
return ret
except BaseException as e:
raise e

model/platform/qq_nakuru.py Normal file

@@ -0,0 +1,255 @@
import time, asyncio, traceback
from nakuru.entities.components import Plain, At, Image, Node, BaseMessageComponent
from nakuru import (
CQHTTP,
GuildMessage,
GroupMessage,
FriendMessage,
GroupMemberIncrease,
MessageItemType
)
from typing import Union, List, Dict
from type.types import Context
from . import Platform
from type.astrbot_message import *
from type.message_event import *
from type.command import *
from SparkleLogging.utils.core import LogManager
from logging import Logger
from astrbot.message.handler import MessageHandler
logger: Logger = LogManager.GetLogger(log_name='astrbot')
class FakeSource:
def __init__(self, type, group_id):
self.type = type
self.group_id = group_id
class QQGOCQ(Platform):
def __init__(self, context: Context, message_handler: MessageHandler) -> None:
self.loop = asyncio.new_event_loop()
asyncio.set_event_loop(self.loop)
self.message_handler = message_handler
self.waiting = {}
self.context = context
self.unique_session = self.context.unique_session
self.announcement = self.context.base_config.get("announcement", "欢迎新人!")
self.client = CQHTTP(
host=self.context.base_config.get("gocq_host", "127.0.0.1"),
port=self.context.base_config.get("gocq_websocket_port", 6700),
http_port=self.context.base_config.get("gocq_http_port", 5700),
)
gocq_app = self.client
@gocq_app.receiver("GroupMessage")
async def _(app: CQHTTP, source: GroupMessage):
if self.context.base_config.get("gocq_react_group", True):
abm = self.convert_message(source)
await self.handle_msg(abm)
@gocq_app.receiver("FriendMessage")
async def _(app: CQHTTP, source: FriendMessage):
if self.context.base_config.get("gocq_react_friend", True):
abm = self.convert_message(source)
await self.handle_msg(abm)
@gocq_app.receiver("GroupMemberIncrease")
async def _(app: CQHTTP, source: GroupMemberIncrease):
if self.context.base_config.get("gocq_react_group_increase", True):
await app.sendGroupMessage(source.group_id, [
Plain(text=self.announcement)
])
@gocq_app.receiver("GuildMessage")
async def _(app: CQHTTP, source: GuildMessage):
if self.context.base_config.get("gocq_react_guild", True):
abm = self.convert_message(source)
await self.handle_msg(abm)
def pre_check(self, message: AstrBotMessage) -> bool:
# if message chain contains Plain components or At components which points to self_id, return True
if message.type == MessageType.FRIEND_MESSAGE:
return True
for comp in message.message:
if isinstance(comp, At) and str(comp.qq) == message.self_id:
return True
# check nicks
if self.check_nick(message.message_str):
return True
return False
def run(self):
coro = self.client._run()
return coro
async def handle_msg(self, message: AstrBotMessage):
logger.info(
f"{message.sender.nickname}/{message.sender.user_id} -> {self.parse_message_outline(message)}")
assert isinstance(message.raw_message,
(GroupMessage, FriendMessage, GuildMessage))
# 判断是否响应消息
if not self.pre_check(message):
return
# 解析 session_id
if self.unique_session or message.type == MessageType.FRIEND_MESSAGE:
session_id = message.raw_message.user_id
elif message.type == MessageType.GROUP_MESSAGE:
session_id = message.raw_message.group_id
elif message.type == MessageType.GUILD_MESSAGE:
session_id = message.raw_message.channel_id
else:
session_id = message.raw_message.user_id
message.session_id = session_id
# 解析 role
sender_id = str(message.raw_message.user_id)
if sender_id == self.context.config_helper.get('admin_qq', '') or \
sender_id in self.context.config_helper.get('other_admins', []):
role = 'admin'
else:
role = 'member'
# construct astrbot message event
ame = AstrMessageEvent.from_astrbot_message(message, self.context, "gocq", session_id, role)
# transfer control to message handler
message_result = await self.message_handler.handle(ame)
if not message_result: return
await self.reply_msg(message, message_result.result_message)
if message_result.callback:
message_result.callback()
# 如果是等待回复的消息
if session_id in self.waiting and self.waiting[session_id] == '':
self.waiting[session_id] = message
async def reply_msg(self,
message: AstrBotMessage,
result_message: List[BaseMessageComponent]):
"""
回复用户唤醒机器人的消息。(被动回复)
"""
source = message.raw_message
res = result_message
assert isinstance(source,
(GroupMessage, FriendMessage, GuildMessage))
logger.info(
f"{source.user_id} <- {self.parse_message_outline(res)}")
if isinstance(res, str):
res = [Plain(text=res), ]
# if image mode, put all Plain texts into a new picture.
if self.context.config_helper.get("qq_pic_mode", False) and isinstance(res, list):
rendered_images = await self.convert_to_t2i_chain(res)
if rendered_images:
try:
await self._reply(source, rendered_images)
return
except BaseException as e:
logger.warning(traceback.format_exc())
logger.warning(f"以文本转图片的形式回复消息时发生错误: {e},将尝试默认方式。")
await self._reply(source, res)
async def _reply(self, source, message_chain: List[BaseMessageComponent]):
if isinstance(message_chain, str):
message_chain = [Plain(text=message_chain), ]
is_dict = isinstance(source, dict)
# dict targets (from send_msg) have no .type attribute; infer the message type from the keys
if is_dict:
source_type = "GuildMessage" if 'guild_id' in source else "GroupMessage" if 'group_id' in source else "FriendMessage"
else:
source_type = source.type
if source_type == "GuildMessage":
guild_id = source['guild_id'] if is_dict else source.guild_id
chan_id = source['channel_id'] if is_dict else source.channel_id
await self.client.sendGuildChannelMessage(guild_id, chan_id, message_chain)
elif source_type == "FriendMessage":
user_id = source['user_id'] if is_dict else source.user_id
await self.client.sendFriendMessage(user_id, message_chain)
elif source_type == "GroupMessage":
group_id = source['group_id'] if is_dict else source.group_id
# send as a forward message when the text is too long
plain_text_len = 0
image_num = 0
for i in message_chain:
if isinstance(i, Plain):
plain_text_len += len(i.text)
elif isinstance(i, Image):
image_num += 1
if plain_text_len > self.context.config_helper.get('qq_forward_threshold', 200):
# strip At components before wrapping in a forward node
# (removing items while iterating the same list skips elements)
message_chain = [i for i in message_chain if not isinstance(i, At)]
node = Node(message_chain)
node.uin = 123456
node.name = "bot"
node.time = int(time.time())
nodes = [node]
await self.client.sendGroupForwardMessage(group_id, nodes)
return
await self.client.sendGroupMessage(group_id, message_chain)
async def send_msg(self, target: Dict[str, int], result_message: CommandResult):
'''
Proactively send a message to a user, group, or guild channel.
`target` takes a dict:
- to message a QQ user, add key `user_id` (int QQ number);
- to message a group, add key `group_id` (int group number);
- to message a guild channel, add keys `guild_id` and `channel_id`, both int.
Note that guild_id is not the channel number.
'''
await self._reply(target, result_message.message_chain)
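The `target` contract documented above can be illustrated with a small self-contained sketch. The helper name `classify_target` is hypothetical and not part of AstrBot; it only mirrors the three destination shapes the docstring describes:

```python
# Minimal sketch (hypothetical helper) of validating a send_msg-style target dict.
def classify_target(target: dict) -> str:
    """Return which destination a target dict addresses, per the documented keys."""
    if "user_id" in target:
        return "friend"          # direct QQ user
    if "group_id" in target:
        return "group"           # QQ group
    if "guild_id" in target and "channel_id" in target:
        return "guild_channel"   # guild channel needs both ids
    raise ValueError("unknown target shape")
```

A caller would then pass e.g. `{"group_id": 123456}` for a group, matching the docstring's contract.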
def convert_message(self, message: Union[GroupMessage, FriendMessage, GuildMessage]) -> AstrBotMessage:
abm = AstrBotMessage()
abm.type = MessageType(message.type)
abm.raw_message = message
abm.message_id = message.message_id
plain_content = ""
for i in message.message:
if isinstance(i, Plain):
plain_content += i.text
abm.message_str = plain_content.strip()
if message.type == MessageItemType.GuildMessage:
abm.self_id = str(message.self_tiny_id)
else:
abm.self_id = str(message.self_id)
abm.sender = MessageMember(
str(message.sender.user_id),
str(message.sender.nickname)
)
abm.tag = "gocq"
abm.message = message.message
return abm
def wait_for_message(self, group_id) -> Union[GroupMessage, FriendMessage, GuildMessage]:
'''
Wait for the next message; raises after a 300 s timeout.
'''
self.waiting[group_id] = ''
cnt = 0
while True:
if group_id in self.waiting and self.waiting[group_id] != '':
# consume the waited-for message
ret = self.waiting[group_id]
del self.waiting[group_id]
return ret
cnt += 1
if cnt > 300:
raise Exception("等待消息超时。")
time.sleep(1)
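The busy-wait above (seed an empty slot, poll once per second, give up after 300 iterations) can be sketched as a standalone helper. `poll_until` and `mailbox` are hypothetical names for illustration only:

```python
import time

def poll_until(mailbox: dict, key, timeout_s: int = 300, interval_s: float = 1.0):
    """Poll mailbox[key] until a non-empty value appears, like wait_for_message."""
    mailbox.setdefault(key, '')      # announce that we are waiting
    waited = 0
    while True:
        if mailbox.get(key, '') != '':
            return mailbox.pop(key)  # consume and return the delivered value
        waited += 1
        if waited > timeout_s:
            raise TimeoutError("timed out waiting for a message")
        time.sleep(interval_s)
```

Note this pattern blocks the calling thread; inside an asyncio handler it would stall the event loop, which is why the producer side fills the slot from a different task.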


@@ -1,226 +1,395 @@
import re
import time
import base64
import asyncio
import traceback
import botpy
import botpy.message
import botpy.types
import botpy.types.message
from botpy.types.message import Reference, Media
from botpy import Client
from util.io import save_temp_img, download_image_by_url
from . import Platform
from type.types import Context
from type.astrbot_message import *
from type.message_event import *
from type.command import *
from typing import Union, List, Dict
from nakuru.entities.components import *
from SparkleLogging.utils.core import LogManager
from logging import Logger
from astrbot.message.handler import MessageHandler
logger: Logger = LogManager.GetLogger(log_name='astrbot')
# official QQ bot framework adapter
class botClient(Client):
def set_platform(self, platform: 'QQOfficial'):
self.platform = platform
# fired on a group at-message
async def on_group_at_message_create(self, message: botpy.message.GroupMessage):
abm = self.platform._parse_from_qqofficial(message, MessageType.GROUP_MESSAGE)
await self.platform.handle_msg(abm)
# fired on a guild (channel) at-message
async def on_at_message_create(self, message: botpy.message.Message):
abm = self.platform._parse_from_qqofficial(message, MessageType.GUILD_MESSAGE)
await self.platform.handle_msg(abm)
# fired on a guild direct message
async def on_direct_message_create(self, message: botpy.message.DirectMessage):
abm = self.platform._parse_from_qqofficial(message, MessageType.FRIEND_MESSAGE)
await self.platform.handle_msg(abm)
# fired on a C2C (friend) message
async def on_c2c_message_create(self, message: botpy.message.C2CMessage):
abm = self.platform._parse_from_qqofficial(message, MessageType.FRIEND_MESSAGE)
await self.platform.handle_msg(abm)
class QQOfficial(Platform):
def __init__(self, context: Context, message_handler: MessageHandler, test_mode = False) -> None:
super().__init__()
self.loop = asyncio.new_event_loop()
asyncio.set_event_loop(self.loop)
self.message_handler = message_handler
self.waiting: dict = {}
self.context = context
self.appid = context.base_config['qqbot']['appid']
self.token = context.base_config['qqbot']['token']
self.secret = context.base_config['qqbot_secret']
self.unique_session = context.unique_session
qq_group = context.base_config['qqofficial_enable_group_message']
if qq_group:
self.intents = botpy.Intents(
public_messages=True,
public_guild_messages=True,
direct_message=context.base_config['direct_message_mode']
)
else:
self.intents = botpy.Intents(
public_guild_messages=True,
direct_message=context.base_config['direct_message_mode']
)
self.client = botClient(
intents=self.intents,
bot_log=False
)
self.client.set_platform(self)
self.test_mode = test_mode
async def _parse_to_qqofficial(self, message: List[BaseMessageComponent], is_group: bool = False):
plain_text = ""
image_path = None # only one img supported
for i in message:
if isinstance(i, Plain):
plain_text += i.text
elif isinstance(i, Image) and not image_path:
if i.path:
image_path = i.path
elif i.file and i.file.startswith("base64://"):
img_data = base64.b64decode(i.file[9:])
image_path = save_temp_img(img_data)
elif i.file and i.file.startswith("http"):
# group messages can pass the URL through without downloading
image_path = await download_image_by_url(i.file) if not is_group else i.file
return plain_text, image_path
def _parse_from_qqofficial(self, message: Union[botpy.message.Message, botpy.message.GroupMessage],
message_type: MessageType):
abm = AstrBotMessage()
abm.type = message_type
abm.timestamp = int(time.time())
abm.raw_message = message
abm.message_id = message.id
abm.tag = "qqchan"
msg: List[BaseMessageComponent] = []
if isinstance(message, botpy.message.GroupMessage) or isinstance(message, botpy.message.C2CMessage):
if isinstance(message, botpy.message.GroupMessage):
abm.sender = MessageMember(
message.author.member_openid,
""
)
else:
abm.sender = MessageMember(
message.author.user_openid,
""
)
abm.message_str = message.content.strip()
abm.self_id = "unknown_selfid"
msg.append(Plain(abm.message_str))
if message.attachments:
for i in message.attachments:
if i.content_type.startswith("image"):
url = i.url
if not url.startswith("http"):
url = "https://"+url
img = Image.fromURL(url)
msg.append(img)
abm.message = msg
elif isinstance(message, botpy.message.Message) or isinstance(message, botpy.message.DirectMessage):
try:
abm.self_id = str(message.mentions[0].id)
except:
abm.self_id = ""
plain_content = message.content.replace(
"<@!"+str(abm.self_id)+">", "").strip()
msg.append(Plain(plain_content))
if message.attachments:
for i in message.attachments:
if i.content_type.startswith("image"):
url = i.url
if not url.startswith("http"):
url = "https://"+url
img = Image.fromURL(url)
msg.append(img)
abm.message = msg
abm.message_str = plain_content
abm.sender = MessageMember(
str(message.author.id),
str(message.author.username)
)
else:
raise ValueError(f"Unknown message type: {message_type}")
return abm
def run(self):
try:
return self.client.start(
appid=self.appid,
secret=self.secret
)
except BaseException as e:
# early qq-botpy versions logged in with a token instead
logger.error(traceback.format_exc())
self.client = botClient(
intents=self.intents,
bot_log=False
)
self.client.set_platform(self)
return self.client.start(
appid=self.appid,
token=self.token
)
async def handle_msg(self, message: AstrBotMessage):
assert isinstance(message.raw_message, (botpy.message.Message,
botpy.message.GroupMessage, botpy.message.DirectMessage, botpy.message.C2CMessage))
is_group = message.type != MessageType.FRIEND_MESSAGE
_t = "/私聊" if not is_group else ""
logger.info(
f"{message.sender.nickname}({message.sender.user_id}{_t}) -> {self.parse_message_outline(message)}")
# derive the session_id
if message.type == MessageType.GUILD_MESSAGE:
session_id = message.raw_message.channel_id
elif message.type == MessageType.GROUP_MESSAGE:
session_id = str(message.raw_message.group_openid)
else:
session_id = str(message.raw_message.author.id)
message.session_id = session_id
# derive the sender role
sender_id = message.sender.user_id
if sender_id == self.context.config_helper.get('admin_qqchan', None) or \
sender_id in self.context.config_helper.get('other_admins', []):
role = 'admin'
else:
role = 'member'
# construct astrbot message event
ame = AstrMessageEvent.from_astrbot_message(message, self.context, "qqchan", session_id, role)
message_result = await self.message_handler.handle(ame)
if not message_result:
return
ret = await self.reply_msg(message, message_result.result_message)
if message_result.callback:
message_result.callback()
# if a caller is waiting on this session, hand the message over
if session_id in self.waiting and self.waiting[session_id] == '':
self.waiting[session_id] = message
return ret
async def reply_msg(self,
message: AstrBotMessage,
result_message: List[BaseMessageComponent]):
'''
Reply to an incoming message (passive reply).
'''
source = message.raw_message
assert isinstance(source, (botpy.message.Message,
botpy.message.GroupMessage, botpy.message.DirectMessage, botpy.message.C2CMessage))
logger.info(
f"{message.sender.nickname}({message.sender.user_id}) <- {self.parse_message_outline(result_message)}")
plain_text = ''
image_path = ''
msg_ref = None
rendered_images = []
if self.context.config_helper.get("qq_pic_mode", False) and isinstance(result_message, list):
rendered_images = await self.convert_to_t2i_chain(result_message)
if isinstance(result_message, list):
plain_text, image_path = await self._parse_to_qqofficial(result_message, message.type == MessageType.GROUP_MESSAGE)
else:
plain_text = result_message
if source and not image_path: # file_image与message_reference不能同时传入
msg_ref = Reference(message_id=source.id,
ignore_get_message_error=False)
# at this point we have plain_text, image_path and msg_ref
data = {
'content': plain_text,
'msg_id': message.message_id,
'message_reference': msg_ref
}
if isinstance(message.raw_message, botpy.message.GroupMessage):
data['group_openid'] = str(source.group_openid)
elif isinstance(message.raw_message, botpy.message.Message):
data['channel_id'] = source.channel_id
elif isinstance(message.raw_message, botpy.message.DirectMessage):
data['guild_id'] = source.guild_id
elif isinstance(message.raw_message, botpy.message.C2CMessage):
data['openid'] = source.author.user_openid
if image_path:
data['file_image'] = image_path
if rendered_images:
# text rendered to an image
_data = data.copy()
_data['content'] = ''
_data['file_image'] = rendered_images[0].file
_data['message_reference'] = None
try:
return await self._reply(**_data)
except BaseException as e:
logger.warn(traceback.format_exc())
logger.warn(f"以文本转图片的形式回复消息时发生错误: {e},将尝试默认方式。")
try:
return await self._reply(**data)
except BaseException as e:
logger.error(traceback.format_exc())
# split an over-long message in half and resend
if "msg over length" in str(e):
split_res = []
split_res.append(plain_text[:len(plain_text)//2])
split_res.append(plain_text[len(plain_text)//2:])
for i in split_res:
data['content'] = i
return await self._reply(**data)
else:
# send the message
try:
# pad dots so the QQ guild content filter does not swallow the message
plain_text = plain_text.replace(".", " . ")
return await self._reply(**data)
except BaseException as e:
try:
data['content'] = str.join(" ", plain_text)
return await self._reply(**data)
except BaseException as e:
plain_text = re.sub(
r'(https|http)?:\/\/(\w|\.|\/|\?|\=|\&|\%)*\b', '[被隐藏的链接]', str(e), flags=re.MULTILINE)
plain_text = plain_text.replace(".", "·")
data['content'] = plain_text
return await self._reply(**data)
async def _reply(self, **kwargs):
if 'group_openid' in kwargs or 'openid' in kwargs:
# QQ group / C2C message
if 'file_image' in kwargs and kwargs['file_image']:
file_image_path = kwargs['file_image'].replace("file:///", "")
if file_image_path:
if file_image_path.startswith("http"):
image_url = file_image_path
else:
logger.debug(f"上传图片: {file_image_path}")
image_url = await self.context.image_uploader.upload_image(file_image_path)
logger.debug(f"上传成功: {image_url}")
if 'group_openid' in kwargs:
media = await self.client.api.post_group_file(kwargs['group_openid'], 1, image_url)
elif 'openid' in kwargs:
media = await self.client.api.post_c2c_file(kwargs['openid'], 1, image_url)
del kwargs['file_image']
kwargs['media'] = media
logger.debug(f"发送群图片: {media}")
kwargs['msg_type'] = 7 # 富媒体
if self.test_mode:
return kwargs
if 'group_openid' in kwargs:
await self.client.api.post_group_message(**kwargs)
elif 'openid' in kwargs:
await self.client.api.post_c2c_message(**kwargs)
elif 'channel_id' in kwargs:
# guild channel message
if 'file_image' in kwargs and kwargs['file_image']:
kwargs['file_image'] = kwargs['file_image'].replace("file:///", "")
# channel messages only accept local image files
if kwargs['file_image'].startswith("http"):
kwargs['file_image'] = await download_image_by_url(kwargs['file_image'])
if self.test_mode:
return kwargs
await self.client.api.post_message(**kwargs)
elif 'guild_id' in kwargs:
# guild direct (private) message
if 'file_image' in kwargs and kwargs['file_image']:
kwargs['file_image'] = kwargs['file_image'].replace("file:///", "")
if kwargs['file_image'].startswith("http"):
kwargs['file_image'] = await download_image_by_url(kwargs['file_image'])
if self.test_mode:
return kwargs
await self.client.api.post_dms(**kwargs)
else:
raise ValueError("Unknown target type.")
async def send_msg(self, target: Dict[str, str], result_message: CommandResult):
'''
Proactively send a message to a guild user, a group, a guild channel, or a QQ (C2C) user.
`target` takes a dict:
- for a QQ group, add key `group_openid`;
- for a guild channel, add key `channel_id`;
- for a guild direct message, add key `guild_id`;
- for a QQ (C2C) user, add key `openid`.
'''
plain_text, image_path = await self._parse_to_qqofficial(result_message.message_chain)
payload = {
'content': plain_text,
**target
}
if image_path:
payload['file_image'] = image_path
await self._reply(**payload)
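The payload construction above (merge the caller's `target` keys into the API payload, then attach the optional image) can be shown as a self-contained sketch; `build_payload` is a hypothetical name, not an AstrBot function:

```python
# Sketch of how send_msg assembles the _reply kwargs from content + target.
def build_payload(content: str, target: dict, image_path: str = None) -> dict:
    payload = {'content': content, **target}  # target keys select the API branch
    if image_path:
        payload['file_image'] = image_path    # only set when an image exists
    return payload
```

Because `_reply` dispatches on which key is present (`group_openid`, `channel_id`, `guild_id`, `openid`), the merged dict alone determines the destination API.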
def wait_for_message(self, channel_id: int) -> AstrBotMessage:
'''
Wait for the next message in channel_id; raises after a 300 s timeout.
'''
@@ -235,4 +404,4 @@ class QQOfficial(Platform):
cnt += 1
if cnt > 300:
raise Exception("等待消息超时。")
time.sleep(1)

model/plugin/command.py
@@ -0,0 +1,25 @@
from dataclasses import dataclass
from type.register import RegisteredPlugins
from typing import List, Union, Callable
from SparkleLogging.utils.core import LogManager
from logging import Logger
logger: Logger = LogManager.GetLogger(log_name='astrbot')
@dataclass
class CommandRegisterRequest():
command_name: str
description: str
priority: int
handler: Callable
plugin_name: str = None
class PluginCommandBridge():
def __init__(self, cached_plugins: RegisteredPlugins):
self.plugin_commands_waitlist: List[CommandRegisterRequest] = []
self.cached_plugins = cached_plugins
def register_command(self, plugin_name, command_name, description, priority, handler):
self.plugin_commands_waitlist.append(CommandRegisterRequest(command_name, description, priority, handler, plugin_name))
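The bridge queues command registrations until the core drains the waitlist. A self-contained sketch of the same pattern (with `cached_plugins` stubbed out, and a hypothetical `demo_plugin` as the caller):

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class CommandRegisterRequest:
    command_name: str
    description: str
    priority: int
    handler: Callable
    plugin_name: str = None

class PluginCommandBridge:
    def __init__(self, cached_plugins=None):
        # requests sit here until the core registers them as real commands
        self.plugin_commands_waitlist: List[CommandRegisterRequest] = []
        self.cached_plugins = cached_plugins

    def register_command(self, plugin_name, command_name, description, priority, handler):
        self.plugin_commands_waitlist.append(
            CommandRegisterRequest(command_name, description, priority, handler, plugin_name))

# example registration from a plugin
bridge = PluginCommandBridge()
bridge.register_command("demo_plugin", "hello", "greet the user", 10, lambda event: "hi")
```

Deferring registration this way lets plugins declare commands at import time without depending on the core's command registry being ready.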

model/plugin/manager.py
@@ -0,0 +1,256 @@
import inspect
import os
import sys
import traceback
import uuid
import shutil
import yaml
from util.updator.plugin_updator import PluginUpdator
from util.io import remove_dir, download_file
from types import ModuleType
from type.types import Context
from type.plugin import *
from type.register import *
from SparkleLogging.utils.core import LogManager
from logging import Logger
logger: Logger = LogManager.GetLogger(log_name='astrbot')
class PluginManager():
def __init__(self, context: Context):
self.updator = PluginUpdator()
self.plugin_store_path = self.updator.get_plugin_store_path()
self.context = context
def get_classes(self, arg: ModuleType):
classes = []
clsmembers = inspect.getmembers(arg, inspect.isclass)
for (name, _) in clsmembers:
if name.lower().endswith("plugin") or name.lower() == "main":
classes.append(name)
break
return classes
def get_modules(self, path):
modules = []
dirs = os.listdir(path)
# walk the folder: a plugin needs main.py or a file named after its directory
for d in dirs:
if os.path.isdir(os.path.join(path, d)):
if os.path.exists(os.path.join(path, d, "main.py")):
module_str = 'main'
elif os.path.exists(os.path.join(path, d, d + ".py")):
module_str = d
else:
print(f"插件 {d} 未找到 main.py 或者 {d}.py跳过。")
continue
if os.path.exists(os.path.join(path, d, "main.py")) or os.path.exists(os.path.join(path, d, d + ".py")):
modules.append({
"pname": d,
"module": module_str,
"module_path": os.path.join(path, d, module_str)
})
return modules
def get_plugin_modules(self):
plugins = []
try:
plugin_dir = self.plugin_store_path
if os.path.exists(plugin_dir):
plugins = self.get_modules(plugin_dir)
return plugins
except BaseException as e:
raise e
def check_plugin_dept_update(self, target_plugin: str = None):
plugin_dir = self.plugin_store_path
if not os.path.exists(plugin_dir):
return False
to_update = []
if target_plugin:
to_update.append(target_plugin)
else:
for p in self.context.cached_plugins:
to_update.append(p.root_dir_name)
for p in to_update:
plugin_path = os.path.join(plugin_dir, p)
if os.path.exists(os.path.join(plugin_path, "requirements.txt")):
pth = os.path.join(plugin_path, "requirements.txt")
logger.info(f"正在检查更新插件 {p} 的依赖: {pth}")
self.update_plugin_dept(os.path.join(plugin_path, "requirements.txt"))
def update_plugin_dept(self, path):
mirror = "https://mirrors.aliyun.com/pypi/simple/"
py = sys.executable
os.system(f"{py} -m pip install -r {path} -i {mirror} --quiet")
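Building the pip invocation as an argument list (and running it with `subprocess.run` rather than `os.system`) avoids shell-quoting problems such as paths containing spaces. A sketch, with `pip_install_cmd` as a hypothetical helper reusing the mirror URL from the method above:

```python
import sys

def pip_install_cmd(requirements_path: str,
                    mirror: str = "https://mirrors.aliyun.com/pypi/simple/"):
    """Build the pip invocation that update_plugin_dept shells out to."""
    return [sys.executable, "-m", "pip", "install",
            "-r", requirements_path, "-i", mirror, "--quiet"]
```

The list form can then be passed to `subprocess.run(cmd, check=True)`; each element reaches pip verbatim, so no escaping is needed.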
def install_plugin(self, repo_url: str):
ppath = self.plugin_store_path
# we no longer use Git anymore :)
# Repo.clone_from(repo_url, to_path=plugin_path, branch='master')
plugin_path = self.updator.update(repo_url)
with open(os.path.join(plugin_path, "REPO"), "w", encoding='utf-8') as f:
f.write(repo_url)
ok, err = self.plugin_reload()
if not ok:
raise Exception(err)
def download_from_repo_url(self, target_path: str, repo_url: str):
repo_namespace = repo_url.split("/")[-2:]
author = repo_namespace[0]
repo = repo_namespace[1]
logger.info(f"正在下载插件 {repo} ...")
release_url = f"https://api.github.com/repos/{author}/{repo}/releases"
releases = self.updator.fetch_release_info(url=release_url)
if not releases:
# download from the default branch directly.
logger.warn(f"未在插件 {author}/{repo} 中找到任何发布版本,将从默认分支下载。")
release_url = f"https://github.com/{author}/{repo}/archive/refs/heads/master.zip"
else:
release_url = releases[0]['zipball_url']
download_file(release_url, target_path + ".zip")
def get_registered_plugin(self, plugin_name: str) -> RegisteredPlugin:
for p in self.context.cached_plugins:
if p.metadata.plugin_name == plugin_name:
return p
def uninstall_plugin(self, plugin_name: str):
plugin = self.get_registered_plugin(plugin_name)
if not plugin:
raise Exception("插件不存在。")
root_dir_name = plugin.root_dir_name
ppath = self.plugin_store_path
self.context.cached_plugins.remove(plugin)
if not remove_dir(os.path.join(ppath, root_dir_name)):
raise Exception("移除插件成功,但是删除插件文件夹失败。您可以手动删除该文件夹,位于 addons/plugins/ 下。")
def update_plugin(self, plugin_name: str):
plugin = self.get_registered_plugin(plugin_name)
if not plugin:
raise Exception("插件不存在。")
self.updator.update(plugin)
def plugin_reload(self):
cached_plugins = self.context.cached_plugins
plugins = self.get_plugin_modules()
if plugins is None:
return False, "未找到任何插件模块"
fail_rec = ""
registered_map = {}
for p in cached_plugins:
registered_map[p.module_path] = None
for plugin in plugins:
try:
p = plugin['module']
module_path = plugin['module_path']
root_dir_name = plugin['pname']
# self.check_plugin_dept_update(cached_plugins, root_dir_name)
module = __import__("addons.plugins." +
root_dir_name + "." + p, fromlist=[p])
cls = self.get_classes(module)
try:
# try passing the context in
obj = getattr(module, cls[0])(context=self.context)
except:
obj = getattr(module, cls[0])()
metadata = None
plugin_path = os.path.join(self.plugin_store_path, root_dir_name)
metadata = self.load_plugin_metadata(plugin_path=plugin_path, plugin_obj=obj)
logger.info(f"插件 {metadata.plugin_name}({metadata.author}) 加载成功。")
if module_path not in registered_map:
cached_plugins.append(RegisteredPlugin(
metadata=metadata,
plugin_instance=obj,
module=module,
module_path=module_path,
root_dir_name=root_dir_name
))
except BaseException as e:
traceback.print_exc()
fail_rec += f"加载{p}插件出现问题,原因 {str(e)}\n"
if not fail_rec:
return True, None
else:
return False, fail_rec
def install_plugin_from_file(self, zip_file_path: str):
# try to unzip
temp_dir = os.path.join(os.path.dirname(zip_file_path), str(uuid.uuid4()))
self.updator.unzip_file(zip_file_path, temp_dir)
# check if the plugin has metadata.yaml
if not os.path.exists(os.path.join(temp_dir, "metadata.yaml")):
remove_dir(temp_dir)
raise Exception("插件缺少 metadata.yaml 文件。")
metadata = self.load_plugin_metadata(temp_dir)
plugin_name = metadata.plugin_name
if not plugin_name:
remove_dir(temp_dir)
raise Exception("插件 metadata.yaml 文件中 name 字段为空。")
plugin_name = self.updator.format_name(plugin_name)
ppath = self.plugin_store_path
plugin_path = os.path.join(ppath, plugin_name)
if os.path.exists(plugin_path):
remove_dir(plugin_path)
# move to the target path
shutil.move(temp_dir, plugin_path)
if metadata.repo:
with open(os.path.join(plugin_path, "REPO"), "w", encoding='utf-8') as f:
f.write(metadata.repo)
# remove the temp dir
remove_dir(temp_dir)
ok, err = self.plugin_reload()
if not ok:
raise Exception(err)
def load_plugin_metadata(self, plugin_path: str, plugin_obj = None) -> PluginMetadata:
metadata = None
if not os.path.exists(plugin_path):
raise Exception("插件不存在。")
if os.path.exists(os.path.join(plugin_path, "metadata.yaml")):
with open(os.path.join(plugin_path, "metadata.yaml"), "r", encoding='utf-8') as f:
metadata = yaml.safe_load(f)
elif plugin_obj:
# fall back to the plugin's info() method
metadata = plugin_obj.info()
if isinstance(metadata, dict):
if 'name' not in metadata or 'desc' not in metadata or 'version' not in metadata or 'author' not in metadata:
raise Exception("插件元数据信息不完整。")
metadata = PluginMetadata(
plugin_name=metadata['name'],
plugin_type=PluginType.COMMON if 'plugin_type' not in metadata else PluginType(metadata['plugin_type']),
author=metadata['author'],
desc=metadata['desc'],
version=metadata['version'],
repo=metadata['repo'] if 'repo' in metadata else None
)
return metadata
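`load_plugin_metadata` above requires the `name`, `desc`, `version`, and `author` fields and optionally accepts `plugin_type` and `repo`. A minimal `metadata.yaml` consistent with those checks (all values are illustrative, and the `plugin_type` value assumes the enum accepts `common`):

```yaml
name: helloworld                              # required
desc: An example plugin                       # required
version: v1.0                                 # required
author: Soulter                               # required
plugin_type: common                           # optional
repo: https://github.com/Soulter/helloworld   # optional, written to the REPO file
```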


@@ -5,87 +5,114 @@ import time
import tiktoken
import threading
import traceback
import base64
from openai import AsyncOpenAI
from openai.types.images_response import ImagesResponse
from openai.types.chat.chat_completion import ChatCompletion
from openai._exceptions import *
from astrbot.persist.helper import dbConn
from model.provider.provider import Provider
from util import general_utils as gu
from util.cmd_config import CmdConfig
from SparkleLogging.utils.core import LogManager
from logging import Logger
from typing import List, Dict
from type.types import Context
logger: Logger = LogManager.GetLogger(log_name='astrbot')
abs_path = os.path.dirname(os.path.realpath(sys.argv[0])) + '/'
MODELS = {
"gpt-4o": 128000,
"gpt-4o-2024-05-13": 128000,
"gpt-4-turbo": 128000,
"gpt-4-turbo-2024-04-09": 128000,
"gpt-4-turbo-preview": 128000,
"gpt-4-0125-preview": 128000,
"gpt-4-1106-preview": 128000,
"gpt-4-vision-preview": 128000,
"gpt-4-1106-vision-preview": 128000,
"gpt-4": 8192,
"gpt-4-0613": 8192,
"gpt-4-32k": 32768,
"gpt-4-32k-0613": 32768,
"gpt-3.5-turbo-0125": 16385,
"gpt-3.5-turbo": 16385,
"gpt-3.5-turbo-1106": 16385,
"gpt-3.5-turbo-instruct": 4096,
"gpt-3.5-turbo-16k": 16385,
"gpt-3.5-turbo-0613": 16385,
"gpt-3.5-turbo-16k-0613": 16385,
}
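The `MODELS` table maps each model name to its context window in tokens; `personality_set` below uses it to clip a persona prompt to `MODELS[model] - 500`, reserving room for the reply. A standalone sketch of that clamp (`clamp_prompt` is a hypothetical name, operating on already-encoded token ids):

```python
def clamp_prompt(tokens: list, model: str, models: dict, reserve: int = 500) -> list:
    """Drop tokens beyond the model's context window, keeping `reserve` for the reply."""
    limit = models.get(model)
    if limit is None:
        return tokens                    # unknown model: leave the prompt untouched
    return tokens[:max(limit - reserve, 0)]
```

With `gpt-4` (8192-token window) a 9000-token prompt is cut to 7692 tokens, matching the `- 500` reservation in the code below.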
class ProviderOpenAIOfficial(Provider):
def __init__(self, context: Context) -> None:
super().__init__()
os.makedirs("data/openai", exist_ok=True)
self.cc = CmdConfig
self.key_data_path = "data/openai/keys.json"
self.api_keys = []
self.chosen_api_key = None
self.base_url = None
self.keys_data = {} # tracks quota-exhausted keys
cfg = context.base_config['openai']
if cfg['key']: self.api_keys = cfg['key']
if cfg['api_base']: self.base_url = cfg['api_base']
if not self.api_keys:
logger.warn("看起来你没有添加 OpenAI 的 API 密钥OpenAI LLM 能力将不会启用。")
else:
self.chosen_api_key = self.api_keys[0]
for key in self.api_keys:
self.keys_data[key] = True
# create the OpenAI client
self.client = AsyncOpenAI(
api_key=self.chosen_api_key,
base_url=self.base_url
)
self.openai_configs = cfg
self.model_configs: Dict = cfg['chatGPTConfigs']
super().set_curr_model(self.model_configs['model'])
self.image_generator_model_configs: Dict = self.cc.get('openai_image_generate', None)
self.session_memory: Dict[str, List] = {} # per-session conversation memory
self.session_memory_lock = threading.Lock()
self.max_tokens = self.model_configs['max_tokens'] # context window size
logger.info("正在载入分词器 cl100k_base...")
self.tokenizer = tiktoken.get_encoding("cl100k_base") # todo: pick the tokenizer based on the model
logger.info("分词器载入完成。")
self.DEFAULT_PERSONALITY = context.default_personality
self.curr_personality = self.DEFAULT_PERSONALITY
self.session_personality = {} # whether a session already had its personality applied
# load conversation history from the SQLite DB
try:
db1 = dbConn()
for session in db1.get_all_session():
self.session_memory_lock.acquire()
self.session_memory[session[0]] = json.loads(session[1])['data']
self.session_memory_lock.release()
except BaseException as e:
self.logger.log("读取历史记录失败,但不影响使用。", level=gu.LEVEL_ERROR, tag="OpenAI")
# 创建转储定时器线程
logger.warn(f"读取 OpenAI LLM 对话历史记录 失败{e}。仍可正常使用。")
# 定时保存历史记录
threading.Thread(target=self.dump_history, daemon=True).start()
def dump_history(self):
'''
Periodically persist conversation history to the DB.
'''
time.sleep(10)
db = dbConn()
while True:
try:
for key in self.session_memory:
data = self.session_memory[key]
data_json = {
'data': data
}
@@ -93,310 +120,389 @@ class ProviderOpenAIOfficial(Provider):
db.update_session(key, json.dumps(data_json))
else:
db.insert_session(key, json.dumps(data_json))
logger.debug("已保存 OpenAI 会话历史记录")
except BaseException as e:
print(e)
finally:
time.sleep(10*60)
def personality_set(self, default_personality: dict, session_id: str):
if not default_personality: return
if session_id not in self.session_memory:
self.session_memory[session_id] = []
self.curr_personality = default_personality
self.session_personality = {} # 重置
encoded_prompt = self.tokenizer.encode(default_personality['prompt'])
tokens_num = len(encoded_prompt)
model = self.model_configs['model']
if model in MODELS and tokens_num > MODELS[model] - 500:
default_personality['prompt'] = self.tokenizer.decode(encoded_prompt[:MODELS[model] - 500])
new_record = {
"user": {
"role": "user",
"role": "system",
"content": default_personality['prompt'],
},
"AI": {
"role": "assistant",
"content": "好的,接下来我会扮演这个角色。"
},
'type': "personality",
'usage_tokens': 0, # cumulative tokens up to this record
'single-tokens': 0 # tokens of this record alone
}
self.session_dict[session_id].append(new_record)
async def text_chat(self, prompt,
session_id = None,
image_url = None,
function_call=None,
extra_conf: dict = None,
default_personality: dict = None):
if session_id is None:
session_id = "unknown"
if "unknown" in self.session_dict:
del self.session_dict["unknown"]
# 会话机制
if session_id not in self.session_dict:
self.session_dict[session_id] = []
self.session_memory[session_id].append(new_record)
async def encode_image_bs64(self, image_url: str) -> str:
'''
将图片转换为 base64
'''
if image_url.startswith("http"):
image_url = await gu.download_image_by_url(image_url)
if len(self.session_dict[session_id]) == 0:
# 设置默认人格
if default_personality is not None:
self.personality_set(default_personality, session_id)
with open(image_url, "rb") as f:
image_bs64 = base64.b64encode(f.read()).decode()
return "data:image/jpeg;base64," + image_bs64
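The new `encode_image_bs64` boils down to reading bytes and wrapping them in a data URL. A standalone synchronous sketch, with the `download_image_by_url` step omitted (`encode_image_file_bs64` is a hypothetical helper name, not part of the codebase):

```python
import base64

def encode_image_file_bs64(path: str) -> str:
    # Read a local image and wrap it as an OpenAI-style data URL,
    # mirroring the async method above minus the HTTP download step.
    with open(path, "rb") as f:
        image_bs64 = base64.b64encode(f.read()).decode()
    return "data:image/jpeg;base64," + image_bs64
```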
# 使用 tiktoken 截断消息
_encoded_prompt = self.enc.encode(prompt)
if self.openai_model_configs['max_tokens'] < len(_encoded_prompt):
prompt = self.enc.decode(_encoded_prompt[:int(self.openai_model_configs['max_tokens']*0.80)])
self.logger.log(f"注意,有一部分 prompt 文本由于超出 token 限制而被截断。", level=gu.LEVEL_WARNING, tag="OpenAI")
cache_data_list, new_record, req = self.wrap(prompt, session_id, image_url)
self.logger.log(f"cache: {str(cache_data_list)}", level=gu.LEVEL_DEBUG, tag="OpenAI")
self.logger.log(f"request: {str(req)}", level=gu.LEVEL_DEBUG, tag="OpenAI")
retry = 0
response = None
err = ''
# 截断倍率
truncate_rate = 0.75
use_gpt4v = False
for i in req:
if isinstance(i['content'], list):
use_gpt4v = True
break
if image_url is not None:
use_gpt4v = True
if use_gpt4v:
conf = self.openai_model_configs.copy()
conf['model'] = 'gpt-4-vision-preview'
else:
conf = self.openai_model_configs
if extra_conf is not None:
conf.update(extra_conf)
while retry < 10:
try:
if function_call is None:
response = await self.client.chat.completions.create(
messages=req,
**conf
)
else:
response = await self.client.chat.completions.create(
messages=req,
tools = function_call,
**conf
)
break
except Exception as e:
traceback.print_exc()
if 'Invalid content type. image_url is only supported by certain models.' in str(e):
raise e
if 'You exceeded' in str(e) or 'Billing hard limit has been reached' in str(e) or 'No API key provided' in str(e) or 'Incorrect API key provided' in str(e):
self.logger.log("当前 Key 已超额或异常, 正在切换", level=gu.LEVEL_WARNING, tag="OpenAI")
self.key_stat[self.client.api_key]['exceed'] = True
is_switched = self.handle_switch_key()
if not is_switched:
raise e
retry -= 1
elif 'maximum context length' in str(e):
self.logger.log("token 超限, 清空对应缓存,并进行消息截断", tag="OpenAI")
self.session_dict[session_id] = []
prompt = prompt[:int(len(prompt)*truncate_rate)]
truncate_rate -= 0.05
cache_data_list, new_record, req = self.wrap(prompt, session_id)
elif 'Limit: 3 / min. Please try again in 20s.' in str(e) or "OpenAI response error" in str(e):
time.sleep(30)
async def retrieve_context(self, session_id: str):
'''
根据 session_id 获取保存的 OpenAI 格式的上下文
'''
if session_id not in self.session_memory:
raise Exception("会话 ID 不存在")
# 转换为 openai 要求的格式
context = []
is_lvm = await self.is_lvm()
for record in self.session_memory[session_id]:
if "user" in record and record['user']:
if not is_lvm and "content" in record['user'] and isinstance(record['user']['content'], list):
logger.warn(f"由于当前模型 {self.model_configs['model']} 不支持视觉,将忽略上下文中的图片输入。如果一直弹出此警告,可以尝试 reset 指令。")
continue
else:
self.logger.log(str(e), level=gu.LEVEL_ERROR, tag="OpenAI")
time.sleep(2)
err = str(e)
retry += 1
if retry >= 10:
self.logger.log(r"如果报错, 且您的机器在中国大陆内, 请确保您的电脑已经设置好代理软件(梯子), 并在配置文件设置了系统代理地址。详见 https://github.com/Soulter/QQChannelChatGPT/wiki", tag="OpenAI")
raise BaseException("连接出错: "+str(err))
assert isinstance(response, ChatCompletion)
self.logger.log(f"OPENAI RESPONSE: {response.usage}", level=gu.LEVEL_DEBUG, tag="OpenAI")
context.append(record['user'])
if "AI" in record and record['AI']:
context.append(record['AI'])
# 结果分类
choice = response.choices[0]
if choice.message.content != None:
# 文本形式
chatgpt_res = str(choice.message.content).strip()
elif choice.message.tool_calls != None and len(choice.message.tool_calls) > 0:
return context
async def is_lvm(self):
'''
是否是 LVM
'''
return self.model_configs['model'].startswith("gpt-4")
async def get_models(self):
try:
models = await self.client.models.list()
except NotFoundError as e:
bu = str(self.client.base_url)
self.client.base_url = bu + "/v1"
models = await self.client.models.list()
return filter(lambda x: x.id.startswith("gpt"), models.data)
async def assemble_context(self, session_id: str, prompt: str, image_url: str = None):
'''
组装上下文,并且根据当前上下文窗口大小截断
'''
if session_id not in self.session_memory:
raise Exception("会话 ID 不存在")
tokens_num = len(self.tokenizer.encode(prompt))
previous_total_tokens_num = 0 if not self.session_memory[session_id] else self.session_memory[session_id][-1]['usage_tokens']
message = {
"usage_tokens": previous_total_tokens_num + tokens_num,
"single_tokens": tokens_num,
"AI": None
}
if image_url:
user_content = {
"role": "user",
"content": [
{
"type": "text",
"text": prompt
},
{
"type": "image_url",
"image_url": {
"url": await self.encode_image_bs64(image_url)
}
}
]
}
else:
user_content = {
"role": "user",
"content": prompt
}
message["user"] = user_content
self.session_memory[session_id].append(message)
# 根据 模型的上下文窗口 淘汰掉多余的记录
curr_model = self.model_configs['model']
if curr_model in MODELS:
maxium_tokens_num = MODELS[curr_model] - 300 # 至少预留 300 给 completion
# if message['usage_tokens'] > maxium_tokens_num:
# 淘汰多余的记录,使得最终的 usage_tokens 不超过 maxium_tokens_num - 300
# contexts = self.session_memory[session_id]
# need_to_remove_idx = 0
# freed_tokens_num = contexts[0]['single-tokens']
# while freed_tokens_num < message['usage_tokens'] - maxium_tokens_num:
# need_to_remove_idx += 1
# freed_tokens_num += contexts[need_to_remove_idx]['single-tokens']
# # 更新之后的所有记录的 usage_tokens
# for i in range(len(contexts)):
# if i > need_to_remove_idx:
# contexts[i]['usage_tokens'] -= freed_tokens_num
# logger.debug(f"淘汰上下文记录 {need_to_remove_idx+1} 条,释放 {freed_tokens_num} 个 token。当前上下文总 token 为 {contexts[-1]['usage_tokens']}。")
# self.session_memory[session_id] = contexts[need_to_remove_idx+1:]
while len(self.session_memory[session_id]) and self.session_memory[session_id][-1]['usage_tokens'] > maxium_tokens_num:
await self.pop_record(session_id)
async def pop_record(self, session_id: str, pop_system_prompt: bool = False):
'''
弹出第一条记录
'''
if session_id not in self.session_memory:
raise Exception("会话 ID 不存在")
if len(self.session_memory[session_id]) == 0:
return None
for i in range(len(self.session_memory[session_id])):
# 检查是否是 system prompt
if not pop_system_prompt and self.session_memory[session_id][i]['user']['role'] == "system":
# 如果只有一个 system prompt才不删掉
f = False
for j in range(i+1, len(self.session_memory[session_id])):
if self.session_memory[session_id][j]['user']['role'] == "system":
f = True
break
if not f:
continue
record = self.session_memory[session_id].pop(i)
break
# 更新之后所有记录的 usage_tokens
for i in range(len(self.session_memory[session_id])):
self.session_memory[session_id][i]['usage_tokens'] -= record['single-tokens']
logger.debug(f"淘汰上下文记录 1 条,释放 {record['single-tokens']} 个 token。当前上下文总 token 为 {self.session_memory[session_id][-1]['usage_tokens']}")
return record
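Stripped of the system-prompt special case, the bookkeeping in `pop_record` amounts to popping the earliest record and rebasing every later running total by the freed amount. A minimal sketch (`pop_first_record` is a hypothetical standalone helper):

```python
def pop_first_record(records):
    # Each record carries 'usage_tokens' (running total up to and
    # including this record) and 'single-tokens' (this record alone).
    if not records:
        return None
    record = records.pop(0)
    freed = record['single-tokens']
    for r in records:
        r['usage_tokens'] -= freed  # rebase later running totals
    return record
```

Because every record stores a running total, evicting one record only costs a single subtraction per remaining record.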
async def text_chat(self,
prompt: str,
session_id: str,
image_url: None=None,
tools: None=None,
extra_conf: Dict = None,
**kwargs
) -> str:
super().accu_model_stat()
if not session_id:
session_id = "unknown"
if "unknown" in self.session_memory:
del self.session_memory["unknown"]
if session_id not in self.session_memory:
self.session_memory[session_id] = []
if session_id not in self.session_personality or not self.session_personality[session_id]:
self.personality_set(self.curr_personality, session_id)
self.session_personality[session_id] = True
# 如果 prompt 超过了最大窗口,截断。
# 1. 可以保证之后 pop 的时候不会出现问题
# 2. 可以保证不会超过最大 token 数
_encoded_prompt = self.tokenizer.encode(prompt)
curr_model = self.model_configs['model']
if curr_model in MODELS and len(_encoded_prompt) > MODELS[curr_model] - 300:
_encoded_prompt = _encoded_prompt[:MODELS[curr_model] - 300]
prompt = self.tokenizer.decode(_encoded_prompt)
# 组装上下文,并且根据当前上下文窗口大小截断
await self.assemble_context(session_id, prompt, image_url)
# 获取上下文openai 格式
contexts = await self.retrieve_context(session_id)
conf = self.model_configs
if extra_conf: conf.update(extra_conf)
# start request
retry = 0
rate_limit_retry = 0
while retry < 3 or rate_limit_retry < 5:
logger.debug(conf)
logger.debug(contexts)
if tools:
completion_coro = self.client.chat.completions.create(
messages=contexts,
tools=tools,
**conf
)
else:
completion_coro = self.client.chat.completions.create(
messages=contexts,
**conf
)
try:
completion = await completion_coro
break
except AuthenticationError as e:
api_key = self.chosen_api_key[:10] + "..."
logger.error(f"OpenAI API Key {api_key} 验证错误。详细原因:{e}。正在切换到下一个可用的 Key(如果有的话)")
self.keys_data[self.chosen_api_key] = False
ok = await self.switch_to_next_key()
if ok: continue
else: raise Exception("所有 OpenAI API Key 目前都不可用。")
except BadRequestError as e:
logger.warn(f"OpenAI 请求异常:{e}")
if "image_url is only supported by certain models." in str(e):
raise Exception(f"当前模型 { self.model_configs['model'] } 不支持图片输入,请更换模型。")
retry += 1
except RateLimitError as e:
if "You exceeded your current quota" in str(e):
self.keys_data[self.chosen_api_key] = False
ok = await self.switch_to_next_key()
if ok: continue
else: raise Exception("所有 OpenAI API Key 目前都不可用。")
logger.error(f"OpenAI API Key {self.chosen_api_key} 达到请求速率限制或者官方服务器当前超载。详细原因:{e}")
await self.switch_to_next_key()
rate_limit_retry += 1
time.sleep(1)
except Exception as e:
retry += 1
if retry >= 3:
logger.error(traceback.format_exc())
raise Exception(f"OpenAI 请求失败:{e}。重试次数已达到上限。")
if "maximum context length" in str(e):
logger.warn(f"OpenAI 请求失败:{e}。上下文长度超过限制。尝试弹出最早的记录然后重试。")
await self.pop_record(session_id)
logger.warning(traceback.format_exc())
logger.warning(f"OpenAI 请求失败:{e}。重试第 {retry} 次。")
time.sleep(1)
assert isinstance(completion, ChatCompletion)
logger.debug(f"openai completion: {completion.usage}")
choice = completion.choices[0]
usage_tokens = completion.usage.total_tokens
completion_tokens = completion.usage.completion_tokens
self.session_memory[session_id][-1]['usage_tokens'] = usage_tokens
self.session_memory[session_id][-1]['single_tokens'] += completion_tokens
if choice.message.content:
# 返回文本
completion_text = str(choice.message.content).strip()
elif choice.message.tool_calls and len(choice.message.tool_calls) > 0:
# tools call (function calling)
return choice.message.tool_calls[0].function
self.key_stat[self.client.api_key]['used'] += response.usage.total_tokens
current_usage_tokens = response.usage.total_tokens
# 超过指定tokens 尽可能的保留最多的条目直到小于max_tokens
if current_usage_tokens > self.max_tokens:
t = current_usage_tokens
index = 0
while t > self.max_tokens:
if index >= len(cache_data_list):
break
# 保留人格信息
if cache_data_list[index]['type'] != 'personality':
t -= int(cache_data_list[index]['single_tokens'])
del cache_data_list[index]
else:
index += 1
# 删除完后更新相关字段
self.session_dict[session_id] = cache_data_list
# 添加新条目进入缓存的prompt
new_record['AI'] = {
'role': 'assistant',
'content': chatgpt_res,
self.session_memory[session_id][-1]['AI'] = {
"role": "assistant",
"content": completion_text
}
new_record['usage_tokens'] = current_usage_tokens
if len(cache_data_list) > 0:
new_record['single_tokens'] = current_usage_tokens - int(cache_data_list[-1]['usage_tokens'])
else:
new_record['single_tokens'] = current_usage_tokens
cache_data_list.append(new_record)
self.session_dict[session_id] = cache_data_list
return chatgpt_res
async def image_chat(self, prompt, img_num = 1, img_size = "1024x1024"):
retry = 0
image_url = ''
image_generate_configs = self.cc.get("openai_image_generate", None)
while retry < 5:
try:
response: ImagesResponse = await self.client.images.generate(
prompt=prompt,
**image_generate_configs
)
image_url = []
for i in range(img_num):
image_url.append(response.data[i].url)
break
except Exception as e:
self.logger.log(str(e), level=gu.LEVEL_ERROR)
if 'You exceeded' in str(e) or 'Billing hard limit has been reached' in str(
e) or 'No API key provided' in str(e) or 'Incorrect API key provided' in str(e):
self.logger.log("当前 Key 已超额或者不正常, 正在切换", level=gu.LEVEL_WARNING, tag="OpenAI")
self.key_stat[self.client.api_key]['exceed'] = True
is_switched = self.handle_switch_key()
if not is_switched:
raise e
elif 'Your request was rejected as a result of our safety system.' in str(e):
self.logger.log("您的请求被 OpenAI 安全系统拒绝, 请稍后再试", level=gu.LEVEL_WARNING, tag="OpenAI")
raise e
else:
retry += 1
if retry >= 5:
raise BaseException("连接超时")
return image_url
async def forget(self, session_id = None) -> bool:
if session_id is None:
return completion_text
async def switch_to_next_key(self):
'''
切换到下一个 API Key
'''
if not self.api_keys:
logger.error("OpenAI API Key 不存在。")
return False
self.session_dict[session_id] = []
for key in self.keys_data:
if self.keys_data[key]:
# 没超额
self.chosen_api_key = key
self.client.api_key = key
logger.info(f"OpenAI 切换到 API Key {key[:10]}... 成功。")
return True
return False
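The key rotation above is a linear scan for the first key still flagged usable. As a standalone sketch (`next_usable_key` is a hypothetical helper, assuming the same `keys_data` shape as in this class):

```python
def next_usable_key(keys_data: dict):
    # keys_data maps api_key -> bool (set False once the key is
    # quota-exceeded or rejected by authentication).
    for key, usable in keys_data.items():
        if usable:
            return key
    return None  # every key has been marked unusable
```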
async def image_generate(self, prompt: str, session_id: str = None, **kwargs) -> str:
'''
生成图片
'''
retry = 0
conf = self.image_generator_model_configs
if not conf:
logger.error("OpenAI 图片生成模型配置不存在。")
raise Exception("OpenAI 图片生成模型配置不存在。")
super().accu_model_stat(model=conf['model'])
while retry < 3:
try:
images_response = await self.client.images.generate(
prompt=prompt,
**conf
)
image_url = images_response.data[0].url
return image_url
except Exception as e:
retry += 1
if retry >= 3:
logger.error(traceback.format_exc())
raise Exception(f"OpenAI 图片生成请求失败:{e}。重试次数已达到上限。")
logger.warning(f"OpenAI 图片生成请求失败:{e}。重试第 {retry} 次。")
time.sleep(1)
async def forget(self, session_id=None, keep_system_prompt: bool=False) -> bool:
if session_id is None: return False
self.session_memory[session_id] = []
if keep_system_prompt:
self.personality_set(self.curr_personality, session_id)
else:
self.curr_personality = self.DEFAULT_PERSONALITY
return True
def get_prompts_by_cache_list(self, cache_data_list, divide=False, paging=False, size=5, page=1):
def dump_contexts_page(self, session_id: str, size=5, page=1):
'''
获取缓存的会话
'''
prompts = ""
if paging:
page_begin = (page-1)*size
page_end = page*size
if page_begin < 0:
page_begin = 0
if page_end > len(cache_data_list):
page_end = len(cache_data_list)
cache_data_list = cache_data_list[page_begin:page_end]
for item in cache_data_list:
prompts += str(item['user']['role']) + ":\n" + str(item['user']['content']) + "\n"
prompts += str(item['AI']['role']) + ":\n" + str(item['AI']['content']) + "\n"
# contexts_str = ""
# for i, key in enumerate(self.session_memory):
# if i < (page-1)*size or i >= page*size:
# continue
# contexts_str += f"Session ID: {key}\n"
# for record in self.session_memory[key]:
# if "user" in record:
# contexts_str += f"User: {record['user']['content']}\n"
# if "AI" in record:
# contexts_str += f"AI: {record['AI']['content']}\n"
# contexts_str += "---\n"
contexts_str = ""
if session_id in self.session_memory:
for record in self.session_memory[session_id]:
if "user" in record and record['user']:
text = record['user']['content'][:100] + "..." if len(record['user']['content']) > 100 else record['user']['content']
contexts_str += f"User: {text}\n"
if "AI" in record and record['AI']:
text = record['AI']['content'][:100] + "..." if len(record['AI']['content']) > 100 else record['AI']['content']
contexts_str += f"Assistant: {text}\n"
else:
contexts_str = "会话 ID 不存在。"
if divide:
prompts += "----------\n"
return prompts
return contexts_str, len(self.session_memory.get(session_id, []))
def wrap(self, prompt, session_id, image_url = None):
if image_url is not None:
prompt = [
{
"type": "text",
"text": prompt
},
{
"type": "image_url",
"image_url": {
"url": image_url
}
}
]
# 获得缓存信息
context = self.session_dict[session_id]
new_record = {
"user": {
"role": "user",
"content": prompt,
},
"AI": {},
'type': "common",
'usage_tokens': 0,
}
req_list = []
for i in context:
if 'user' in i:
req_list.append(i['user'])
if 'AI' in i:
req_list.append(i['AI'])
req_list.append(new_record['user'])
return context, new_record, req_list
def set_model(self, model: str):
self.model_configs['model'] = model
self.cc.put_by_dot_str("openai.chatGPTConfigs.model", model)
super().set_curr_model(model)
def handle_switch_key(self):
is_all_exceed = True
for key in self.key_stat:
if key == None or self.key_stat[key]['exceed']:
continue
is_all_exceed = False
self.client.api_key = key
self.logger.log(f"切换到 Key: {key}(已使用 token: {self.key_stat[key]['used']})", level=gu.LEVEL_INFO, tag="OpenAI")
break
if is_all_exceed:
self.logger.log("所有 Key 已超额", level=gu.LEVEL_CRITICAL, tag="OpenAI")
return False
return True
def get_configs(self):
return self.openai_configs
def get_key_stat(self):
return self.key_stat
def get_key_list(self):
return self.key_list
def get_curr_key(self):
return self.client.api_key
def set_key(self, key):
self.client.api_key = key
# 添加key
def append_key(self, key, sponsor):
self.key_list.append(key)
self.key_stat[key] = {'exceed': False, 'used': 0, 'sponsor': sponsor}
return self.model_configs
# 检查key是否可用
async def check_key(self, key):
client_ = AsyncOpenAI(
api_key=key,
base_url=self.api_base
)
messages = [{"role": "user", "content": "please just echo `test`"}]
await client_.chat.completions.create(
messages=messages,
**self.openai_model_configs
)
return True
def get_keys_data(self):
return self.keys_data
def get_curr_key(self):
return self.chosen_api_key
def set_key(self, key):
self.client.api_key = key


@@ -1,9 +1,32 @@
from collections import defaultdict
class Provider:
async def text_chat(self,
prompt: str,
session_id: str,
image_url: None,
function_call: None,
def __init__(self) -> None:
self.model_stat = defaultdict(int) # 用于记录 LLM Model 使用数据
self.curr_model_name = "unknown"
def reset_model_stat(self):
self.model_stat.clear()
def set_curr_model(self, model_name: str):
self.curr_model_name = model_name
def get_curr_model(self):
'''
返回当前正在使用的 LLM
'''
return self.curr_model_name
def accu_model_stat(self, model: str = None):
if not model:
model = self.get_curr_model()
self.model_stat[model] += 1
async def text_chat(self,
prompt: str,
session_id: str,
image_url: None = None,
tools: None = None,
extra_conf: dict = None,
default_personality: dict = None,
**kwargs) -> str:
@@ -11,25 +34,25 @@ class Provider:
[require]
prompt: 提示词
session_id: 会话id
[optional]
image_url: 图片url识图
function_call: 函数调用
tools: 函数调用工具
extra_conf: 额外配置
default_personality: 默认人格
'''
raise NotImplementedError
raise NotImplementedError()
async def image_generate(self, prompt, session_id, **kwargs) -> str:
'''
[require]
prompt: 提示词
session_id: 会话id
'''
raise NotImplementedError
raise NotImplementedError()
async def forget(self, session_id = None) -> bool:
async def forget(self, session_id=None) -> bool:
'''
重置会话
'''
raise NotImplementedError
raise NotImplementedError()


@@ -1,224 +0,0 @@
from revChatGPT.V1 import Chatbot
from revChatGPT import typings
from model.provider.provider import Provider
from util import general_utils as gu
from util import cmd_config as cc
import time
class ProviderRevChatGPT(Provider):
def __init__(self, config, base_url = None):
if base_url == "":
base_url = None
self.rev_chatgpt: list[dict] = []
self.cc = cc.CmdConfig()
for i in range(0, len(config['account'])):
try:
gu.log(f"创建逆向ChatGPT负载{str(i+1)}中...", level=gu.LEVEL_INFO, tag="RevChatGPT")
if isinstance(config['account'][i], str):
# 默认是 access_token
rev_account_config = {
'access_token': config['account'][i],
}
else:
if 'password' in config['account'][i]:
gu.log(f"创建逆向ChatGPT负载{str(i+1)}失败: 已不支持账号密码登录请使用access_token方式登录。", level=gu.LEVEL_ERROR, tag="RevChatGPT")
continue
rev_account_config = {
'access_token': config['account'][i]['access_token'],
}
if self.cc.get("rev_chatgpt_model") != "":
rev_account_config['model'] = self.cc.get("rev_chatgpt_model")
if len(self.cc.get("rev_chatgpt_plugin_ids")) > 0:
rev_account_config['plugin_ids'] = self.cc.get("rev_chatgpt_plugin_ids")
if self.cc.get("rev_chatgpt_PUID") != "":
rev_account_config['PUID'] = self.cc.get("rev_chatgpt_PUID")
if len(self.cc.get("rev_chatgpt_unverified_plugin_domains")) > 0:
rev_account_config['unverified_plugin_domains'] = self.cc.get("rev_chatgpt_unverified_plugin_domains")
cb = Chatbot(config=rev_account_config, base_url=base_url)
# cb.captcha_solver = self.__captcha_solver
# 取 access_token 后八位作为标识
g_id = rev_account_config['access_token'][-8:]
revstat = {
'id': g_id,
'obj': cb,
'busy': False,
'user': []
}
self.rev_chatgpt.append(revstat)
except BaseException as e:
gu.log(f"创建逆向ChatGPT负载{str(i+1)}失败: {str(e)}", level=gu.LEVEL_ERROR, tag="RevChatGPT")
def forget(self, session_id = None) -> bool:
for i in self.rev_chatgpt:
for user in i['user']:
if session_id == user['id']:
try:
i['obj'].reset_chat()
return True
except BaseException as e:
gu.log(f"重置RevChatGPT失败。原因: {str(e)}", level=gu.LEVEL_ERROR, tag="RevChatGPT")
return False
return False
def get_revchatgpt(self) -> list:
return self.rev_chatgpt
def request_text(self, prompt: str, bot) -> str:
resp = ''
err_count = 0
retry_count = 5
while err_count < retry_count:
try:
for data in bot.ask(prompt):
resp = data["message"]
break
except typings.Error as e:
if e.code == typings.ErrorType.INVALID_ACCESS_TOKEN_ERROR:
raise e
if e.code == typings.ErrorType.EXPIRED_ACCESS_TOKEN_ERROR:
raise e
if e.code == typings.ErrorType.PROHIBITED_CONCURRENT_QUERY_ERROR:
raise e
if "Your authentication token has expired. Please try signing in again." in str(e):
raise e
if "The message you submitted was too long" in str(e):
raise e
if "You've reached our limit of messages per hour." in str(e):
raise e
if "Rate limited by proxy" in str(e):
gu.log(f"触发请求频率限制, 60秒后自动重试。", level=gu.LEVEL_WARNING, tag="RevChatGPT")
time.sleep(60)
err_count += 1
gu.log(f"请求异常: {str(e)},正在重试。({str(err_count)})", level=gu.LEVEL_WARNING, tag="RevChatGPT")
if err_count >= retry_count:
raise e
except BaseException as e:
err_count += 1
gu.log(f"请求异常: {str(e)},正在重试。({str(err_count)})", level=gu.LEVEL_WARNING, tag="RevChatGPT")
if err_count >= retry_count:
raise e
if resp == '':
resp = "RevChatGPT请求异常。"
# print("[RevChatGPT] "+str(resp))
return resp
def text_chat(self, prompt,
session_id = None,
image_url = None,
function_call=None,
extra_conf: dict = None,
default_personality: dict = None) -> str:
# 选择一个人少的账号。
selected_revstat = None
min_revstat = None
min_ = None
new_user = False
conversation_id = ''
parent_id = ''
for revstat in self.rev_chatgpt:
for user in revstat['user']:
if session_id == user['id']:
selected_revstat = revstat
conversation_id = user['conversation_id']
parent_id = user['parent_id']
break
if min_ is None:
min_ = len(revstat['user'])
min_revstat = revstat
elif len(revstat['user']) < min_:
min_ = len(revstat['user'])
min_revstat = revstat
# if session_id in revstat['user']:
# selected_revstat = revstat
# break
if selected_revstat is None:
selected_revstat = min_revstat
selected_revstat['user'].append({
'id': session_id,
'conversation_id': '',
'parent_id': ''
})
new_user = True
gu.log(f"选择账号{str(selected_revstat)}", tag="RevChatGPT", level=gu.LEVEL_DEBUG)
while selected_revstat['busy']:
gu.log(f"账号忙碌,等待中...", tag="RevChatGPT", level=gu.LEVEL_DEBUG)
time.sleep(1)
selected_revstat['busy'] = True
if not new_user:
# 非新用户,则使用其专用的会话
selected_revstat['obj'].conversation_id = conversation_id
selected_revstat['obj'].parent_id = parent_id
else:
# 新用户,则使用新的会话
selected_revstat['obj'].reset_chat()
res = ''
err_msg = ''
err_cnt = 0
while err_cnt < 15:
try:
res = self.request_text(prompt, selected_revstat['obj'])
selected_revstat['busy'] = False
# 记录新用户的会话
if new_user:
i = 0
for user in selected_revstat['user']:
if user['id'] == session_id:
selected_revstat['user'][i]['conversation_id'] = selected_revstat['obj'].conversation_id
selected_revstat['user'][i]['parent_id'] = selected_revstat['obj'].parent_id
break
i += 1
return res.strip()
except BaseException as e:
if "Your authentication token has expired. Please try signing in again." in str(e):
raise Exception(f"此账号(access_token后8位为{selected_revstat['id']})的access_token已过期请重新获取或者切换账号。")
if "The message you submitted was too long" in str(e):
raise Exception("发送的消息太长,请分段发送。")
if "You've reached our limit of messages per hour." in str(e):
raise Exception("触发RevChatGPT请求频率限制。请1小时后再试或者切换账号。")
gu.log(f"请求异常: {str(e)}", level=gu.LEVEL_WARNING, tag="RevChatGPT")
err_cnt += 1
time.sleep(3)
raise Exception(f'回复失败。原因:{err_msg}。如果您设置了多个账号,可以使用/switch指令切换账号。输入/switch查看详情。')
# while self.is_all_busy():
# time.sleep(1)
# res = ''
# err_msg = ''
# cursor = 0
# for revstat in self.rev_chatgpt:
# cursor += 1
# if not revstat['busy']:
# try:
# revstat['busy'] = True
# res = self.request_text(prompt, revstat['obj'])
# revstat['busy'] = False
# return res.strip()
# # todo: 细化错误管理
# except BaseException as e:
# revstat['busy'] = False
# gu.log(f"请求出现问题: {str(e)}", level=gu.LEVEL_WARNING, tag="RevChatGPT")
# err_msg += f"账号{cursor} - 错误原因: {str(e)}"
# continue
# else:
# err_msg += f"账号{cursor} - 错误原因: 忙碌"
# continue
# raise Exception(f'回复失败。错误跟踪:{err_msg}')
def is_all_busy(self) -> bool:
for revstat in self.rev_chatgpt:
if not revstat['busy']:
return False
return True


@@ -1,18 +1,19 @@
pydantic~=1.10.4
aiohttp
requests
openai~=1.2.3
openai
qq-botpy
chardet~=5.1.0
Pillow~=9.4.0
GitPython~=3.1.31
Pillow
nakuru-project
beautifulsoup4
googlesearch-python
tiktoken
readability-lxml
revChatGPT~=6.8.6
baidu-aip~=4.16.9
baidu-aip
websockets
flask
psutil
lxml_html_clean
SparkleLogging
aiocqhttp

Binary file not shown.

type/astrbot_message.py Normal file

@@ -0,0 +1,37 @@
import time
from enum import Enum
from typing import List
from dataclasses import dataclass
from nakuru.entities.components import BaseMessageComponent
class MessageType(Enum):
GROUP_MESSAGE = 'GroupMessage' # 群组形式的消息
FRIEND_MESSAGE = 'FriendMessage' # 私聊、好友等单聊消息
GUILD_MESSAGE = 'GuildMessage' # 频道消息
@dataclass
class MessageMember():
user_id: str # 发送者id
nickname: str = None
class AstrBotMessage():
'''
AstrBot 的消息对象
'''
tag: str # 消息来源标签
type: MessageType # 消息类型
self_id: str # 机器人的识别id
session_id: str # 会话id
message_id: str # 消息id
sender: MessageMember # 发送者
message: List[BaseMessageComponent] # 消息链使用 Nakuru 的消息链格式
message_str: str # 最直观的纯文本消息字符串
raw_message: object
timestamp: int # 消息时间戳
def __init__(self) -> None:
self.timestamp = int(time.time())
def __str__(self) -> str:
return str(self.__dict__)

type/command.py Normal file

@@ -0,0 +1,76 @@
from typing import Union, List, Callable
from dataclasses import dataclass
from nakuru.entities.components import Plain, Image
@dataclass
class CommandItem():
'''
用来描述单个指令
'''
command_name: Union[str, tuple] # 指令名
callback: Callable # 回调函数
description: str # 描述
origin: str # 注册来源
class CommandResult():
'''
用于在Command中返回多个值
'''
def __init__(self, hit: bool = True, success: bool = True, message_chain: list = None, command_name: str = "unknown_command") -> None:
self.hit = hit
self.success = success
self.message_chain = message_chain if message_chain is not None else []
self.command_name = command_name
self.is_use_t2i = None # default
def message(self, message: str):
'''
快捷回复消息。
CommandResult().message("Hello, world!")
'''
self.message_chain = [Plain(message), ]
return self
def error(self, message: str):
'''
快捷回复错误消息。
CommandResult().error("Hello, world!")
'''
self.success = False
self.message_chain = [Plain(message), ]
return self
def url_image(self, url: str):
'''
快捷回复图片(网络url的格式)。
CommandResult().url_image("https://example.com/image.jpg")
'''
self.message_chain = [Image.fromURL(url), ]
return self
def file_image(self, path: str):
'''
快捷回复图片(本地文件路径的格式)。
CommandResult().image("image.jpg")
'''
self.message_chain = [Image.fromFileSystem(path), ]
return self
# def use_t2i(self, use_t2i: bool):
# '''
# 设置是否使用文本转图片服务。如果不设置,则跟随用户的设置。
# CommandResult().use_t2i(False)
# '''
# self.is_use_t2i = use_t2i
# return self
def _result_tuple(self):
return (self.success, self.message_chain, self.command_name)
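The fluent style `CommandResult` enables can be shown standalone. A trimmed-down sketch, with a stand-in `Plain` component since nakuru's real class is assumed unavailable here (both classes below are illustrative re-declarations, not the project's code):

```python
class Plain:
    # Stand-in for nakuru's Plain message component.
    def __init__(self, text: str):
        self.text = text

class CommandResult:
    # Each helper mutates the result and returns self,
    # so a handler can build its reply in one expression.
    def __init__(self, hit=True, success=True, message_chain=None,
                 command_name="unknown_command"):
        self.hit = hit
        self.success = success
        self.message_chain = message_chain if message_chain is not None else []
        self.command_name = command_name

    def message(self, message: str):
        self.message_chain = [Plain(message)]
        return self

    def error(self, message: str):
        self.success = False
        self.message_chain = [Plain(message)]
        return self

result = CommandResult().error("unknown subcommand")
```

A handler can then simply `return CommandResult().message("done")` without assembling a message chain by hand.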

type/config.py Normal file

@@ -0,0 +1 @@
VERSION = '3.3.5'

type/message_event.py Normal file

@@ -0,0 +1,52 @@
from typing import List, Union, Optional
from dataclasses import dataclass
from type.register import RegisteredPlatform
from type.types import Context
from type.astrbot_message import AstrBotMessage
class AstrMessageEvent():
def __init__(self,
message_str: str,
message_obj: AstrBotMessage,
platform: RegisteredPlatform,
role: str,
context: Context,
session_id: str = None):
'''
AstrBot 消息事件。
`message_str`: 纯消息字符串
`message_obj`: AstrBotMessage 对象
`platform`: 平台对象
`role`: 角色,`admin` or `member`
`context`: 全局对象
`session_id`: 会话id
'''
self.context = context
self.message_str = message_str
self.message_obj = message_obj
self.platform = platform
self.role = role
self.session_id = session_id
@staticmethod
def from_astrbot_message(message: AstrBotMessage,
context: Context,
platform_name: str,
session_id: str,
role: str = "member"):
ame = AstrMessageEvent(message.message_str,
message,
context.find_platform(platform_name),
role,
context,
session_id)
return ame
@dataclass
class MessageResult():
result_message: Union[str, list]
is_command_call: Optional[bool] = False
use_t2i: Optional[bool] = None # None 为跟随用户设置
callback: Optional[callable] = None

type/plugin.py Normal file

@@ -0,0 +1,53 @@
from enum import Enum
from types import ModuleType
from typing import List
from dataclasses import dataclass
class PluginType(Enum):
PLATFORM = 'platform' # 平台类插件。
LLM = 'llm' # 大语言模型类插件
COMMON = 'common' # 其他插件
@dataclass
class PluginMetadata:
'''
插件的元数据。
'''
# required
plugin_name: str
plugin_type: PluginType
author: str # 插件作者
desc: str # 插件简介
version: str # 插件版本
# optional
repo: str = None # 插件仓库地址
def __str__(self) -> str:
return f"PluginMetadata({self.plugin_name}, {self.plugin_type}, {self.desc}, {self.version}, {self.repo})"
@dataclass
class RegisteredPlugin:
'''
注册在 AstrBot 中的插件。
'''
metadata: PluginMetadata
plugin_instance: object
module_path: str
module: ModuleType
root_dir_name: str
trig_cnt: int = 0
def reset_trig_cnt(self):
self.trig_cnt = 0
def trig(self):
self.trig_cnt += 1
def __str__(self) -> str:
return f"RegisteredPlugin({self.metadata}, {self.module_path}, {self.root_dir_name})"
RegisteredPlugins = List[RegisteredPlugin]

type/register.py Normal file

@@ -0,0 +1,27 @@
from model.provider.provider import Provider as LLMProvider
from model.platform import Platform
from type.plugin import *
from typing import List
from dataclasses import dataclass
@dataclass
class RegisteredPlatform:
'''
注册在 AstrBot 中的平台。平台应当实现 Platform 接口。
'''
platform_name: str
platform_instance: Platform
origin: str = None # 注册来源
def __str__(self) -> str:
return self.platform_name
@dataclass
class RegisteredLLM:
'''
注册在 AstrBot 中的大语言模型调用。大语言模型应当实现 LLMProvider 接口。
'''
llm_name: str
llm_instance: LLMProvider
origin: str = None # 注册来源

type/types.py Normal file

@@ -0,0 +1,86 @@
import asyncio
from asyncio import Task
from type.register import *
from typing import List, Awaitable
from logging import Logger
from util.cmd_config import CmdConfig
from util.t2i.renderer import TextToImageRenderer
from util.updator.astrbot_updator import AstrBotUpdator
from util.image_uploader import ImageUploader
from util.updator.plugin_updator import PluginUpdator
from model.plugin.command import PluginCommandBridge
from model.provider.provider import Provider
class Context:
'''
存放一些公用的数据,用于在不同模块(如core与command)之间传递
'''
def __init__(self):
self.logger: Logger = None
self.base_config: dict = None # 配置(期望启动机器人后是不变的)
self.config_helper: CmdConfig = None
self.cached_plugins: List[RegisteredPlugin] = [] # 缓存的插件
self.platforms: List[RegisteredPlatform] = []
self.llms: List[RegisteredLLM] = []
self.default_personality: dict = None
self.unique_session = False  # per-user (independent) sessions
self.version: str = None  # bot version
self.nick = None  # wake words for gocq
self.stat = {}
self.t2i_mode = False
self.web_search = False  # whether web search is enabled
self.reply_prefix = ""
self.updator: AstrBotUpdator = None
self.plugin_updator: PluginUpdator = None
self.metrics_uploader = None
self.plugin_command_bridge = PluginCommandBridge(self.cached_plugins)
self.image_renderer = TextToImageRenderer()
self.image_uploader = ImageUploader()
self.message_handler = None # see astrbot/message/handler.py
self.ext_tasks: List[Task] = []
def register_commands(self,
plugin_name: str,
command_name: str,
description: str,
priority: int,
handler: callable):
'''
Register a plugin command.
`plugin_name`: the plugin name; it must match the one in your metadata.
`command_name`: the command name, e.g. "help". No prefix needed.
`description`: the command description.
`priority`: higher priority is handled first. A sensible value is between 1 and 10.
`handler`: the command handler. Its parameters are message: AstrMessageEvent, context: Context.
'''
self.plugin_command_bridge.register_command(plugin_name, command_name, description, priority, handler)
def register_task(self, coro: Awaitable, task_name: str):
'''
Register a task. Intended for plugins that need to run for a long time.
`coro`: the coroutine object
`task_name`: a name used to identify the task; any custom string works.
'''
task = asyncio.create_task(coro, name=task_name)
self.ext_tasks.append(task)
def register_provider(self, llm_name: str, provider: Provider, origin: str = ''):
'''
Register a Provider that supplies LLM resources.
`provider`: a Provider object. Your implementation must subclass Provider and implement at least the text_chat() method.
'''
self.llms.append(RegisteredLLM(llm_name, provider, origin))
def find_platform(self, platform_name: str) -> RegisteredPlatform:
for platform in self.platforms:
if platform_name == platform.platform_name:
return platform
raise ValueError("couldn't find the platform you specified")


@@ -3,30 +3,35 @@ import json
import util.general_utils as gu
import time
class FuncCallJsonFormatError(Exception):
def __init__(self, msg):
self.msg = msg
def __str__(self):
return self.msg
class FuncNotFoundError(Exception):
def __init__(self, msg):
self.msg = msg
def __str__(self):
return self.msg
class FuncCall():
def __init__(self, provider) -> None:
self.func_list = []
self.provider = provider
def add_func(self, name: str = None, func_args: list = None, desc: str = None, func_obj = None) -> None:
def add_func(self, name: str = None, func_args: list = None, desc: str = None, func_obj=None) -> None:
if name == None or func_args == None or desc == None or func_obj == None:
raise FuncCallJsonFormatError("name, func_args, desc and func_obj must be provided.")
raise FuncCallJsonFormatError(
"name, func_args, desc and func_obj must be provided.")
params = {
"type": "object", # hardcoded here
"type": "object",  # hardcoded here
"properties": {}
}
for param in func_args:
@@ -51,7 +56,7 @@ class FuncCall():
"description": f["description"],
})
return json.dumps(_l, indent=intent, ensure_ascii=False)
def get_func(self) -> list:
_l = []
for f in self.func_list:
@@ -64,8 +69,8 @@ class FuncCall():
}
})
return _l
def func_call(self, question, func_definition, is_task = False, tasks = None, taskindex = -1, is_summary = True, session_id = None):
def func_call(self, question, func_definition, is_task=False, tasks=None, taskindex=-1, is_summary=True, session_id=None):
funccall_prompt = """
我正在实现 function call 功能,该功能旨在让你变成给定的问题到给定的函数的解析器,意味着你不是创造函数
@@ -117,10 +122,11 @@ class FuncCall():
_c = 0
while _c < 3:
try:
res = self.provider.text_chat(prompt, session_id)
res = self.provider.text_chat(prompt=prompt, session_id=session_id)
if res.find('```') != -1:
res = res[res.find('```json') + 7: res.rfind('```')]
gu.log("REVGPT func_call json result", bg=gu.BG_COLORS["green"], fg=gu.FG_COLORS["white"])
gu.log("REVGPT func_call json result",
bg=gu.BG_COLORS["green"], fg=gu.FG_COLORS["white"])
print(res)
res = json.loads(res)
break
@@ -151,11 +157,13 @@ class FuncCall():
func_target = func["func_obj"]
break
if func_target == None:
raise FuncNotFoundError(f"Request function {func_name} not found.")
raise FuncNotFoundError(
f"Request function {func_name} not found.")
t_res = str(func_target(**args))
invoke_func_res += f"{func_name} 调用结果:\n```\n{t_res}\n```\n"
invoke_func_res_list.append(invoke_func_res)
gu.log(f"[FUNC| {func_name} invoked]", bg=gu.BG_COLORS["green"], fg=gu.FG_COLORS["white"])
gu.log(f"[FUNC| {func_name} invoked]",
bg=gu.BG_COLORS["green"], fg=gu.FG_COLORS["white"])
# print(str(t_res))
if is_summary:
@@ -179,14 +187,18 @@ class FuncCall():
_c = 0
while _c < 5:
try:
res = self.provider.text_chat(after_prompt, session_id)
res = self.provider.text_chat(prompt=after_prompt, session_id=session_id)
# extract the content between the fences
gu.log("DEBUG BEGIN", bg=gu.BG_COLORS["yellow"], fg=gu.FG_COLORS["white"])
gu.log(
"DEBUG BEGIN", bg=gu.BG_COLORS["yellow"], fg=gu.FG_COLORS["white"])
print(res)
gu.log("DEBUG END", bg=gu.BG_COLORS["yellow"], fg=gu.FG_COLORS["white"])
gu.log(
"DEBUG END", bg=gu.BG_COLORS["yellow"], fg=gu.FG_COLORS["white"])
if res.find('```') != -1:
res = res[res.find('```json') + 7: res.rfind('```')]
gu.log("REVGPT after_func_call json result", bg=gu.BG_COLORS["green"], fg=gu.FG_COLORS["white"])
res = res[res.find('```json') +
7: res.rfind('```')]
gu.log("REVGPT after_func_call json result",
bg=gu.BG_COLORS["green"], fg=gu.FG_COLORS["white"])
after_prompt_res = res
after_prompt_res = json.loads(after_prompt_res)
break
@@ -197,7 +209,8 @@ class FuncCall():
if "The message you submitted was too long" in str(e):
# if the returned content is too long, keep only the first half
time.sleep(3)
invoke_func_res = invoke_func_res[:int(len(invoke_func_res) / 2)]
invoke_func_res = invoke_func_res[:int(
len(invoke_func_res) / 2)]
after_prompt = """
函数返回以下内容"""+invoke_func_res+"""
请以AI助手的身份结合返回的内容对用户提问做详细全面的回答
@@ -218,11 +231,13 @@ class FuncCall():
if "func_call_again" in after_prompt_res and after_prompt_res["func_call_again"]:
# if the function needs to be invoked again,
# call it once more
gu.log("REVGPT func_call_again", bg=gu.BG_COLORS["purple"], fg=gu.FG_COLORS["white"])
gu.log("REVGPT func_call_again",
bg=gu.BG_COLORS["purple"], fg=gu.FG_COLORS["white"])
res = self.func_call(question, func_definition)
return res, True
gu.log("REVGPT func callback:", bg=gu.BG_COLORS["green"], fg=gu.FG_COLORS["white"])
gu.log("REVGPT func callback:",
bg=gu.BG_COLORS["green"], fg=gu.FG_COLORS["white"])
# print(after_prompt_res["res"])
return after_prompt_res["res"], True
else:
@@ -230,8 +245,3 @@ class FuncCall():
else:
# print(res["res"])
return res["res"], False
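Both retry loops in FuncCall recover a JSON payload from a fenced block in the LLM reply with the same slice: skip the seven characters of the opening json fence and stop at the last closing fence. A self-contained sketch of that extraction (the fence string is built programmatically only so this example can itself sit inside a fenced block):

```python
import json

FENCE = "`" * 3  # the three-backtick fence, built so this example stays fence-safe

def extract_json_block(reply: str):
    # mirror the slicing in FuncCall: skip the 7 characters of the opening
    # json fence, stop at the last closing fence; otherwise parse as-is
    start = reply.find(FENCE + "json")
    if start != -1:
        reply = reply[start + 7: reply.rfind(FENCE)]
    return json.loads(reply)

reply = ("Sure, here is the call:\n" + FENCE + "json\n"
         + '{"name": "web_search", "args": {"keyword": "weather"}}'
         + "\n" + FENCE)
parsed = extract_json_block(reply)
print(parsed["name"])  # web_search
```

The `+ 7` works only because `len("json") + 3 == 7`; a stray fence without the `json` tag falls through to plain `json.loads`, just as in the original.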

183
util/agent/web_searcher.py Normal file

@@ -0,0 +1,183 @@
import traceback
import random
import json
import asyncio
import aiohttp
import os
from readability import Document
from bs4 import BeautifulSoup
from openai.types.chat.chat_completion_message_tool_call import Function
from util.agent.func_call import FuncCall
from util.websearch.config import HEADERS, USER_AGENTS
from util.websearch.bing import Bing
from util.websearch.sogo import Sogo
from util.websearch.google import Google
from model.provider.provider import Provider
from SparkleLogging.utils.core import LogManager
from logging import Logger
logger: Logger = LogManager.GetLogger(log_name='astrbot')
bing_search = Bing()
sogo_search = Sogo()
google = Google()
proxy = os.environ.get("HTTPS_PROXY", None)
def tidy_text(text: str) -> str:
'''
Clean up text: strip surrounding whitespace and turn newlines into spaces.
'''
return text.strip().replace("\n", " ").replace("\r", " ").replace("  ", " ")
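The double-space argument of the last `replace` appears collapsed in this extraction; assuming the original was two spaces, the helper behaves like this restated copy. Note the single pass only halves longer runs of spaces rather than collapsing them fully:

```python
def tidy_text(text: str) -> str:
    # restated cleanup: strip the ends, turn newlines/carriage returns into
    # spaces, collapse double spaces (one pass only)
    return text.strip().replace("\n", " ").replace("\r", " ").replace("  ", " ")

print(tidy_text(" hello\nworld "))  # hello world
```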
# def special_fetch_zhihu(link: str) -> str:
# '''
# function-calling helper for fetching the content of Zhihu articles
# '''
# response = requests.get(link, headers=HEADERS)
# response.encoding = "utf-8"
# soup = BeautifulSoup(response.text, "html.parser")
# if "zhuanlan.zhihu.com" in link:
# r = soup.find(class_="Post-RichTextContainer")
# else:
# r = soup.find(class_="List-item").find(class_="RichContent-inner")
# if r is None:
# print("debug: zhihu none")
# raise Exception("zhihu none")
# return tidy_text(r.text)
async def search_from_bing(keyword: str) -> str:
'''
Tool: search the web. Despite the name, it tries Google first, then falls back to Bing and Sogou.
'''
logger.info("web_searcher - search_from_bing: " + keyword)
results = []
try:
results = await google.search(keyword, 5)
except BaseException as e:
logger.error(f"google search error: {e}, try the next one...")
if len(results) == 0:
logger.debug("search google failed")
try:
results = await bing_search.search(keyword, 5)
except BaseException as e:
logger.error(f"bing search error: {e}, try the next one...")
if len(results) == 0:
logger.debug("search bing failed")
try:
results = await sogo_search.search(keyword, 5)
except BaseException as e:
logger.error(f"sogo search error: {e}")
if len(results) == 0:
logger.debug("search sogo failed")
return "没有搜索到结果"
ret = ""
idx = 1
for i in results:
logger.info(f"web_searcher - scraping web: {i.title} - {i.url}")
try:
site_result = await fetch_website_content(i.url)
except:
site_result = ""
site_result = site_result[:600] + "..." if len(site_result) > 600 else site_result
ret += f"{idx}. {i.title} \n{i.snippet}\n{site_result}\n\n"
idx += 1
return ret
async def fetch_website_content(url):
header = dict(HEADERS)  # copy so the shared HEADERS dict is not mutated
header.update({'User-Agent': random.choice(USER_AGENTS)})
async with aiohttp.ClientSession() as session:
async with session.get(url, headers=header, timeout=6, proxy=proxy) as response:
html = await response.text(encoding="utf-8")
doc = Document(html)
ret = doc.summary(html_partial=True)
soup = BeautifulSoup(ret, 'html.parser')
ret = tidy_text(soup.get_text())
return ret
async def web_search(prompt, provider: Provider, session_id, official_fc=False):
'''
official_fc: use the provider's official function-calling
'''
new_func_call = FuncCall(provider)
new_func_call.add_func("web_search", [{
"type": "string",
"name": "keyword",
"description": "搜索关键词"
}],
"通过搜索引擎搜索。如果问题需要获取近期、实时的消息,在网页上搜索(如天气、新闻或任何需要通过网页获取信息的问题),则调用此函数;如果没有,不要调用此函数。",
search_from_bing
)
new_func_call.add_func("fetch_website_content", [{
"type": "string",
"name": "url",
"description": "要获取内容的网页链接"
}],
"获取网页的内容。如果问题带有合法的网页链接并且用户有需求了解网页内容(例如: `帮我总结一下 https://github.com 的内容`), 就调用此函数。如果没有,不要调用此函数。",
fetch_website_content
)
has_func = False
function_invoked_ret = ""
if official_fc:
# we use official function-calling
result = await provider.text_chat(prompt=prompt, session_id=session_id, tools=new_func_call.get_func())
if isinstance(result, Function):
logger.debug(f"web_searcher - function-calling: {result}")
func_obj = None
for i in new_func_call.func_list:
if i["name"] == result.name:
func_obj = i["func_obj"]
break
if not func_obj:
return await provider.text_chat(prompt=prompt, session_id=session_id, ) + "\n(网页搜索失败, 此为默认回复)"
try:
args = json.loads(result.arguments)
function_invoked_ret = await func_obj(**args)
has_func = True
except BaseException as e:
traceback.print_exc()
return await provider.text_chat(prompt=prompt, session_id=session_id, ) + "\n(网页搜索失败, 此为默认回复)"
else:
return result
else:
# we use our own function-calling
try:
args = {
'question': prompt,
'func_definition': new_func_call.func_dump(),
'is_task': False,
'is_summary': False,
}
function_invoked_ret, has_func = await asyncio.to_thread(new_func_call.func_call, **args)
except BaseException as e:
res = await provider.text_chat(prompt) + "\n(网页搜索失败, 此为默认回复)"
return res
has_func = True
if has_func:
await provider.forget(session_id=session_id, )
summary_prompt = f"""
你是一个专业且高效的助手,你的任务是
1. 根据下面的相关材料对用户的问题 `{prompt}` 进行总结;
2. 简单地发表你对这个问题的简略看法。
# 例子
1. 从网上的信息来看,可以知道...我个人认为...你觉得呢?
2. 根据网上的最新信息,可以得知...我觉得...你怎么看?
# 限制
1. 限制在 200 字以内;
2. 请**直接输出总结**,不要输出多余的内容和提示语。
# 相关材料
{function_invoked_ret}"""
ret = await provider.text_chat(prompt=summary_prompt, session_id=session_id)
return ret
return function_invoked_ret


@@ -2,7 +2,7 @@ import os
import json
from typing import Union
cpath = "cmd_config.json"
cpath = "data/cmd_config.json"
def check_exist():
if not os.path.exists(cpath):
@@ -21,13 +21,13 @@ class CmdConfig():
return d[key]
else:
return default
@staticmethod
def get_all():
check_exist()
with open(cpath, "r", encoding="utf-8-sig") as f:
return json.load(f)
@staticmethod
def put(key, value):
check_exist()
@@ -37,7 +37,7 @@ class CmdConfig():
with open(cpath, "w", encoding="utf-8-sig") as f:
json.dump(d, f, indent=4, ensure_ascii=False)
f.flush()
@staticmethod
def put_by_dot_str(key: str, value):
'''
@@ -58,11 +58,11 @@ class CmdConfig():
f.flush()
@staticmethod
def init_attributes(key: Union[str, list], init_val = ""):
def init_attributes(key: Union[str, list], init_val=""):
check_exist()
conf_str = ''
with open(cpath, "r", encoding="utf-8-sig") as f:
conf_str = f.read()
conf_str = f.read()
if conf_str.startswith(u'\ufeff'):
conf_str = conf_str.encode('utf8')[3:].decode('utf8')
d = json.loads(conf_str)
@@ -81,40 +81,3 @@ class CmdConfig():
with open(cpath, "w", encoding="utf-8-sig") as f:
json.dump(d, f, indent=4, ensure_ascii=False)
f.flush()
def init_astrbot_config_items():
# load the default config
cc = CmdConfig()
cc.init_attributes("qq_forward_threshold", 200)
cc.init_attributes("qq_welcome", "欢迎加入本群!\n欢迎给https://github.com/Soulter/QQChannelChatGPT项目一个Star😊~\n输入help查看帮助~\n")
cc.init_attributes("qq_pic_mode", False)
cc.init_attributes("rev_chatgpt_model", "")
cc.init_attributes("rev_chatgpt_plugin_ids", [])
cc.init_attributes("rev_chatgpt_PUID", "")
cc.init_attributes("rev_chatgpt_unverified_plugin_domains", [])
cc.init_attributes("gocq_host", "127.0.0.1")
cc.init_attributes("gocq_http_port", 5700)
cc.init_attributes("gocq_websocket_port", 6700)
cc.init_attributes("gocq_react_group", True)
cc.init_attributes("gocq_react_guild", True)
cc.init_attributes("gocq_react_friend", True)
cc.init_attributes("gocq_react_group_increase", True)
cc.init_attributes("other_admins", [])
cc.init_attributes("CHATGPT_BASE_URL", "")
cc.init_attributes("qqbot_secret", "")
cc.init_attributes("admin_qq", "")
cc.init_attributes("nick_qq", ["!", "", "ai"])
cc.init_attributes("admin_qqchan", "")
cc.init_attributes("llm_env_prompt", "")
cc.init_attributes("llm_wake_prefix", "")
cc.init_attributes("default_personality_str", "")
cc.init_attributes("openai_image_generate", {
"model": "dall-e-3",
"size": "1024x1024",
"style": "vivid",
"quality": "standard",
})
cc.init_attributes("http_proxy", "")
cc.init_attributes("https_proxy", "")
cc.init_attributes("dashboard_username", "")
cc.init_attributes("dashboard_password", "")
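Elsewhere in this file, `CmdConfig.put_by_dot_str` writes a nested key addressed by a dot-separated path. A minimal in-memory sketch of that behavior (the real method also reads and persists data/cmd_config.json):

```python
def put_by_dot_str(d, key, value):
    # walk/create intermediate dicts for every path segment but the last,
    # then assign the leaf value
    parts = key.split(".")
    for p in parts[:-1]:
        d = d.setdefault(p, {})
    d[parts[-1]] = value

conf = {}
put_by_dot_str(conf, "openai.chatGPTConfigs.model", "gpt-4o")
put_by_dot_str(conf, "openai.total_tokens_limit", 10000)
print(conf)
# {'openai': {'chatGPTConfigs': {'model': 'gpt-4o'}, 'total_tokens_limit': 10000}}
```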

147
util/config_utils.py Normal file

@@ -0,0 +1,147 @@
import json, os
from util.cmd_config import CmdConfig
from type.config import VERSION
from type.types import Context
def init_configs():
'''
Initialize the required config items.
'''
cc = CmdConfig()
cc.init_attributes("qqbot", {
"enable": False,
"appid": "",
"token": "",
})
cc.init_attributes("gocqbot", {
"enable": False,
})
cc.init_attributes("uniqueSessionMode", False)
cc.init_attributes("dump_history_interval", 10)
cc.init_attributes("limit", {
"time": 60,
"count": 30,
})
cc.init_attributes("notice", "")
cc.init_attributes("direct_message_mode", True)
cc.init_attributes("reply_prefix", "")
cc.init_attributes("baidu_aip", {
"enable": False,
"app_id": "",
"api_key": "",
"secret_key": ""
})
cc.init_attributes("openai", {
"key": [],
"api_base": "",
"chatGPTConfigs": {
"model": "gpt-4o",
"max_tokens": 6000,
"temperature": 0.9,
"top_p": 1,
"frequency_penalty": 0,
"presence_penalty": 0,
},
"total_tokens_limit": 10000,
})
cc.init_attributes("qq_forward_threshold", 200)
cc.init_attributes("qq_welcome", "")
cc.init_attributes("qq_pic_mode", True)
cc.init_attributes("gocq_host", "127.0.0.1")
cc.init_attributes("gocq_http_port", 5700)
cc.init_attributes("gocq_websocket_port", 6700)
cc.init_attributes("gocq_react_group", True)
cc.init_attributes("gocq_react_guild", True)
cc.init_attributes("gocq_react_friend", True)
cc.init_attributes("gocq_react_group_increase", True)
cc.init_attributes("other_admins", [])
cc.init_attributes("CHATGPT_BASE_URL", "")
cc.init_attributes("qqbot_secret", "")
cc.init_attributes("qqofficial_enable_group_message", False)
cc.init_attributes("admin_qq", "")
cc.init_attributes("nick_qq", ["!", "", "ai"])
cc.init_attributes("admin_qqchan", "")
cc.init_attributes("llm_env_prompt", "")
cc.init_attributes("llm_wake_prefix", "")
cc.init_attributes("default_personality_str", "")
cc.init_attributes("openai_image_generate", {
"model": "dall-e-3",
"size": "1024x1024",
"style": "vivid",
"quality": "standard",
})
cc.init_attributes("http_proxy", "")
cc.init_attributes("https_proxy", "")
cc.init_attributes("dashboard_username", "")
cc.init_attributes("dashboard_password", "")
# aiocqhttp adapter
cc.init_attributes("aiocqhttp", {
"enable": False,
"ws_reverse_host": "",
"ws_reverse_port": 0,
})
def try_migrate_config():
'''
Migrate cmd_config.json to data/cmd_config.json (if it exists).
'''
if os.path.exists("cmd_config.json"):
with open("cmd_config.json", "r", encoding="utf-8-sig") as f:
data = json.load(f)
with open("data/cmd_config.json", "w", encoding="utf-8-sig") as f:
json.dump(data, f, indent=2, ensure_ascii=False)
try:
os.remove("cmd_config.json")
except Exception as e:
pass
def inject_to_context(context: Context):
'''
Inject the configs into the Context. Returns all the configs.
'''
cc = CmdConfig()
context.version = VERSION
context.base_config = cc.get_all()
cfg = context.base_config
if 'reply_prefix' in cfg:
# compatibility with the legacy config format
if isinstance(cfg['reply_prefix'], dict):
context.reply_prefix = ""
cfg['reply_prefix'] = ""
cc.put("reply_prefix", "")
else:
context.reply_prefix = cfg['reply_prefix']
default_personality_str = cc.get("default_personality_str", "")
if default_personality_str == "":
context.default_personality = None
else:
context.default_personality = {
"name": "default",
"prompt": default_personality_str,
}
if 'uniqueSessionMode' in cfg and cfg['uniqueSessionMode']:
context.unique_session = True
else:
context.unique_session = False
nick_qq = cc.get("nick_qq", None)
if nick_qq == None:
nick_qq = ("/", )
if isinstance(nick_qq, str):
nick_qq = (nick_qq, )
if isinstance(nick_qq, list):
nick_qq = tuple(nick_qq)
context.nick = nick_qq
context.t2i_mode = cc.get("qq_pic_mode", True)
return cfg
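The tail of `inject_to_context` normalizes the `nick_qq` setting, which may be missing, a single string, or a list, into a tuple of wake words. The same steps as a standalone function:

```python
def normalize_nick(nick_qq):
    # same normalization as the tail of inject_to_context:
    # None -> default ('/',), str -> 1-tuple, list -> tuple
    if nick_qq is None:
        nick_qq = ("/",)
    if isinstance(nick_qq, str):
        nick_qq = (nick_qq,)
    if isinstance(nick_qq, list):
        nick_qq = tuple(nick_qq)
    return nick_qq

print(normalize_nick(None))         # ('/',)
print(normalize_nick("ai"))         # ('ai',)
print(normalize_nick(["!", "ai"]))  # ('!', 'ai')
```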

Some files were not shown because too many files have changed in this diff.