Compare commits

...

58 Commits

Author SHA1 Message Date
Soulter
d52eb10ddd chore: remove large font files to shrink the source code size 2024-07-07 21:37:44 +08:00
Soulter
4b6dae71fc update: update the default helloworld plugin 2024-07-07 21:00:18 +08:00
Soulter
ddad30c22e feat: support uploading plugins from a local file 2024-07-07 20:59:12 +08:00
Soulter
77067c545c feat: use a zip-archive-based update mechanism 2024-07-07 18:26:58 +08:00
Soulter
465d283cad Update README.md 2024-06-23 11:23:17 +08:00
Soulter
05071144fb fix: fix text-to-image issues 2024-06-09 08:56:52 -04:00
Soulter
a4e7904953 chore: clean codes 2024-06-03 20:40:18 -04:00
Soulter
986a8c7554 Update README.md 2024-06-03 21:18:53 +08:00
Soulter
9272843b77 Update README.md 2024-06-03 21:18:00 +08:00
Soulter
542d4bc703 typo: fix t2i typo 2024-06-03 08:47:51 -04:00
Soulter
e3640fdac9 perf: improve the output of the update, help, and other commands 2024-06-03 08:33:17 -04:00
Soulter
f64ab4b190 chore: remove some deprecated methods 2024-06-03 05:54:40 -04:00
Soulter
bd571e1577 feat: provide a new text-to-image style 2024-06-03 05:51:44 -04:00
Soulter
e4a5cbd893 prof: improve stability when loading plugins 2024-06-03 00:20:56 -04:00
Soulter
7a9fd7fd1e fix: fix the "config file not found" error 2024-06-02 23:14:48 -04:00
Soulter
d9b60108db Update README.md 2024-05-30 18:11:57 +08:00
Soulter
8455c8b4ed Update README.md 2024-05-30 18:03:59 +08:00
Soulter
5c2e7099fc Update README.md 2024-05-26 21:38:32 +08:00
Soulter
1fd1d55895 Update config.py 2024-05-26 21:31:26 +08:00
Soulter
5ce4137e75 fix: fix the model command 2024-05-26 21:15:33 +08:00
Soulter
d49179541e feat: pass ctx to plugins' init method 2024-05-26 21:10:19 +08:00
Soulter
676f258981 perf: terminate child processes after a restart 2024-05-26 21:09:23 +08:00
Soulter
fa44749240 fix: fix the Windows launcher failing to install dependencies because of relative paths 2024-05-26 18:15:25 +08:00
Soulter
6c856f9da2 fix(typo): fix a typo in the plugin registry that prevented message-platform plugins from being registered 2024-05-26 18:07:07 +08:00
Soulter
e8773cea7f fix: fix the config file not being migrated properly 2024-05-25 20:59:37 +08:00
Soulter
4d36ffcb08 fix: improve handling of plugin results 2024-05-25 18:46:38 +08:00
Soulter
c653e492c4 Merge pull request #164 from Soulter/stat-upload-perf
Improve the /models command
2024-05-25 18:35:56 +08:00
Soulter
f08de1f404 perf: add the models command to the help message 2024-05-25 18:34:08 +08:00
Soulter
1218691b61 perf: relax the model command's restrictions to allow custom model names; persist the model setting 2024-05-25 18:29:01 +08:00
Soulter
61fc27ff79 Merge pull request #163 from Soulter/stat-upload-perf
Optimize the statistics record data structure
2024-05-25 18:28:08 +08:00
Soulter
123ee24f7e fix: stat perf 2024-05-25 18:01:16 +08:00
Soulter
52c9045a28 feat: improve the statistics data structure 2024-05-25 17:47:41 +08:00
Soulter
f00f1e8933 fix: fix an image-generation error 2024-05-24 13:33:02 +08:00
Soulter
8da4433e57 chore: rename related fields 2024-05-21 08:44:05 +08:00
Soulter
7babb87934 perf: change the library loading order 2024-05-21 08:41:46 +08:00
Soulter
f67b171385 perf: migrate the database into the data directory 2024-05-19 17:10:11 +08:00
Soulter
1780d1355d perf: switch all internal pip calls to the Aliyun mirror; improve the plugin dependency update logic 2024-05-19 16:45:08 +08:00
Soulter
5a3390e4f3 fix: force update 2024-05-19 16:06:47 +08:00
Soulter
337d96b41d Merge pull request #160 from Soulter/dev_default_openai_refactor
Improve the built-in OpenAI LLM interaction, personas, and web search
2024-05-19 15:23:19 +08:00
Soulter
38a1dfea98 fix: web content scraper add proxy 2024-05-19 15:08:22 +08:00
Soulter
fbef73aeec fix: websearch encoding set to utf-8 2024-05-19 14:42:28 +08:00
Soulter
d6214c2b7c fix: web search 2024-05-19 12:55:54 +08:00
Soulter
d58c86f6fc perf: websearch improvements; project structure adjustments 2024-05-19 12:46:07 +08:00
Soulter
ea34c20198 perf: improve persona and LVM handling 2024-05-18 10:34:35 +08:00
Soulter
934ca94e62 refactor: rewrite the LLM OpenAI module 2024-05-17 22:56:44 +08:00
Soulter
1775327c2e chore: refact openai official 2024-05-17 09:07:11 +08:00
Soulter
707fcad8b4 feat: models command to list GPT models 2024-05-17 00:06:49 +08:00
Soulter
f143c5afc6 fix: fix an error in the plugin v subcommand 2024-05-16 23:11:07 +08:00
Soulter
99f94b2611 fix: fix some commands not being callable 2024-05-16 23:04:47 +08:00
Soulter
e39c1f9116 remove: remove automatic switching to multimodal models 2024-05-16 22:46:50 +08:00
Soulter
235e0b9b8f fix: gocq logging 2024-05-09 13:24:31 +08:00
Soulter
d5a9bed8a4 fix(updator): IterableList object has no attribute origin 2024-05-08 19:18:21 +08:00
Soulter
d7dc8a7612 chore: add some logging; bump the version 2024-05-08 19:12:23 +08:00
Soulter
08cd3ca40c perf: better log output;
fix: fix the dashboard returning 404 on refresh
2024-05-08 19:01:36 +08:00
Soulter
a13562dcea fix: fix the "config file missing" message when the launcher loads plugins that ship configuration 2024-05-08 16:28:30 +08:00
Soulter
d7a0c0d1d0 Update requirements.txt 2024-05-07 15:58:51 +08:00
Soulter
c0729b2d29 fix: fix plugin reload issues 2024-04-22 19:04:15 +08:00
Soulter
a80f474290 fix: fix errors when updating plugins 2024-04-22 18:36:56 +08:00
77 changed files with 2582 additions and 2152 deletions

.gitignore (vendored) — 2 changes

@@ -7,6 +7,6 @@ configs/config.yaml
 **/.DS_Store
 temp
 cmd_config.json
-addons/plugins/
 data/*
 cookies.json
+logs/

README.md — 175 changes

@@ -1,180 +1,51 @@
 <p align="center">
-<img src="https://github.com/Soulter/AstrBot/assets/37870767/b1686114-f3aa-4963-b07f-28bf83dc0a10" alt="QQChannelChatGPT" width="200" />
+<img width="806" alt="image" src="https://github.com/Soulter/AstrBot/assets/37870767/c6f057d9-46d7-4144-8116-00a962941746">
 </p>
 <div align="center">
+# AstrBot
 [![GitHub release (latest by date)](https://img.shields.io/github/v/release/Soulter/AstrBot)](https://github.com/Soulter/AstrBot/releases/latest)
+<img src="https://wakatime.com/badge/user/915e5316-99c6-4563-a483-ef186cf000c9/project/34412545-2e37-400f-bedc-42348713ac1f.svg" alt="wakatime">
 <img src="https://img.shields.io/badge/python-3.9+-blue.svg" alt="python">
 <a href="https://hub.docker.com/r/soulter/astrbot"><img alt="Docker pull" src="https://img.shields.io/docker/pulls/soulter/astrbot.svg"/></a>
 <a href="https://qm.qq.com/cgi-bin/qm/qr?k=EYGsuUTfe00_iOu9JTXS7_TEpMkXOvwv&jump_from=webapi&authKey=uUEMKCROfsseS+8IzqPjzV3y1tzy4AkykwTib2jNkOFdzezF9s9XknqnIaf3CDft">
 <img alt="Static Badge" src="https://img.shields.io/badge/QQ群-322154837-purple">
 </a>
-<img alt="Static Badge" src="https://img.shields.io/badge/频道-x42d56aki2-purple">
 <a href="https://astrbot.soulter.top/center">Deployment</a>
-<a href="https://github.com/Soulter/QQChannelChatGPT/issues">Report issues</a>
+<a href="https://github.com/Soulter/AstrBot/issues">Report issues</a>
-<a href="https://astrbot.soulter.top/center/docs/%E5%BC%80%E5%8F%91/%E6%8F%92%E4%BB%B6%E5%BC%80%E5%8F%91">Plugin development (as few as 25 lines)</a>
+<a href="https://astrbot.soulter.top/center/docs/%E5%BC%80%E5%8F%91/%E6%8F%92%E4%BB%B6%E5%BC%80%E5%8F%91">Plugin development</a>
 </div>
-## 🤔 Things you may want to know
+## 🛠️ Features
-- **How to deploy?** [Documentation](https://astrbot.soulter.top/center/docs/%E9%83%A8%E7%BD%B2/%E9%80%9A%E8%BF%87Docker%E9%83%A8%E7%BD%B2) (if deployment fails, feel free to join the QQ group for help <3)
-- **go-cqhttp fails to start with a login error** [Search for solutions here](https://github.com/Mrs4s/go-cqhttp/issues)
-- **The program crashes / the bot fails to start** [File an issue or report it in the QQ group](https://github.com/Soulter/QQChannelChatGPT/issues)
-- **How to enable ChatGPT, Claude, HuggingChat, and other language models** [See the docs](https://astrbot.soulter.top/center/docs/%E4%BD%BF%E7%94%A8/%E5%A4%A7%E8%AF%AD%E8%A8%80%E6%A8%A1%E5%9E%8B)
-## 🧩 Features:
+🌍 Supported messaging platforms:
+- QQ group chats and QQ Guilds (OneBot and the official QQ API)
-Recent features:
-1. Visual dashboard
-2. One-click Docker deployment. [Link](https://astrbot.soulter.top/center/docs/%E9%83%A8%E7%BD%B2/%E9%80%9A%E8%BF%87Docker%E9%83%A8%E7%BD%B2)
-🌍 Supported messaging platforms/APIs:
-- go-cqhttp (QQ, QQ Guilds)
-- Official QQ bot API
 - Telegram (via the [astrbot_plugin_telegram](https://github.com/Soulter/astrbot_plugin_telegram) plugin)
+- WeChat (via the [astrbot_plugin_vchat](https://github.com/z2z63/astrbot_plugin_vchat) plugin)
-🌍 Supported AI language models at a glance:
+🌍 Supported large language models at a glance:
-**Text models / image understanding**
+- OpenAI GPT and DALL·E series
+- Claude (via the [LLMs plugin](https://github.com/Soulter/llms))
+- HuggingChat (via the [LLMs plugin](https://github.com/Soulter/llms))
+- Gemini (via the [LLMs plugin](https://github.com/Soulter/llms))
-- OpenAI GPT-3 (native)
+🌍 Bot capabilities at a glance:
-- OpenAI GPT-3.5 (native)
+- LLM chat, personas, web search
-- OpenAI GPT-4 (native)
+- Visual management dashboard
-- Claude (free, via the [LLMs plugin](https://github.com/Soulter/llms))
+- Handles messages from multiple platforms at once
-- HuggingChat (free, via the [LLMs plugin](https://github.com/Soulter/llms))
+- Per-user session isolation
-- Gemini (free, via the [LLMs plugin](https://github.com/Soulter/llms))
+- Plugin support
+- Text-to-image replies (Markdown)
-**Image generation**
+## 🧩 Plugin support
-- OpenAI DALL·E API
-- NovelAI/Naifu (free, via the [AIDraw plugin](https://github.com/Soulter/aidraw))
-🌍 Bot capabilities at a glance:
+For plugin usage and the plugin list, see [AstrBot Docs – Plugins](https://astrbot.soulter.top/center/docs/%E4%BD%BF%E7%94%A8/%E6%8F%92%E4%BB%B6).
-- Visual dashboard (beta)
-- Deploy the bot to QQ and QQ Guilds at the same time
-- LLM chat
-- LLM web search **(currently OpenAI-family models only; enable with the `web on` command in the latest version)**
-- Plugins. Type `plugin` in a QQ or QQ Guild chat for details
-- Text replies rendered as images: replies are sent as image-rendered Markdown, **greatly reducing the chance of being flagged by risk control**; enable `qq_pic_mode` manually in `cmd_config.json`
-- Persona settings
-- Keyword replies
-- Hot updates: to update this project, **just** type `update latest r` in a QQ or QQ Guild chat
-- One-click Windows deployment: https://github.com/Soulter/QQChatGPTLauncher/releases/latest
-<!--
-### Basic features
-<details>
-<summary>✅ Context-aware replies</summary>
-- The program sends the recent conversation history to the API, and the model generates replies based on that context.
-- You can tweak `total_token_limit` in `configs/config.yaml` to roughly control the cache size.
-</details>
-<details>
-<summary>✅ Automatic key switching when quota runs out</summary>
-- When a key exceeds its quota, the program automatically switches to the next openai key.
-</details>
-<details>
-<summary>✅ Statistics on guilds, message counts, and more</summary>
-- A simple statistics feature is implemented.
-</details>
-<details>
-<summary>✅ Concurrent processing with fast replies</summary>
-- Uses coroutines; in theory up to 5 replies per second per sub-channel.
-</details>
-<details>
-<summary>✅ History persisted to disk, survives restarts</summary>
-- Uses the built-in sqlite database to store history locally.
-- History is dumped on a schedule; change `dump_history_interval` (in minutes) in `config.yaml`.
-</details>
-<details>
-<summary>✅ Rich command support</summary>
-- See "Commands" below.
-</details>
-<details>
-<summary>✅ Stable official API</summary>
-- Uses the official OpenAI API rather than reversed ChatGPT endpoints, which is stable and convenient.
-- The QQ Guild bot framework is QQ's official open-source framework, which is stable.
-</details> -->
-<!-- About tokens: a token is roughly a word unit for the AI, but not exactly equal to the word count. The `text-davinci-003` model supports up to `4097` tokens. When sending a message, the bot packs the user's chat history into the request, so `token` usage accumulates; the token cache keeps the conversation context coherent. -->
-### 🛠️ Plugin support
-This project supports plugins.
-> Install one with `plugin i <plugin GitHub URL>`.
-Some plugins:
-- `LLMS`: https://github.com/Soulter/llms | Claude and HuggingChat LLM integration.
-- `GoodPlugins`: https://github.com/Soulter/goodplugins | random anime images, anime search, meme generators, and more.
-- `sysstat`: https://github.com/Soulter/sysstatqcbot | view system status.
-- `BiliMonitor`: https://github.com/Soulter/BiliMonitor | subscribe to Bilibili feeds.
-- `liferestart`: https://github.com/Soulter/liferestart | "life restart" simulator.
+## ✨ Demo
 <img width="900" alt="image" src="https://github.com/Soulter/AstrBot/assets/37870767/824d1ff3-7b85-481c-b795-8e62dedb9fd7">
-<!--
-### Commands
-#### Official OpenAI API
-In a guild you need to `@` the bot before typing a command; in QQ, prefix the message with `ai ` instead (no @ needed).
-- `/reset` reset the prompt
-- `/his` view history (each user has an independent session)
-- `/his [page]` view a given page of history, e.g. `/his 2` for page 2
-- `/token` view the current total of cached tokens
-- `/count` view statistics
-- `/status` view the chatGPT configuration
-- `/help` view help
-- `/key` add a key dynamically
-- `/set` persona settings panel
-- `/keyword nihao 你好` set a keyword reply (nihao -> 你好)
-- `/画` draw an image
-#### Reversed ChatGPT library models
-- `/gpt` switch to the official OpenAI API
-* Model-switch commands also support one-off replies, e.g. `/a 你好` uses the bing model once. -->
-<!--
-## 🙇 Acknowledgements
-This project uses the following projects:
-[ChatGPT by acheong08](https://github.com/acheong08/ChatGPT)
-[EdgeGPT by acheong08](https://github.com/acheong08/EdgeGPT)
-[go-cqhttp by Mrs4s](https://github.com/Mrs4s/go-cqhttp)
-[nakuru-project by Lxns-Network](https://github.com/Lxns-Network/nakuru-project) -->


@@ -1 +1 @@
(Minified BaseBreadcrumb bundle; diff omitted — only hashed chunk references and shifted minified import names changed, e.g. ./index-dc96e1be.js → ./index-5ac7c267.js.)


@@ -1 +1 @@
(Minified BlankLayout bundle; diff omitted — only the hashed chunk reference ./index-dc96e1be.js → ./index-5ac7c267.js changed.)


@@ -1 +1 @@
(Minified ColorPage bundle; diff omitted — hashed chunk references updated: ./index-dc96e1be.js → ./index-5ac7c267.js, plus the BaseBreadcrumb and UiParentCard chunk hashes.)


@@ -1 +1 @@
(Minified ConfigDetailCard bundle; diff omitted — hashed chunk references updated (./index-dc96e1be.js → ./index-5ac7c267.js, UiParentCard chunk) along with minor minified identifier renames.)


@@ -1 +1 @@
(Minified ConfigPage bundle; diff omitted — hashed chunk references updated: ./index-dc96e1be.js → ./index-5ac7c267.js, plus the UiParentCard and ConfigDetailCard chunk hashes.)


@@ -1 +0,0 @@
(Minified error-page bundle deleted; it referenced ./index-dc96e1be.js and the old hashed error-image assets.)


@@ -0,0 +1 @@
(Minified error-page bundle added; it references ./index-5ac7c267.js and new hashed error-image assets.)

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@@ -1,4 +1,4 @@
(Minified bundle containing vee-validate v4.11.3 ((c) 2023 Abdelrahman Awad, MIT); diff omitted — hashed chunk references updated: ./index-dc96e1be.js → ./index-5ac7c267.js, ./md5-45627dcb.js → ./md5-086248bf.js, plus the LogoDark chunk hash.)


@@ -1 +1 @@
(Minified LogoDark bundle; diff omitted — only the hashed chunk reference ./index-dc96e1be.js → ./index-5ac7c267.js and shifted minified import names changed.)


@@ -1 +1 @@
-import{_ as o}from"./BaseBreadcrumb.vue_vue_type_style_index_0_lang-e31f96f8.js";import{_ as i}from"./UiParentCard.vue_vue_type_script_setup_true_lang-f2b2db58.js";import{x as n,D as a,o as c,s as m,a as e,w as t,f as d,b as f,V as _,F as u}from"./index-dc96e1be.js";const p=["innerHTML"],v=n({__name:"MaterialIcons",setup(b){const s=a({title:"Material Icons"}),r=a('<iframe src="https://materialdesignicons.com/" frameborder="0" width="100%" height="1000"></iframe>'),l=a([{title:"Icons",disabled:!1,href:"#"},{title:"Material Icons",disabled:!0,href:"#"}]);return(h,M)=>(c(),m(u,null,[e(o,{title:s.value.title,breadcrumbs:l.value},null,8,["title","breadcrumbs"]),e(_,null,{default:t(()=>[e(d,{cols:"12",md:"12"},{default:t(()=>[e(i,{title:"Material Icons"},{default:t(()=>[f("div",{innerHTML:r.value},null,8,p)]),_:1})]),_:1})]),_:1})],64))}});export{v as default};
+import{_ as o}from"./BaseBreadcrumb.vue_vue_type_style_index_0_lang-1875d383.js";import{_ as i}from"./UiParentCard.vue_vue_type_script_setup_true_lang-b40a2daa.js";import{x as n,D as a,o as c,s as m,a as e,w as t,f as d,b as f,V as _,F as u}from"./index-5ac7c267.js";const p=["innerHTML"],v=n({__name:"MaterialIcons",setup(b){const s=a({title:"Material Icons"}),r=a('<iframe src="https://materialdesignicons.com/" frameborder="0" width="100%" height="1000"></iframe>'),l=a([{title:"Icons",disabled:!1,href:"#"},{title:"Material Icons",disabled:!0,href:"#"}]);return(h,M)=>(c(),m(u,null,[e(o,{title:s.value.title,breadcrumbs:l.value},null,8,["title","breadcrumbs"]),e(_,null,{default:t(()=>[e(d,{cols:"12",md:"12"},{default:t(()=>[e(i,{title:"Material Icons"},{default:t(()=>[f("div",{innerHTML:r.value},null,8,p)]),_:1})]),_:1})]),_:1})],64))}});export{v as default};


@@ -1 +0,0 @@
-import{_ as B}from"./LogoDark.vue_vue_type_script_setup_true_lang-7df35c25.js";import{x as y,D as o,o as b,s as U,a as e,w as a,b as n,B as $,d as u,f as d,A as _,e as f,V as r,O as m,ap as A,au as E,F,c as T,N as q,J as V,L as P}from"./index-dc96e1be.js";const z="/assets/social-google-a359a253.svg",N=["src"],S=n("span",{class:"ml-2"},"Sign up with Google",-1),D=n("h5",{class:"text-h5 text-center my-4 mb-8"},"Sign up with Email address",-1),G={class:"d-sm-inline-flex align-center mt-2 mb-7 mb-sm-0 font-weight-bold"},L=n("a",{href:"#",class:"ml-1 text-lightText"},"Terms and Condition",-1),O={class:"mt-5 text-right"},j=y({__name:"AuthRegister",setup(w){const c=o(!1),i=o(!1),p=o(""),v=o(""),g=o(),h=o(""),x=o(""),k=o([s=>!!s||"Password is required",s=>s&&s.length<=10||"Password must be less than 10 characters"]),C=o([s=>!!s||"E-mail is required",s=>/.+@.+\..+/.test(s)||"E-mail must be valid"]);function R(){g.value.validate()}return(s,l)=>(b(),U(F,null,[e(u,{block:"",color:"primary",variant:"outlined",class:"text-lightText googleBtn"},{default:a(()=>[n("img",{src:$(z),alt:"google"},null,8,N),S]),_:1}),e(r,null,{default:a(()=>[e(d,{class:"d-flex align-center"},{default:a(()=>[e(_,{class:"custom-devider"}),e(u,{variant:"outlined",class:"orbtn",rounded:"md",size:"small"},{default:a(()=>[f("OR")]),_:1}),e(_,{class:"custom-devider"})]),_:1})]),_:1}),D,e(E,{ref_key:"Regform",ref:g,"lazy-validation":"",action:"/dashboards/analytical",class:"mt-7 loginForm"},{default:a(()=>[e(r,null,{default:a(()=>[e(d,{cols:"12",sm:"6"},{default:a(()=>[e(m,{modelValue:h.value,"onUpdate:modelValue":l[0]||(l[0]=t=>h.value=t),density:"comfortable","hide-details":"auto",variant:"outlined",color:"primary",label:"Firstname"},null,8,["modelValue"])]),_:1}),e(d,{cols:"12",sm:"6"},{default:a(()=>[e(m,{modelValue:x.value,"onUpdate:modelValue":l[1]||(l[1]=t=>x.value=t),density:"comfortable","hide-details":"auto",variant:"outlined",color:"primary",label:"Lastname"},null,8,["modelValue"])]),_:1})]),_:1}),e(m,{modelValue:v.value,"onUpdate:modelValue":l[2]||(l[2]=t=>v.value=t),rules:C.value,label:"Email Address / Username",class:"mt-4 mb-4",required:"",density:"comfortable","hide-details":"auto",variant:"outlined",color:"primary"},null,8,["modelValue","rules"]),e(m,{modelValue:p.value,"onUpdate:modelValue":l[3]||(l[3]=t=>p.value=t),rules:k.value,label:"Password",required:"",density:"comfortable",variant:"outlined",color:"primary","hide-details":"auto","append-icon":i.value?"mdi-eye":"mdi-eye-off",type:i.value?"text":"password","onClick:append":l[4]||(l[4]=t=>i.value=!i.value),class:"pwdInput"},null,8,["modelValue","rules","append-icon","type"]),n("div",G,[e(A,{modelValue:c.value,"onUpdate:modelValue":l[5]||(l[5]=t=>c.value=t),rules:[t=>!!t||"You must agree to continue!"],label:"Agree with?",required:"",color:"primary",class:"ms-n2","hide-details":""},null,8,["modelValue","rules"]),L]),e(u,{color:"secondary",block:"",class:"mt-2",variant:"flat",size:"large",onClick:l[6]||(l[6]=t=>R())},{default:a(()=>[f("Sign Up")]),_:1})]),_:1},512),n("div",O,[e(_),e(u,{variant:"plain",to:"/auth/login",class:"mt-2 text-capitalize mr-n2"},{default:a(()=>[f("Already have an account?")]),_:1})])],64))}});const I={class:"pa-7 pa-sm-12"},J=n("h2",{class:"text-secondary text-h2 mt-8"},"Sign up",-1),Y=n("h4",{class:"text-disabled text-h4 mt-3"},"Enter credentials to continue",-1),M=y({__name:"RegisterPage",setup(w){return(c,i)=>(b(),T(r,{class:"h-100vh","no-gutters":""},{default:a(()=>[e(d,{cols:"12",class:"d-flex align-center bg-lightprimary"},{default:a(()=>[e(q,null,{default:a(()=>[n("div",I,[e(r,{justify:"center"},{default:a(()=>[e(d,{cols:"12",lg:"10",xl:"6",md:"7"},{default:a(()=>[e(V,{elevation:"0",class:"loginBox"},{default:a(()=>[e(V,{variant:"outlined"},{default:a(()=>[e(P,{class:"pa-9"},{default:a(()=>[e(r,null,{default:a(()=>[e(d,{cols:"12",class:"text-center"},{default:a(()=>[e(B),J,Y]),_:1})]),_:1}),e(j)]),_:1})]),_:1})]),_:1})]),_:1})]),_:1})])]),_:1})]),_:1})]),_:1}))}});export{M as default};


@@ -0,0 +1 @@
+import{_ as B}from"./LogoDark.vue_vue_type_script_setup_true_lang-d555e5be.js";import{x as y,D as o,o as b,s as U,a as e,w as a,b as n,B as $,d as u,f as d,A as _,e as f,V as r,O as m,aq as q,av as A,F as E,c as F,N as T,J as V,L as P}from"./index-5ac7c267.js";const z="/assets/social-google-9b2fa67a.svg",N=["src"],S=n("span",{class:"ml-2"},"Sign up with Google",-1),D=n("h5",{class:"text-h5 text-center my-4 mb-8"},"Sign up with Email address",-1),G={class:"d-sm-inline-flex align-center mt-2 mb-7 mb-sm-0 font-weight-bold"},L=n("a",{href:"#",class:"ml-1 text-lightText"},"Terms and Condition",-1),O={class:"mt-5 text-right"},j=y({__name:"AuthRegister",setup(w){const c=o(!1),i=o(!1),p=o(""),v=o(""),g=o(),h=o(""),x=o(""),k=o([s=>!!s||"Password is required",s=>s&&s.length<=10||"Password must be less than 10 characters"]),C=o([s=>!!s||"E-mail is required",s=>/.+@.+\..+/.test(s)||"E-mail must be valid"]);function R(){g.value.validate()}return(s,l)=>(b(),U(E,null,[e(u,{block:"",color:"primary",variant:"outlined",class:"text-lightText googleBtn"},{default:a(()=>[n("img",{src:$(z),alt:"google"},null,8,N),S]),_:1}),e(r,null,{default:a(()=>[e(d,{class:"d-flex align-center"},{default:a(()=>[e(_,{class:"custom-devider"}),e(u,{variant:"outlined",class:"orbtn",rounded:"md",size:"small"},{default:a(()=>[f("OR")]),_:1}),e(_,{class:"custom-devider"})]),_:1})]),_:1}),D,e(A,{ref_key:"Regform",ref:g,"lazy-validation":"",action:"/dashboards/analytical",class:"mt-7 loginForm"},{default:a(()=>[e(r,null,{default:a(()=>[e(d,{cols:"12",sm:"6"},{default:a(()=>[e(m,{modelValue:h.value,"onUpdate:modelValue":l[0]||(l[0]=t=>h.value=t),density:"comfortable","hide-details":"auto",variant:"outlined",color:"primary",label:"Firstname"},null,8,["modelValue"])]),_:1}),e(d,{cols:"12",sm:"6"},{default:a(()=>[e(m,{modelValue:x.value,"onUpdate:modelValue":l[1]||(l[1]=t=>x.value=t),density:"comfortable","hide-details":"auto",variant:"outlined",color:"primary",label:"Lastname"},null,8,["modelValue"])]),_:1})]),_:1}),e(m,{modelValue:v.value,"onUpdate:modelValue":l[2]||(l[2]=t=>v.value=t),rules:C.value,label:"Email Address / Username",class:"mt-4 mb-4",required:"",density:"comfortable","hide-details":"auto",variant:"outlined",color:"primary"},null,8,["modelValue","rules"]),e(m,{modelValue:p.value,"onUpdate:modelValue":l[3]||(l[3]=t=>p.value=t),rules:k.value,label:"Password",required:"",density:"comfortable",variant:"outlined",color:"primary","hide-details":"auto","append-icon":i.value?"mdi-eye":"mdi-eye-off",type:i.value?"text":"password","onClick:append":l[4]||(l[4]=t=>i.value=!i.value),class:"pwdInput"},null,8,["modelValue","rules","append-icon","type"]),n("div",G,[e(q,{modelValue:c.value,"onUpdate:modelValue":l[5]||(l[5]=t=>c.value=t),rules:[t=>!!t||"You must agree to continue!"],label:"Agree with?",required:"",color:"primary",class:"ms-n2","hide-details":""},null,8,["modelValue","rules"]),L]),e(u,{color:"secondary",block:"",class:"mt-2",variant:"flat",size:"large",onClick:l[6]||(l[6]=t=>R())},{default:a(()=>[f("Sign Up")]),_:1})]),_:1},512),n("div",O,[e(_),e(u,{variant:"plain",to:"/auth/login",class:"mt-2 text-capitalize mr-n2"},{default:a(()=>[f("Already have an account?")]),_:1})])],64))}});const I={class:"pa-7 pa-sm-12"},J=n("h2",{class:"text-secondary text-h2 mt-8"},"Sign up",-1),Y=n("h4",{class:"text-disabled text-h4 mt-3"},"Enter credentials to continue",-1),M=y({__name:"RegisterPage",setup(w){return(c,i)=>(b(),F(r,{class:"h-100vh","no-gutters":""},{default:a(()=>[e(d,{cols:"12",class:"d-flex align-center bg-lightprimary"},{default:a(()=>[e(T,null,{default:a(()=>[n("div",I,[e(r,{justify:"center"},{default:a(()=>[e(d,{cols:"12",lg:"10",xl:"6",md:"7"},{default:a(()=>[e(V,{elevation:"0",class:"loginBox"},{default:a(()=>[e(V,{variant:"outlined"},{default:a(()=>[e(P,{class:"pa-9"},{default:a(()=>[e(r,null,{default:a(()=>[e(d,{cols:"12",class:"text-center"},{default:a(()=>[e(B),J,Y]),_:1})]),_:1}),e(j)]),_:1})]),_:1})]),_:1})]),_:1})]),_:1})])]),_:1})]),_:1})]),_:1}))}});export{M as default};


@@ -1 +1 @@
-import{_ as c}from"./BaseBreadcrumb.vue_vue_type_style_index_0_lang-e31f96f8.js";import{_ as f}from"./UiParentCard.vue_vue_type_script_setup_true_lang-f2b2db58.js";import{x as m,D as s,o as l,s as r,a as e,w as a,f as i,V as o,F as d,u as _,J as p,X as b,b as h,t as g}from"./index-dc96e1be.js";const v=m({__name:"ShadowPage",setup(w){const n=s({title:"Shadow Page"}),u=s([{title:"Utilities",disabled:!1,href:"#"},{title:"Shadow",disabled:!0,href:"#"}]);return(V,x)=>(l(),r(d,null,[e(c,{title:n.value.title,breadcrumbs:u.value},null,8,["title","breadcrumbs"]),e(o,null,{default:a(()=>[e(i,{cols:"12",md:"12"},{default:a(()=>[e(f,{title:"Basic Shadow"},{default:a(()=>[e(o,{justify:"center"},{default:a(()=>[(l(),r(d,null,_(25,t=>e(i,{key:t,cols:"auto"},{default:a(()=>[e(p,{height:"100",width:"100",class:b(["mb-5",["d-flex justify-center align-center bg-primary",`elevation-${t}`]])},{default:a(()=>[h("div",null,g(t-1),1)]),_:2},1032,["class"])]),_:2},1024)),64))]),_:1})]),_:1})]),_:1})]),_:1})],64))}});export{v as default};
+import{_ as c}from"./BaseBreadcrumb.vue_vue_type_style_index_0_lang-1875d383.js";import{_ as f}from"./UiParentCard.vue_vue_type_script_setup_true_lang-b40a2daa.js";import{x as m,D as s,o as l,s as r,a as e,w as a,f as i,V as o,F as d,u as _,J as p,X as b,b as h,t as g}from"./index-5ac7c267.js";const v=m({__name:"ShadowPage",setup(w){const n=s({title:"Shadow Page"}),u=s([{title:"Utilities",disabled:!1,href:"#"},{title:"Shadow",disabled:!0,href:"#"}]);return(V,x)=>(l(),r(d,null,[e(c,{title:n.value.title,breadcrumbs:u.value},null,8,["title","breadcrumbs"]),e(o,null,{default:a(()=>[e(i,{cols:"12",md:"12"},{default:a(()=>[e(f,{title:"Basic Shadow"},{default:a(()=>[e(o,{justify:"center"},{default:a(()=>[(l(),r(d,null,_(25,t=>e(i,{key:t,cols:"auto"},{default:a(()=>[e(p,{height:"100",width:"100",class:b(["mb-5",["d-flex justify-center align-center bg-primary",`elevation-${t}`]])},{default:a(()=>[h("div",null,g(t-1),1)]),_:2},1032,["class"])]),_:2},1024)),64))]),_:1})]),_:1})]),_:1})]),_:1})],64))}});export{v as default};


@@ -1 +1 @@
-import{_ as o}from"./BaseBreadcrumb.vue_vue_type_style_index_0_lang-e31f96f8.js";import{_ as n}from"./UiParentCard.vue_vue_type_script_setup_true_lang-f2b2db58.js";import{x as c,D as a,o as i,s as m,a as e,w as t,f as d,b as f,V as _,F as u}from"./index-dc96e1be.js";const b=["innerHTML"],w=c({__name:"TablerIcons",setup(p){const s=a({title:"Tabler Icons"}),r=a('<iframe src="https://tablericons.com/" frameborder="0" width="100%" height="600"></iframe>'),l=a([{title:"Icons",disabled:!1,href:"#"},{title:"Tabler Icons",disabled:!0,href:"#"}]);return(h,T)=>(i(),m(u,null,[e(o,{title:s.value.title,breadcrumbs:l.value},null,8,["title","breadcrumbs"]),e(_,null,{default:t(()=>[e(d,{cols:"12",md:"12"},{default:t(()=>[e(n,{title:"Tabler Icons"},{default:t(()=>[f("div",{innerHTML:r.value},null,8,b)]),_:1})]),_:1})]),_:1})],64))}});export{w as default};
+import{_ as o}from"./BaseBreadcrumb.vue_vue_type_style_index_0_lang-1875d383.js";import{_ as n}from"./UiParentCard.vue_vue_type_script_setup_true_lang-b40a2daa.js";import{x as c,D as a,o as i,s as m,a as e,w as t,f as d,b as f,V as _,F as u}from"./index-5ac7c267.js";const b=["innerHTML"],w=c({__name:"TablerIcons",setup(p){const s=a({title:"Tabler Icons"}),r=a('<iframe src="https://tablericons.com/" frameborder="0" width="100%" height="600"></iframe>'),l=a([{title:"Icons",disabled:!1,href:"#"},{title:"Tabler Icons",disabled:!0,href:"#"}]);return(h,T)=>(i(),m(u,null,[e(o,{title:s.value.title,breadcrumbs:l.value},null,8,["title","breadcrumbs"]),e(_,null,{default:t(()=>[e(d,{cols:"12",md:"12"},{default:t(()=>[e(n,{title:"Tabler Icons"},{default:t(()=>[f("div",{innerHTML:r.value},null,8,b)]),_:1})]),_:1})]),_:1})],64))}});export{w as default};


@@ -1 +1 @@
-import{_ as m}from"./BaseBreadcrumb.vue_vue_type_style_index_0_lang-e31f96f8.js";import{_ as v}from"./UiParentCard.vue_vue_type_script_setup_true_lang-f2b2db58.js";import{x as f,o as i,c as g,w as e,a,a8 as y,K as b,e as w,t as d,A as C,L as V,a9 as L,J as _,D as o,s as h,f as k,b as t,F as x,u as B,X as H,V as T}from"./index-dc96e1be.js";const s=f({__name:"UiChildCard",props:{title:String},setup(r){const l=r;return(n,c)=>(i(),g(_,{variant:"outlined"},{default:e(()=>[a(y,{class:"py-3"},{default:e(()=>[a(b,{class:"text-h5"},{default:e(()=>[w(d(l.title),1)]),_:1})]),_:1}),a(C),a(V,null,{default:e(()=>[L(n.$slots,"default")]),_:3})]),_:3}))}}),D={class:"d-flex flex-column gap-1"},S={class:"text-caption pa-2 bg-lightprimary"},z=t("div",{class:"text-grey"},"Class",-1),N={class:"font-weight-medium"},$=t("div",null,[t("p",{class:"text-left"},"Left aligned on all viewport sizes."),t("p",{class:"text-center"},"Center aligned on all viewport sizes."),t("p",{class:"text-right"},"Right aligned on all viewport sizes."),t("p",{class:"text-sm-left"},"Left aligned on viewports SM (small) or wider."),t("p",{class:"text-right text-md-left"},"Left aligned on viewports MD (medium) or wider."),t("p",{class:"text-right text-lg-left"},"Left aligned on viewports LG (large) or wider."),t("p",{class:"text-right text-xl-left"},"Left aligned on viewports XL (extra-large) or wider.")],-1),M=t("div",{class:"d-flex justify-space-between flex-row"},[t("a",{href:"#",class:"text-decoration-none"},"Non-underlined link"),t("div",{class:"text-decoration-line-through"},"Line-through text"),t("div",{class:"text-decoration-overline"},"Overline text"),t("div",{class:"text-decoration-underline"},"Underline text")],-1),O=t("div",null,[t("p",{class:"text-high-emphasis"},"High-emphasis has an opacity of 87% in light theme and 100% in dark."),t("p",{class:"text-medium-emphasis"},"Medium-emphasis text and hint text have opacities of 60% in light theme and 70% in dark."),t("p",{class:"text-disabled"},"Disabled text has an opacity of 38% in light theme and 50% in dark.")],-1),j=f({__name:"TypographyPage",setup(r){const l=o({title:"Typography Page"}),n=o([["Heading 1","text-h1"],["Heading 2","text-h2"],["Heading 3","text-h3"],["Heading 4","text-h4"],["Heading 5","text-h5"],["Heading 6","text-h6"],["Subtitle 1","text-subtitle-1"],["Subtitle 2","text-subtitle-2"],["Body 1","text-body-1"],["Body 2","text-body-2"],["Button","text-button"],["Caption","text-caption"],["Overline","text-overline"]]),c=o([{title:"Utilities",disabled:!1,href:"#"},{title:"Typography",disabled:!0,href:"#"}]);return(U,F)=>(i(),h(x,null,[a(m,{title:l.value.title,breadcrumbs:c.value},null,8,["title","breadcrumbs"]),a(T,null,{default:e(()=>[a(k,{cols:"12",md:"12"},{default:e(()=>[a(v,{title:"Basic Typography"},{default:e(()=>[a(s,{title:"Heading"},{default:e(()=>[t("div",D,[(i(!0),h(x,null,B(n.value,([p,u])=>(i(),g(_,{variant:"outlined",key:p,class:"my-4"},{default:e(()=>[t("div",{class:H([u,"pa-2"])},d(p),3),t("div",S,[z,t("div",N,d(u),1)])]),_:2},1024))),128))])]),_:1}),a(s,{title:"Text-alignment",class:"mt-8"},{default:e(()=>[$]),_:1}),a(s,{title:"Decoration",class:"mt-8"},{default:e(()=>[M]),_:1}),a(s,{title:"Opacity",class:"mt-8"},{default:e(()=>[O]),_:1})]),_:1})]),_:1})]),_:1})],64))}});export{j as default};
+import{_ as m}from"./BaseBreadcrumb.vue_vue_type_style_index_0_lang-1875d383.js";import{_ as v}from"./UiParentCard.vue_vue_type_script_setup_true_lang-b40a2daa.js";import{x as f,o as i,c as g,w as e,a,a8 as y,K as b,e as w,t as d,A as C,L as V,a9 as L,J as _,D as o,s as h,f as k,b as t,F as x,u as B,X as H,V as T}from"./index-5ac7c267.js";const s=f({__name:"UiChildCard",props:{title:String},setup(r){const l=r;return(n,c)=>(i(),g(_,{variant:"outlined"},{default:e(()=>[a(y,{class:"py-3"},{default:e(()=>[a(b,{class:"text-h5"},{default:e(()=>[w(d(l.title),1)]),_:1})]),_:1}),a(C),a(V,null,{default:e(()=>[L(n.$slots,"default")]),_:3})]),_:3}))}}),D={class:"d-flex flex-column gap-1"},S={class:"text-caption pa-2 bg-lightprimary"},z=t("div",{class:"text-grey"},"Class",-1),N={class:"font-weight-medium"},$=t("div",null,[t("p",{class:"text-left"},"Left aligned on all viewport sizes."),t("p",{class:"text-center"},"Center aligned on all viewport sizes."),t("p",{class:"text-right"},"Right aligned on all viewport sizes."),t("p",{class:"text-sm-left"},"Left aligned on viewports SM (small) or wider."),t("p",{class:"text-right text-md-left"},"Left aligned on viewports MD (medium) or wider."),t("p",{class:"text-right text-lg-left"},"Left aligned on viewports LG (large) or wider."),t("p",{class:"text-right text-xl-left"},"Left aligned on viewports XL (extra-large) or wider.")],-1),M=t("div",{class:"d-flex justify-space-between flex-row"},[t("a",{href:"#",class:"text-decoration-none"},"Non-underlined link"),t("div",{class:"text-decoration-line-through"},"Line-through text"),t("div",{class:"text-decoration-overline"},"Overline text"),t("div",{class:"text-decoration-underline"},"Underline text")],-1),O=t("div",null,[t("p",{class:"text-high-emphasis"},"High-emphasis has an opacity of 87% in light theme and 100% in dark."),t("p",{class:"text-medium-emphasis"},"Medium-emphasis text and hint text have opacities of 60% in light theme and 70% in dark."),t("p",{class:"text-disabled"},"Disabled text has an opacity of 38% in light theme and 50% in dark.")],-1),j=f({__name:"TypographyPage",setup(r){const l=o({title:"Typography Page"}),n=o([["Heading 1","text-h1"],["Heading 2","text-h2"],["Heading 3","text-h3"],["Heading 4","text-h4"],["Heading 5","text-h5"],["Heading 6","text-h6"],["Subtitle 1","text-subtitle-1"],["Subtitle 2","text-subtitle-2"],["Body 1","text-body-1"],["Body 2","text-body-2"],["Button","text-button"],["Caption","text-caption"],["Overline","text-overline"]]),c=o([{title:"Utilities",disabled:!1,href:"#"},{title:"Typography",disabled:!0,href:"#"}]);return(U,F)=>(i(),h(x,null,[a(m,{title:l.value.title,breadcrumbs:c.value},null,8,["title","breadcrumbs"]),a(T,null,{default:e(()=>[a(k,{cols:"12",md:"12"},{default:e(()=>[a(v,{title:"Basic Typography"},{default:e(()=>[a(s,{title:"Heading"},{default:e(()=>[t("div",D,[(i(!0),h(x,null,B(n.value,([p,u])=>(i(),g(_,{variant:"outlined",key:p,class:"my-4"},{default:e(()=>[t("div",{class:H([u,"pa-2"])},d(p),3),t("div",S,[z,t("div",N,d(u),1)])]),_:2},1024))),128))])]),_:1}),a(s,{title:"Text-alignment",class:"mt-8"},{default:e(()=>[$]),_:1}),a(s,{title:"Decoration",class:"mt-8"},{default:e(()=>[M]),_:1}),a(s,{title:"Opacity",class:"mt-8"},{default:e(()=>[O]),_:1})]),_:1})]),_:1})]),_:1})],64))}});export{j as default};


@@ -1 +1 @@
-import{x as n,o,c as i,w as e,a,a8 as d,b as c,K as u,e as p,t as _,a9 as s,A as f,L as V,J as m}from"./index-dc96e1be.js";const C={class:"d-sm-flex align-center justify-space-between"},h=n({__name:"UiParentCard",props:{title:String},setup(l){const r=l;return(t,x)=>(o(),i(m,{variant:"outlined",elevation:"0",class:"withbg"},{default:e(()=>[a(d,null,{default:e(()=>[c("div",C,[a(u,null,{default:e(()=>[p(_(r.title),1)]),_:1}),s(t.$slots,"action")])]),_:3}),a(f),a(V,null,{default:e(()=>[s(t.$slots,"default")]),_:3})]),_:3}))}});export{h as _};
+import{x as n,o,c as i,w as e,a,a8 as d,b as c,K as u,e as p,t as _,a9 as s,A as f,L as V,J as m}from"./index-5ac7c267.js";const C={class:"d-sm-flex align-center justify-space-between"},h=n({__name:"UiParentCard",props:{title:String},setup(l){const r=l;return(t,x)=>(o(),i(m,{variant:"outlined",elevation:"0",class:"withbg"},{default:e(()=>[a(d,null,{default:e(()=>[c("div",C,[a(u,null,{default:e(()=>[p(_(r.title),1)]),_:1}),s(t.$slots,"action")])]),_:3}),a(f),a(V,null,{default:e(()=>[s(t.$slots,"default")]),_:3})]),_:3}))}});export{h as _};


Binary image file changed (Before: 3.9 KiB, After: 3.9 KiB)


Binary image file changed (Before: 5.5 KiB, After: 5.5 KiB)


Binary image file changed (Before: 3.3 KiB, After: 3.3 KiB)


Binary image file changed (Before: 2.9 KiB, After: 2.9 KiB)

File diff suppressed because one or more lines are too long


@@ -1,4 +1,4 @@
import{ar as K,as as Y,at as V}from"./index-dc96e1be.js";var C={exports:{}};const $={},k=Object.freeze(Object.defineProperty({__proto__:null,default:$},Symbol.toStringTag,{value:"Module"})),z=K(k);/** import{as as K,at as Y,au as V}from"./index-5ac7c267.js";var C={exports:{}};const $={},k=Object.freeze(Object.defineProperty({__proto__:null,default:$},Symbol.toStringTag,{value:"Module"})),z=K(k);/**
* [js-md5]{@link https://github.com/emn178/js-md5} * [js-md5]{@link https://github.com/emn178/js-md5}
* *
* @namespace md5 * @namespace md5


Binary image file changed (Before: 1.2 KiB, After: 1.2 KiB)


@@ -11,7 +11,7 @@
       href="https://fonts.googleapis.com/css2?family=Inter:wght@400;500;600;700&family=Poppins:wght@400;500;600;700&family=Roboto:wght@400;500;700&display=swap"
     />
     <title>AstrBot - 仪表盘</title>
-    <script type="module" crossorigin src="/assets/index-dc96e1be.js"></script>
+    <script type="module" crossorigin src="/assets/index-5ac7c267.js"></script>
     <link rel="stylesheet" href="/assets/index-0f1523f3.css">
   </head>
   <body>


@@ -11,6 +11,10 @@ import threading
 import time
 import asyncio
 from util.plugin_dev.api.v1.config import update_config
+from SparkleLogging.utils.core import LogManager
+from logging import Logger
+
+logger: Logger = LogManager.GetLogger(log_name='astrbot-core')

 @dataclass
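The hunk above swaps the per-instance `self.logger` attribute for a module-level logger obtained from SparkleLogging's `LogManager.GetLogger`. SparkleLogging's internals are not shown in this diff, so the sketch below reproduces the same module-level-singleton pattern with the standard library `logging` module as a stand-in (the `get_logger` helper and the format string are illustrative, not the project's actual code):

```python
import logging

def get_logger(name: str = "astrbot-core") -> logging.Logger:
    # Module-level logger shared by all dashboard components, replacing
    # the old per-instance self.logger attribute. The handler guard makes
    # repeated calls idempotent, mirroring what a GetLogger-style
    # factory would do.
    logger = logging.getLogger(name)
    if not logger.handlers:
        handler = logging.StreamHandler()
        handler.setFormatter(logging.Formatter(
            "[%(asctime)s] [%(name)s] [%(levelname)s] %(message)s"))
        logger.addHandler(handler)
        logger.setLevel(logging.INFO)
    return logger

logger = get_logger()
logger.info("dashboard helper initialized")
```

Because `logging.getLogger(name)` returns the same object for the same name, every module that calls the factory shares one configured logger, which is what lets the diff delete `self.logger` from each class.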
@@ -28,7 +32,6 @@ class DashBoardHelper():
     def __init__(self, global_object, config: dict):
         self.loop = asyncio.new_event_loop()
         asyncio.set_event_loop(self.loop)
-        self.logger = global_object.logger
         dashboard_data = global_object.dashboard_data
         dashboard_data.configs = {
             "data": []
@@ -42,7 +45,6 @@ class DashBoardHelper():
         @self.dashboard.register("post_configs")
         def on_post_configs(post_configs: dict):
             try:
-                # self.logger.log(f"收到配置更新请求", gu.LEVEL_INFO, tag="可视化面板")
                 if 'base_config' in post_configs:
                     self.save_config(
                         post_configs['base_config'], namespace='')  # 基础配置
@@ -54,7 +56,6 @@ class DashBoardHelper():
                 threading.Thread(target=self.dashboard.shutdown_bot,
                                  args=(2,), daemon=True).start()
             except Exception as e:
-                # self.logger.log(f"在保存配置时发生错误:{e}", gu.LEVEL_ERROR, tag="可视化面板")
                 raise e

     # 将 config.yaml、 中的配置解析到 dashboard_data.configs 中
@@ -118,14 +119,14 @@ class DashBoardHelper():
         )
         qq_gocq_platform_group = DashBoardConfig(
             config_type="group",
-            name="OneBot协议平台配置",
+            name="go-cqhttp",
             description="",
             body=[
                 DashBoardConfig(
                     config_type="item",
                     val_type="bool",
                     name="启用",
-                    description="支持cq-http、shamrock等目前仅支持QQ平台",
+                    description="",
                     value=config['gocqbot']['enable'],
                     path="gocqbot.enable",
                 ),
@@ -470,7 +471,7 @@ class DashBoardHelper():
             ]
         except Exception as e:
-            self.logger.log(f"配置文件解析错误:{e}", gu.LEVEL_ERROR)
+            logger.error(f"配置文件解析错误:{e}")
             raise e

     def save_config(self, post_config: list, namespace: str):


@@ -1,22 +1,26 @@
-from flask import Flask, request
-from flask.logging import default_handler
-from werkzeug.serving import make_server
-from util import general_utils as gu
-from dataclasses import dataclass
-import logging
-from cores.database.conn import dbConn
-from util.cmd_config import CmdConfig
-from util.updator import check_update, update_project, request_release_info
-from cores.astrbot.types import *
 import util.plugin_util as putil
 import websockets
 import json
 import threading
 import asyncio
 import os
-import sys
+import uuid
 import time
+import traceback
+from flask import Flask, request
+from flask.logging import default_handler
+from werkzeug.serving import make_server
+from util import general_utils as gu
+from dataclasses import dataclass
+from persist.session import dbConn
+from type.register import RegisteredPlugin
+from typing import List
+from util.cmd_config import CmdConfig
+from util.updator import check_update, update_project, request_release_info, _reboot
+from SparkleLogging.utils.core import LogManager
+from logging import Logger
+
+logger: Logger = LogManager.GetLogger(log_name='astrbot-core')

 @dataclass
 class DashBoardData():
@@ -41,11 +45,8 @@ class AstrBotDashBoard():
         self.dashboard_data: DashBoardData = global_object.dashboard_data
         self.dashboard_be = Flask(
             __name__, static_folder="dist", static_url_path="/")
-        log = logging.getLogger('werkzeug')
-        log.setLevel(logging.ERROR)
         self.funcs = {}
         self.cc = CmdConfig()
-        self.logger = global_object.logger
         self.ws_clients = {}  # remote_ip: ws
         # 启动 websocket 服务器
         self.ws_server = websockets.serve(self.__handle_msg, "0.0.0.0", 6186)
@@ -55,6 +56,22 @@ class AstrBotDashBoard():
             # 返回页面
             return self.dashboard_be.send_static_file("index.html")

+        @self.dashboard_be.get("/config")
+        def rt_config():
+            return self.dashboard_be.send_static_file("index.html")
+
+        @self.dashboard_be.get("/logs")
+        def rt_logs():
+            return self.dashboard_be.send_static_file("index.html")
+
+        @self.dashboard_be.get("/extension")
+        def rt_extension():
+            return self.dashboard_be.send_static_file("index.html")
+
+        @self.dashboard_be.get("/dashboard/default")
+        def rt_dashboard():
+            return self.dashboard_be.send_static_file("index.html")
+
         @self.dashboard_be.post("/api/authenticate")
         def authenticate():
             username = self.cc.get("dashboard_username", "")
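The four routes added in the hunk above all return `index.html` so that a hard browser refresh on a client-side path such as `/logs` does not 404; the Vue router then resolves the path in the browser. The same fallback can be sketched framework-agnostically as a minimal WSGI app (the route set and placeholder body are illustrative):

```python
# Known front-end routes: each one serves the single-page app's
# index.html and lets the client-side router take over from there.
SPA_ROUTES = {"/", "/config", "/logs", "/extension", "/dashboard/default"}

def spa_app(environ, start_response):
    # Minimal WSGI sketch of the SPA fallback the new Flask routes implement.
    path = environ.get("PATH_INFO", "/")
    if path in SPA_ROUTES:
        start_response("200 OK", [("Content-Type", "text/html")])
        return [b"<!-- contents of dist/index.html -->"]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"not found"]
```

A single catch-all rule (e.g. Flask's `@app.route('/<path:p>')`) would avoid enumerating each route by hand, at the cost of also answering 200 for genuinely unknown paths.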
@@ -179,15 +196,40 @@ class AstrBotDashBoard():
             post_data = request.json
             repo_url = post_data["url"]
             try:
-                self.logger.log(f"正在安装插件 {repo_url}", tag="可视化面板")
-                putil.install_plugin(repo_url, self.dashboard_data.plugins)
-                self.logger.log(f"安装插件 {repo_url} 成功", tag="可视化面板")
+                logger.info(f"正在安装插件 {repo_url}")
+                putil.install_plugin(repo_url, global_object)
+                logger.info(f"安装插件 {repo_url} 成功")
                 return Response(
                     status="success",
                     message="安装成功~",
                     data=None
                 ).__dict__
             except Exception as e:
+                logger.error(f"/api/extensions/install: {traceback.format_exc()}")
+                return Response(
+                    status="error",
+                    message=e.__str__(),
+                    data=None
+                ).__dict__
+
+        @self.dashboard_be.post("/api/extensions/upload-install")
+        def upload_install_plugin():
+            try:
+                file = request.files['file']
+                print(file.filename)
+                logger.info(f"正在安装用户上传的插件 {file.filename}")
+                # save file to temp/
+                file_path = f"temp/{uuid.uuid4()}.zip"
+                file.save(file_path)
+                putil.install_plugin_from_file(file_path, global_object)
+                logger.info(f"安装插件 {file.filename} 成功")
+                return Response(
+                    status="success",
+                    message="安装成功~",
+                    data=None
+                ).__dict__
+            except Exception as e:
+                logger.error(f"/api/extensions/upload-install: {traceback.format_exc()}")
                 return Response(
                     status="error",
                     message=e.__str__(),
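The new upload-install endpoint above saves the uploaded archive under a random `temp/` name before handing it to `putil.install_plugin_from_file`. A minimal sketch of that save step, with an added zip-validity check (the `save_uploaded_zip` helper and the validation are illustrative additions, not the project's actual code):

```python
import uuid
import zipfile
from pathlib import Path

def save_uploaded_zip(data: bytes, temp_dir: str = "temp") -> Path:
    """Persist an uploaded plugin archive under a collision-free name.

    Hypothetical helper mirroring the endpoint's flow: uuid4 keeps
    concurrent uploads from clobbering each other, and the zipfile
    check rejects non-archives before installation is attempted.
    """
    target = Path(temp_dir)
    target.mkdir(parents=True, exist_ok=True)
    file_path = target / f"{uuid.uuid4()}.zip"
    file_path.write_bytes(data)
    if not zipfile.is_zipfile(file_path):
        file_path.unlink()  # do not leave junk behind on bad input
        raise ValueError("uploaded file is not a valid zip archive")
    return file_path
```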
@@ -199,16 +241,17 @@ class AstrBotDashBoard():
             post_data = request.json
             plugin_name = post_data["name"]
             try:
-                self.logger.log(f"正在卸载插件 {plugin_name}", tag="可视化面板")
-                putil.uninstall_plugin(
-                    plugin_name, self.dashboard_data.plugins)
-                self.logger.log(f"卸载插件 {plugin_name} 成功", tag="可视化面板")
+                logger.info(f"正在卸载插件 {plugin_name}")
+                putil.uninstall_plugin(
+                    plugin_name, global_object)
+                logger.info(f"卸载插件 {plugin_name} 成功")
                 return Response(
                     status="success",
                     message="卸载成功~",
                     data=None
                 ).__dict__
             except Exception as e:
+                logger.error(f"/api/extensions/uninstall: {traceback.format_exc()}")
                 return Response(
                     status="error",
                     message=e.__str__(),
@@ -220,15 +263,16 @@ class AstrBotDashBoard():
             post_data = request.json
             plugin_name = post_data["name"]
             try:
-                self.logger.log(f"正在更新插件 {plugin_name}", tag="可视化面板")
-                putil.update_plugin(plugin_name, self.dashboard_data.plugins)
-                self.logger.log(f"更新插件 {plugin_name} 成功", tag="可视化面板")
+                logger.info(f"正在更新插件 {plugin_name}")
+                putil.update_plugin(plugin_name, global_object)
+                logger.info(f"更新插件 {plugin_name} 成功")
                 return Response(
                     status="success",
                     message="更新成功~",
                     data=None
                 ).__dict__
             except Exception as e:
+                logger.error(f"/api/extensions/update: {traceback.format_exc()}")
                 return Response(
                     status="error",
                     message=e.__str__(),
@@ -257,6 +301,7 @@ class AstrBotDashBoard():
            }
        ).__dict__
    except Exception as e:
+        logger.error(f"/api/check_update: {traceback.format_exc()}")
        return Response(
            status="error",
            message=e.__str__(),
@@ -272,8 +317,7 @@ class AstrBotDashBoard():
    else:
        latest = False
    try:
-        update_project(request_release_info(latest),
-                       latest=latest, version=version)
+        update_project(latest=latest, version=version)
        threading.Thread(target=self.shutdown_bot, args=(3,)).start()
        return Response(
            status="success",
@@ -281,6 +325,7 @@ class AstrBotDashBoard():
            data=None
        ).__dict__
    except Exception as e:
+        logger.error(f"/api/update_project: {traceback.format_exc()}")
        return Response(
            status="error",
            message=e.__str__(),
@@ -328,8 +373,7 @@ class AstrBotDashBoard():
    def shutdown_bot(self, delay_s: int):
        time.sleep(delay_s)
-        py = sys.executable
-        os.execl(py, py, *sys.argv)
+        _reboot()
    def _get_configs(self, namespace: str):
        if namespace == "":
@@ -374,13 +418,13 @@ class AstrBotDashBoard():
            },
            {
                "title": "QQ_OFFICIAL",
-                "desc": "QQ官方API,仅支持频道",
+                "desc": "QQ官方API,支持频道、群(需获得群权限)",
                "namespace": "internal_platform_qq_official",
                "tag": ""
            },
            {
-                "title": "OneBot协议",
-                "desc": "支持cq-http、shamrock等,目前仅支持QQ平台",
+                "title": "go-cqhttp",
+                "desc": "第三方 QQ 协议实现。支持频道、群",
                "namespace": "internal_platform_qq_gocq",
                "tag": ""
            }
@@ -416,21 +460,29 @@ class AstrBotDashBoard():
            return func
        return decorator
async def get_log_history(self):
try:
with open("logs/astrbot-core/astrbot-core.log", "r", encoding="utf-8") as f:
return f.readlines()[-100:]
except Exception as e:
logger.warning(f"读取日志历史失败: {e.__str__()}")
return []
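The new `get_log_history` helper above simply returns the last 100 lines of the core log file. A minimal standalone sketch of the same tail-N idea (the path and count here are illustrative, not AstrBot's API):

```python
def tail_lines(path: str, n: int = 100) -> list:
    """Return the last n lines of a text file; [] if the file is unreadable."""
    try:
        with open(path, "r", encoding="utf-8") as f:
            # readlines() loads the whole file; fine for modest log sizes.
            return f.readlines()[-n:]
    except OSError:
        # Missing or locked file: degrade to an empty history.
        return []
```

Note this reads the entire file into memory, which matches the dashboard's simple approach; a seek-from-end loop would be needed for very large logs.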
    async def __handle_msg(self, websocket, path):
        address = websocket.remote_address
-        # self.logger.log(f"和 {address} 建立了 websocket 连接", tag="可视化面板")
        self.ws_clients[address] = websocket
-        data = ''.join(self.logger.history).replace('\n', '\r\n')
+        data = await self.get_log_history()
+        data = ''.join(data).replace('\n', '\r\n')
        await websocket.send(data)
        while True:
            try:
                msg = await websocket.recv()
            except websockets.exceptions.ConnectionClosedError:
-                # self.logger.log(f"和 {address} 的 websocket 连接已断开", tag="可视化面板")
+                # logger.info(f"和 {address} 的 websocket 连接已断开")
                del self.ws_clients[address]
                break
            except Exception as e:
-                # self.logger.log(f"和 {path} 的 websocket 连接发生了错误: {e.__str__()}", tag="可视化面板")
+                # logger.info(f"和 {path} 的 websocket 连接发生了错误: {e.__str__()}")
                del self.ws_clients[address]
                break
@@ -441,11 +493,12 @@ class AstrBotDashBoard():
    def run(self):
        threading.Thread(target=self.run_ws_server, args=(self.loop,)).start()
-        self.logger.log("已启动 websocket 服务器", tag="可视化面板")
+        logger.info("已启动 websocket 服务器")
        ip_address = gu.get_local_ip_addresses()
        ip_str = f"http://{ip_address}:6185\n\thttp://localhost:6185"
-        self.logger.log(
-            f"\n==================\n您可访问:\n\n\t{ip_str}\n\n来登录可视化面板,默认账号密码为空。\n注意: 所有配置项现已全量迁移至 cmd_config.json 文件下,可登录可视化面板在线修改配置。\n==================\n", tag="可视化面板")
+        logger.info(
+            f"\n==================\n您可访问:\n\n\t{ip_str}\n\n来登录可视化面板,默认账号密码为空。\n注意: 所有配置项现已全量迁移至 cmd_config.json 文件下,可登录可视化面板在线修改配置。\n==================\n")
        http_server = make_server('0.0.0.0', 6185, self.dashboard_be, threaded=True)
        http_server.serve_forever()


@@ -0,0 +1,10 @@
# helloworld

AstrBot plugin template
A template plugin for AstrBot's plugin feature

# Support

[Documentation](https://astrbot.soulter.top/center/docs/%E5%BC%80%E5%8F%91/%E6%8F%92%E4%BB%B6%E5%BC%80%E5%8F%91/)


@@ -0,0 +1 @@
https://github.com/Soulter/helloworld


@@ -1,168 +0,0 @@
import os
import json  # needed by json.load / json.dump below
import shutil
from nakuru.entities.components import *
flag_not_support = False
try:
from util.plugin_dev.api.v1.config import *
from util.plugin_dev.api.v1.bot import (
AstrMessageEvent,
CommandResult,
)
except ImportError:
flag_not_support = True
print("导入接口失败。请升级到 AstrBot 最新版本。")
'''
Note: remember to rename the plugin class. Expected format: XXXPlugin or Main.
Tip: fork this template repo and clone it into the bot's addons/plugins/ directory, then open it with PyCharm/VS Code etc. for a better development experience (autocompletion and so on).
'''
class HelloWorldPlugin:
"""
初始化函数, 可以选择直接pass
"""
def __init__(self) -> None:
# 复制旧配置文件到 data 目录下。
if os.path.exists("keyword.json"):
shutil.move("keyword.json", "data/keyword.json")
self.keywords = {}
if os.path.exists("data/keyword.json"):
self.keywords = json.load(open("data/keyword.json", "r"))
else:
self.save_keyword()
"""
机器人程序会调用此函数。
返回规范: bool: 插件是否响应该消息 (所有的消息均会调用每一个载入的插件, 如果不响应, 则应返回 False)
Tuple: Non e或者长度为 3 的元组。如果不响应, 返回 None 如果响应, 第 1 个参数为指令是否调用成功, 第 2 个参数为返回的消息链列表, 第 3 个参数为指令名称
例子:一个名为"yuanshen"的插件;当接收到消息为“原神 可莉”, 如果不想要处理此消息则返回False, None如果想要处理但是执行失败了返回True, tuple([False, "请求失败。", "yuanshen"]) 执行成功了返回True, tuple([True, "结果文本", "yuanshen"])
"""
def run(self, ame: AstrMessageEvent):
if ame.message_str == "helloworld":
return CommandResult(
hit=True,
success=True,
message_chain=[Plain("Hello World!!")],
command_name="helloworld"
)
if ame.message_str.startswith("/keyword") or ame.message_str.startswith("keyword"):
return self.handle_keyword_command(ame)
ret = self.check_keyword(ame.message_str)
if ret:
return ret
return CommandResult(
hit=False,
success=False,
message_chain=None,
command_name=None
)
def handle_keyword_command(self, ame: AstrMessageEvent):
l = ame.message_str.split(" ")
# 获取图片
image_url = ""
for comp in ame.message_obj.message:
if isinstance(comp, Image) and image_url == "":
if comp.url is None:
image_url = comp.file
else:
image_url = comp.url
command_result = CommandResult(
hit=True,
success=False,
message_chain=None,
command_name="keyword"
)
if len(l) == 1 or (len(l) == 2 and image_url == ""):
ret = """【设置关键词回复】
示例:
1. keyword <触发词> <回复词>
keyword hi 你好
发送 hi 回复你好
* 回复词支持图片
2. keyword d <触发词>
keyword d hi
删除 hi 触发词产生的回复"""
command_result.success = True
command_result.message_chain = [Plain(ret)]
return command_result
elif len(l) == 3 and l[1] == "d":
if l[2] not in self.keywords:
command_result.message_chain = [Plain(f"关键词 {l[2]} 不存在")]
return command_result
self.keywords.pop(l[2])
self.save_keyword()
command_result.success = True
command_result.message_chain = [Plain("删除成功")]
return command_result
else:
self.keywords[l[1]] = {
"plain_text": " ".join(l[2:]),
"image_url": image_url
}
self.save_keyword()
command_result.success = True
command_result.message_chain = [Plain("设置成功")]
return command_result
def save_keyword(self):
json.dump(self.keywords, open(
"data/keyword.json", "w"), ensure_ascii=False)
def check_keyword(self, message_str: str):
for k in self.keywords:
if message_str == k:
plain_text = ""
if 'plain_text' in self.keywords[k]:
plain_text = self.keywords[k]['plain_text']
else:
plain_text = self.keywords[k]
image_url = ""
if 'image_url' in self.keywords[k]:
image_url = self.keywords[k]['image_url']
if image_url != "":
res = [Plain(plain_text), Image.fromURL(image_url)]
return CommandResult(
hit=True,
success=True,
message_chain=res,
command_name="keyword"
)
return CommandResult(
hit=True,
success=True,
message_chain=[Plain(plain_text)],
command_name="keyword"
)
"""
插件元信息。
当用户输入 plugin v 插件名称 时,会调用此函数,返回帮助信息。
返回参数要求(必填)dict{
"name": str, # 插件名称
"desc": str, # 插件简短描述
"help": str, # 插件帮助信息
"version": str, # 插件版本
"author": str, # 插件作者
"repo": str, # 插件仓库地址 [ 可选 ]
"homepage": str, # 插件主页 [ 可选 ]
}
"""
def info(self):
return {
"name": "helloworld",
"desc": "这是 AstrBot 的默认插件,支持关键词回复。",
"help": "输入 /keyword 查看关键词回复帮助。",
"version": "v1.3",
"author": "Soulter"
}
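The removed plugin's `run()` contract (hit/success/message_chain) can be exercised in isolation. The classes below are hypothetical stand-ins for AstrBot's real `Plain` component and `CommandResult`, reduced to just enough to show the semantics:

```python
class Plain:
    # Hypothetical stand-in for nakuru's Plain message component.
    def __init__(self, text: str):
        self.text = text

class CommandResult:
    # Mirrors the template's return object: hit = whether the plugin
    # responds at all, success = whether handling succeeded.
    def __init__(self, hit, success, message_chain, command_name="unknown_command"):
        self.hit = hit
        self.success = success
        self.message_chain = message_chain
        self.command_name = command_name

def run(message_str: str) -> CommandResult:
    # Same shape as HelloWorldPlugin.run: respond only to "helloworld".
    if message_str == "helloworld":
        return CommandResult(True, True, [Plain("Hello World!!")], "helloworld")
    return CommandResult(False, False, None)
```

Every loaded plugin sees every message, so returning `hit=False` quickly for unrelated input is the common path.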


@@ -0,0 +1,64 @@
import os
import shutil
from nakuru.entities.components import *
flag_not_support = False
try:
from util.plugin_dev.api.v1.config import *
from util.plugin_dev.api.v1.bot import (
AstrMessageEvent,
CommandResult,
)
except ImportError:
flag_not_support = True
print("导入接口失败。请升级到 AstrBot 最新版本。")
'''
Note: remember to rename the plugin class. Expected format: XXXPlugin or Main.
Tip: fork this template repo and clone it into the bot's addons/plugins/ directory, then open it with PyCharm/VS Code etc. for a better development experience (autocompletion and so on).
'''
class HelloWorldPlugin:
"""
初始化函数, 可以选择直接pass
"""
def __init__(self) -> None:
pass
"""
机器人程序会调用此函数。
"""
def run(self, ame: AstrMessageEvent):
if ame.message_str.startswith("helloworld"): # 如果消息文本以"helloworld"开头
return CommandResult(
hit=True, # 代表插件会响应此消息
success=True, # 插件响应类型为成功响应
message_chain=[Plain("Hello World!!")], # 消息链
command_name="helloworld" # 指令名
)
return CommandResult(
hit=False, # 插件不会响应此消息
success=False,
message_chain=None
)
"""
插件元信息。
当用户输入 plugin v 插件名称 时,会调用此函数,返回帮助信息。
返回参数要求(必填)dict{
"name": str, # 插件名称
"desc": str, # 插件简短描述
"help": str, # 插件帮助信息
"version": str, # 插件版本
"author": str, # 插件作者
"repo": str, # 插件仓库地址 [ 可选 ]
"homepage": str, # 插件主页 [ 可选 ]
}
"""
def info(self):
return {
"name": "helloworld",
"desc": "这是 AstrBot 的默认插件,支持关键词回复。",
"help": "输入 /keyword 查看关键词回复帮助。",
"version": "v1.3",
"author": "Soulter",
"repo": "https://github.com/Soulter/helloworld"
}


@@ -0,0 +1,6 @@
name: helloworld # 这是你的插件的唯一识别名。
desc: 这是 AstrBot 的默认插件,支持关键词回复。 # 插件简短描述
help: 输入 /keyword 查看关键词回复帮助。 # 插件的帮助信息
version: v1.3 # 插件版本号。格式:v1.1.1 或者 v1.1
author: Soulter # 作者
repo: https://github.com/Soulter/helloworld # 插件的仓库地址
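The new metadata.yaml is a flat file of `key: value` pairs with trailing `#` comments. A minimal stdlib sketch of reading such a file (no PyYAML; this assumes values contain no `#` of their own and no nested structure — a hypothetical helper, not AstrBot's loader):

```python
def load_flat_metadata(path: str) -> dict:
    """Parse 'key: value  # comment' lines into a dict of strings."""
    meta = {}
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            line = line.split("#", 1)[0].strip()  # drop trailing comments
            if not line or ":" not in line:
                continue
            # maxsplit=1 keeps colons inside values (e.g. https://...) intact
            key, value = line.split(":", 1)
            meta[key.strip()] = value.strip()
    return meta
```

In practice `yaml.safe_load` handles this format too; the sketch just shows how little structure the file actually needs.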


@@ -2,32 +2,35 @@ import re
import threading
import asyncio
import time
-import aiohttp
import util.unfit_words as uw
import os
import sys
-import io
import traceback
-import util.function_calling.gplugin as gplugin
+import util.agent.web_searcher as web_searcher
import util.plugin_util as putil
-from PIL import Image as PILImage
from nakuru.entities.components import Plain, At, Image
from addons.baidu_aip_judge import BaiduJudge
from model.provider.provider import Provider
from model.command.command import Command
from util import general_utils as gu
-from util.general_utils import Logger, upload, run_monitor
+from util.general_utils import upload, run_monitor
from util.cmd_config import CmdConfig as cc
from util.cmd_config import init_astrbot_config_items
-from .types import *
+from type.types import GlobalObject
+from type.register import *
+from type.message import AstrBotMessage
+from type.config import *
from addons.dashboard.helper import DashBoardHelper
from addons.dashboard.server import DashBoardData
-from cores.database.conn import dbConn
+from persist.session import dbConn
from model.platform._message_result import MessageResult
+from SparkleLogging.utils.core import LogManager
+from logging import Logger
+
+logger: Logger = LogManager.GetLogger(log_name='astrbot-core')

# 用户发言频率
user_frequency = {}
@@ -36,9 +39,6 @@ frequency_time = 60
# 计数默认值
frequency_count = 10

-# 版本
-version = '3.1.12'

# 语言模型
OPENAI_OFFICIAL = 'openai_official'
NONE_LLM = 'none_llm'
@@ -51,40 +51,26 @@ llm_wake_prefix = ""
# 百度内容审核实例
baidu_judge = None

-# CLI
-PLATFORM_CLI = 'cli'
-
-init_astrbot_config_items()

# 全局对象
_global_object: GlobalObject = None
-logger: Logger = Logger()

-# 语言模型选择
def privider_chooser(cfg):
    l = []
-    if 'openai' in cfg and len(cfg['openai']['key']) > 0 and cfg['openai']['key'][0] is not None:
+    if 'openai' in cfg and len(cfg['openai']['key']) and cfg['openai']['key'][0]:
        l.append('openai_official')
    return l

-'''
-初始化机器人
-'''
-def init(cfg):
+def init():
+    '''
+    初始化机器人
+    '''
    global llm_instance, llm_command_instance
    global baidu_judge, chosen_provider
    global frequency_count, frequency_time
    global _global_object
-    global logger

-    # 迁移旧配置
-    gu.try_migrate_config(cfg)
-    # 使用新配置
+    init_astrbot_config_items()
    cfg = cc.get_all()

    _event_loop = asyncio.new_event_loop()
@@ -92,10 +78,10 @@ def init(cfg):
    # 初始化 global_object
    _global_object = GlobalObject()
-    _global_object.version = version
+    _global_object.version = VERSION
    _global_object.base_config = cfg
    _global_object.logger = logger
-    logger.log("AstrBot v"+version, gu.LEVEL_INFO)
+    logger.info("AstrBot v" + VERSION)

    if 'reply_prefix' in cfg:
        # 适配旧版配置
@@ -105,12 +91,21 @@ def init(cfg):
        cc.put("reply_prefix", "")
    else:
        _global_object.reply_prefix = cfg['reply_prefix']

+    default_personality_str = cc.get("default_personality_str", "")
+    if default_personality_str == "":
+        _global_object.default_personality = None
+    else:
+        _global_object.default_personality = {
+            "name": "default",
+            "prompt": default_personality_str,
+        }

    # 语言模型提供商
-    logger.log("正在载入语言模型...", gu.LEVEL_INFO)
+    logger.info("正在载入语言模型...")
    prov = privider_chooser(cfg)
    if OPENAI_OFFICIAL in prov:
-        logger.log("初始化OpenAI官方", gu.LEVEL_INFO)
+        logger.info("初始化OpenAI官方")
        if cfg['openai']['key'] is not None and cfg['openai']['key'] != [None]:
            from model.provider.openai_official import ProviderOpenAIOfficial
            from model.command.openai_official import CommandOpenAIOfficial
@@ -122,6 +117,11 @@ def init(cfg):
                llm_name=OPENAI_OFFICIAL, llm_instance=llm_instance[OPENAI_OFFICIAL], origin="internal"))
            chosen_provider = OPENAI_OFFICIAL
+            instance = llm_instance[OPENAI_OFFICIAL]
+            assert isinstance(instance, ProviderOpenAIOfficial)
+            instance.DEFAULT_PERSONALITY = _global_object.default_personality
+            instance.curr_personality = instance.DEFAULT_PERSONALITY

    # 检查provider设置偏好
    p = cc.get("chosen_provider", None)
    if p is not None and p in llm_instance:
@@ -131,9 +131,9 @@ def init(cfg):
    if 'baidu_aip' in cfg and 'enable' in cfg['baidu_aip'] and cfg['baidu_aip']['enable']:
        try:
            baidu_judge = BaiduJudge(cfg['baidu_aip'])
-            logger.log("百度内容审核初始化成功", gu.LEVEL_INFO)
+            logger.info("百度内容审核初始化成功")
        except BaseException as e:
-            logger.log("百度内容审核初始化失败", gu.LEVEL_ERROR)
+            logger.info("百度内容审核初始化失败")

    threading.Thread(target=upload, args=(_global_object, ), daemon=True).start()
@@ -151,10 +151,10 @@ def init(cfg):
        else:
            _global_object.unique_session = False
    except BaseException as e:
-        logger.log("独立会话配置错误: "+str(e), gu.LEVEL_ERROR)
+        logger.info("独立会话配置错误: "+str(e))

    nick_qq = cc.get("nick_qq", None)
-    if nick_qq == None:
+    if not nick_qq:
        nick_qq = ("ai", "!", "")
    if isinstance(nick_qq, str):
        nick_qq = (nick_qq,)
@@ -166,45 +166,33 @@ def init(cfg):
    global llm_wake_prefix
    llm_wake_prefix = cc.get("llm_wake_prefix", "")

-    logger.log("正在载入插件...", gu.LEVEL_INFO)
+    logger.info("正在载入插件...")
    # 加载插件
    _command = Command(None, _global_object)
-    ok, err = putil.plugin_reload(_global_object.cached_plugins)
+    ok, err = putil.plugin_reload(_global_object)
    if ok:
-        logger.log(f"成功载入 {len(_global_object.cached_plugins)} 个插件", gu.LEVEL_INFO)
+        logger.info(f"成功载入 {len(_global_object.cached_plugins)} 个插件")
    else:
-        logger.log(err, gu.LEVEL_ERROR)
+        logger.error(err)

    if chosen_provider is None:
        llm_command_instance[NONE_LLM] = _command
        chosen_provider = NONE_LLM

-    logger.log("正在载入机器人消息平台", gu.LEVEL_INFO)
-    # logger.log("提示:需要添加管理员 ID 才能使用 update/plugin 等指令),可在可视化面板添加。(如已添加可忽略)", gu.LEVEL_WARNING)
-    platform_str = ""
+    logger.info("正在载入机器人消息平台")

    # GOCQ
    if 'gocqbot' in cfg and cfg['gocqbot']['enable']:
-        logger.log("启用 QQ_GOCQ 机器人消息平台", gu.LEVEL_INFO)
+        logger.info("启用 QQ_GOCQ 机器人消息平台")
        threading.Thread(target=run_gocq_bot, args=(cfg, _global_object), daemon=True).start()
-        platform_str += "QQ_GOCQ,"

    # QQ频道
    if 'qqbot' in cfg and cfg['qqbot']['enable'] and cfg['qqbot']['appid'] != None:
-        logger.log("启用 QQ_OFFICIAL 机器人消息平台", gu.LEVEL_INFO)
+        logger.info("启用 QQ_OFFICIAL 机器人消息平台")
        threading.Thread(target=run_qqchan_bot, args=(cfg, _global_object), daemon=True).start()
-        platform_str += "QQ_OFFICIAL,"

-    default_personality_str = cc.get("default_personality_str", "")
-    if default_personality_str == "":
-        _global_object.default_personality = None
-    else:
-        _global_object.default_personality = {
-            "name": "default",
-            "prompt": default_personality_str,
-        }

    # 初始化dashboard
    _global_object.dashboard_data = DashBoardData(
        stats={},
@@ -221,22 +209,18 @@ def init(cfg):
    threading.Thread(target=run_monitor, args=(_global_object,), daemon=True).start()

-    logger.log("如果有任何问题, 请在 https://github.com/Soulter/AstrBot 上提交 issue 或加群 322154837。", gu.LEVEL_INFO)
+    logger.info("如果有任何问题, 请在 https://github.com/Soulter/AstrBot 上提交 issue 或加群 322154837。")
-    logger.log("请给 https://github.com/Soulter/AstrBot 点个 star。", gu.LEVEL_INFO)
+    logger.info("请给 https://github.com/Soulter/AstrBot 点个 star。")
-    if platform_str == '':
-        platform_str = "(未启动任何平台,请前往面板添加)"
-    logger.log(f"🎉 项目启动完成")
+    logger.info(f"🎉 项目启动完成")

    dashboard_thread.join()

-'''
-运行 QQ_OFFICIAL 机器人
-'''
def run_qqchan_bot(cfg: dict, global_object: GlobalObject):
+    '''
+    运行 QQ_OFFICIAL 机器人
+    '''
    try:
        from model.platform.qq_official import QQOfficial
        qqchannel_bot = QQOfficial(
@@ -245,35 +229,30 @@ def run_qqchan_bot(cfg: dict, global_object: GlobalObject):
            platform_name="qqchan", platform_instance=qqchannel_bot, origin="internal"))
        qqchannel_bot.run()
    except BaseException as e:
-        logger.log("启动QQ频道机器人时出现错误, 原因如下: " + str(e),
-                   gu.LEVEL_CRITICAL, tag="QQ频道")
-        logger.log(r"如果您是初次启动请前往可视化面板填写配置。详情请看https://astrbot.soulter.top/center/。" +
-                   str(e), gu.LEVEL_CRITICAL)
+        logger.error("启动 QQ 频道机器人时出现错误, 原因如下: " + str(e))
+        logger.error(r"如果您是初次启动请前往可视化面板填写配置。详情请看https://astrbot.soulter.top/center/。")

-'''
-运行 QQ_GOCQ 机器人
-'''
def run_gocq_bot(cfg: dict, _global_object: GlobalObject):
+    '''
+    运行 QQ_GOCQ 机器人
+    '''
    from model.platform.qq_gocq import QQGOCQ
    noticed = False
    host = cc.get("gocq_host", "127.0.0.1")
    port = cc.get("gocq_websocket_port", 6700)
    http_port = cc.get("gocq_http_port", 5700)
-    logger.log(f"正在检查连接...host: {host}, ws port: {port}, http port: {http_port}", tag="QQ")
+    logger.info(f"正在检查连接...host: {host}, ws port: {port}, http port: {http_port}")
    while True:
        if not gu.port_checker(port=port, host=host) or not gu.port_checker(port=http_port, host=host):
            if not noticed:
                noticed = True
-                logger.log(f"连接到{host}:{port}(或{http_port})失败。程序会每隔 5s 自动重试。", gu.LEVEL_CRITICAL, tag="QQ")
+                logger.warning(f"连接到{host}:{port}(或{http_port})失败。程序会每隔 5s 自动重试。")
            time.sleep(5)
        else:
-            logger.log("检查完毕,未发现问题。", tag="QQ")
+            logger.info("已连接到 gocq。")
            break
    try:
        qq_gocq = QQGOCQ(cfg=cfg, message_handler=oper_msg,
@@ -285,12 +264,10 @@ def run_gocq_bot(cfg: dict, _global_object: GlobalObject):
        input("启动QQ机器人出现错误"+str(e))

-'''
-检查发言频率
-'''
def check_frequency(id) -> bool:
+    '''
+    检查发言频率
+    '''
    ts = int(time.time())
    if id in user_frequency:
        if ts-user_frequency[id]['time'] > frequency_time:
@@ -316,7 +293,6 @@ async def record_message(platform: str, session_id: str):
    db_inst.increment_stat_session(platform, session_id, 1)
    db_inst.increment_stat_message(curr_ts, 1)
    db_inst.increment_stat_platform(curr_ts, platform, 1)
-    _global_object.cnt_total += 1
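`check_frequency` above implements a fixed-window rate limit: per user, a window-start timestamp plus a counter that resets once `frequency_time` seconds have elapsed. A self-contained sketch of the same scheme (class name and default limits here are illustrative, not AstrBot's API):

```python
import time

class FixedWindowLimiter:
    """Allow at most `limit` hits per `window_s` seconds per key."""

    def __init__(self, limit: int = 10, window_s: int = 60):
        self.limit = limit
        self.window_s = window_s
        self._state = {}  # key -> {'time': window start, 'count': hits so far}

    def allow(self, key, now: float = None) -> bool:
        now = time.time() if now is None else now
        st = self._state.get(key)
        if st is None or now - st['time'] > self.window_s:
            # First hit, or the previous window expired: start a fresh one.
            self._state[key] = {'time': now, 'count': 1}
            return True
        if st['count'] < self.limit:
            st['count'] += 1
            return True
        return False  # over the limit inside the current window
```

Fixed windows are simple but allow short bursts at window boundaries; a sliding window would smooth that out at the cost of more bookkeeping.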
async def oper_msg(message: AstrBotMessage, async def oper_msg(message: AstrBotMessage,
@@ -332,11 +308,10 @@ async def oper_msg(message: AstrBotMessage,
        platform: str 所注册的平台的名称如果没有注册将抛出一个异常
    """
    global chosen_provider, _global_object
-    message_str = ''
-    session_id = session_id
-    role = role
+    message_str = message.message_str
    hit = False  # 是否命中指令
    command_result = ()  # 调用指令返回的结果
-    llm_result_str = ""

    # 获取平台实例
    reg_platform: RegisteredPlatform = None
@@ -345,44 +320,26 @@ async def oper_msg(message: AstrBotMessage,
            reg_platform = p
            break
    if not reg_platform:
-        _global_object.logger.log(f"未找到平台 {platform} 的实例。", gu.LEVEL_ERROR)
        raise Exception(f"未找到平台 {platform} 的实例。")

    # 统计数据,如频道消息量
    await record_message(platform, session_id)

-    for i in message.message:
-        if isinstance(i, Plain):
-            message_str += i.text.strip()
-    if message_str == "":
+    if not message_str:
        return MessageResult("Hi~")

    # 检查发言频率
    if not check_frequency(message.sender.user_id):
        return MessageResult(f'你的发言超过频率限制(╯▔皿▔)╯。\n管理员设置{frequency_time}秒内只能提问{frequency_count}次。')

-    # 检查是否是更换语言模型的请求
-    temp_switch = ""
-    if message_str.startswith('/gpt'):
-        target = chosen_provider
-        if message_str.startswith('/gpt'):
-            target = OPENAI_OFFICIAL
-        l = message_str.split(' ')
-        if len(l) > 1 and l[1] != "":
-            # 临时对话模式,先记录下之前的语言模型,回答完毕后再切回
-            temp_switch = chosen_provider
-            chosen_provider = target
-            message_str = l[1]
-        else:
-            chosen_provider = target
-            cc.put("chosen_provider", chosen_provider)
-            return MessageResult(f"已切换至【{chosen_provider}】")
-    llm_result_str = ""

    # check commands and plugins
+    message_str_no_wake_prefix = message_str
+    for wake_prefix in _global_object.nick:  # nick: tuple
+        if message_str.startswith(wake_prefix):
+            message_str_no_wake_prefix = message_str.removeprefix(wake_prefix)
+            break
    hit, command_result = await llm_command_instance[chosen_provider].check_command(
-        message_str,
+        message_str_no_wake_prefix,
        session_id,
        role,
        reg_platform,
@@ -401,10 +358,10 @@ async def oper_msg(message: AstrBotMessage,
        if not check:
            return MessageResult(f"你的提问得到的回复未通过【百度AI内容审核】服务, 不予回复。\n\n{msg}")

    if chosen_provider == NONE_LLM:
-        logger.log("一条消息由于 Bot 未启动任何语言模型并且未触发指令而将被忽略。", gu.LEVEL_WARNING)
+        logger.info("一条消息由于 Bot 未启动任何语言模型并且未触发指令而将被忽略。")
        return

    try:
-        if llm_wake_prefix != "" and not message_str.startswith(llm_wake_prefix):
+        if llm_wake_prefix and not message_str.startswith(llm_wake_prefix):
            return
        # check image url
        image_url = None
@@ -422,55 +379,28 @@ async def oper_msg(message: AstrBotMessage,
            message_str = message_str[3:]
            web_sch_flag = True
        else:
-            message_str += " " + cc.get("llm_env_prompt", "")
+            message_str += "\n" + cc.get("llm_env_prompt", "")

        if chosen_provider == OPENAI_OFFICIAL:
            if _global_object.web_search or web_sch_flag:
                official_fc = chosen_provider == OPENAI_OFFICIAL
-                llm_result_str = await gplugin.web_search(message_str, llm_instance[chosen_provider], session_id, official_fc)
+                llm_result_str = await web_searcher.web_search(message_str, llm_instance[chosen_provider], session_id, official_fc)
            else:
-                llm_result_str = await llm_instance[chosen_provider].text_chat(message_str, session_id, image_url, default_personality=_global_object.default_personality)
+                llm_result_str = await llm_instance[chosen_provider].text_chat(message_str, session_id, image_url)
            llm_result_str = _global_object.reply_prefix + llm_result_str
    except BaseException as e:
-        logger.log(f"调用异常:{traceback.format_exc()}", gu.LEVEL_ERROR)
-        return MessageResult(f"调用语言模型例程时出现异常。原因: {str(e)}")
+        logger.error(f"调用异常:{traceback.format_exc()}")
+        return MessageResult(f"调用异常。详细原因:{str(e)}")

-    # 切换回原来的语言模型
-    if temp_switch != "":
-        chosen_provider = temp_switch

    if hit:
        # 有指令或者插件触发
        # command_result 是一个元组:(指令调用是否成功, 指令返回的文本结果, 指令类型)
-        if command_result == None:
+        if not command_result:
            return
-        command = command_result[2]
-        if command == "update latest r":
-            def update_restart():
-                py = sys.executable
-                os.execl(py, py, *sys.argv)
-            return MessageResult(command_result[1] + "\n\n即将自动重启。", callback=update_restart)
        if not command_result[0]:
            return MessageResult(f"指令调用错误: \n{str(command_result[1])}")
-        # 画图指令
-        if isinstance(command_result[1], list) and len(command_result) == 3 and command == 'draw':
-            for i in command_result[1]:
-                # 保存到本地
-                async with aiohttp.ClientSession() as session:
-                    async with session.get(i) as resp:
-                        if resp.status == 200:
-                            image = PILImage.open(io.BytesIO(await resp.read()))
-                            return MessageResult([Image.fromFileSystem(gu.save_temp_img(image))])
-        # 其他指令
-        else:
-            try:
-                return MessageResult(command_result[1])
-            except BaseException as e:
-                return MessageResult(f"回复消息出错: {str(e)}")
-        return
+        if isinstance(command_result[1], (list, str)):
+            return MessageResult(command_result[1])

    # 敏感过滤
    # 过滤不合适的词
@@ -482,7 +412,4 @@ async def oper_msg(message: AstrBotMessage,
        if not check:
            return MessageResult(f"你的提问得到的回复【百度内容审核】未通过,不予回复。\n\n{msg}")

    # 发送信息
-    try:
-        return MessageResult(llm_result_str)
-    except BaseException as e:
-        logger.log("回复消息错误: \n"+str(e), gu.LEVEL_ERROR)
+    return MessageResult(llm_result_str)


@@ -1,181 +0,0 @@
from model.provider.provider import Provider as LLMProvider
from model.platform._platfrom import Platform
from nakuru import (
GroupMessage,
FriendMessage,
GuildMessage,
)
from nakuru.entities.components import BaseMessageComponent
from typing import Union, List, ClassVar
from types import ModuleType
from enum import Enum
from dataclasses import dataclass
class MessageType(Enum):
GROUP_MESSAGE = 'GroupMessage' # 群组形式的消息
FRIEND_MESSAGE = 'FriendMessage' # 私聊、好友等单聊消息
GUILD_MESSAGE = 'GuildMessage' # 频道消息
@dataclass
class MessageMember():
    user_id: str  # 发送者id
    nickname: str = None

class AstrBotMessage():
    '''
    AstrBot 的消息对象
    '''
    tag: str  # 消息来源标签
    type: MessageType  # 消息类型
    self_id: str  # 机器人的识别id
    session_id: str  # 会话id
    message_id: str  # 消息id
    sender: MessageMember  # 发送者
    message: List[BaseMessageComponent]  # 消息链使用 Nakuru 的消息链格式
    message_str: str  # 最直观的纯文本消息字符串
    raw_message: object
    timestamp: int  # 消息时间戳

    def __str__(self) -> str:
        return str(self.__dict__)

class PluginType(Enum):
    PLATFORM = 'platform'  # 平台类插件
    LLM = 'llm'  # 大语言模型类插件
    COMMON = 'common'  # 其他插件

@dataclass
class PluginMetadata:
    '''
    插件的元数据。
    '''
    # required
    plugin_name: str
    plugin_type: PluginType
    author: str  # 插件作者
    desc: str  # 插件简介
    version: str  # 插件版本
    # optional
    repo: str = None  # 插件仓库地址

    def __str__(self) -> str:
        return f"PluginMetadata({self.plugin_name}, {self.plugin_type}, {self.desc}, {self.version}, {self.repo})"

@dataclass
class RegisteredPlugin:
    '''
    注册在 AstrBot 中的插件。
    '''
    metadata: PluginMetadata
    plugin_instance: object
    module_path: str
    module: ModuleType
    root_dir_name: str

    def __str__(self) -> str:
        return f"RegisteredPlugin({self.metadata}, {self.module_path}, {self.root_dir_name})"

RegisteredPlugins = List[RegisteredPlugin]

@dataclass
class RegisteredPlatform:
    '''
    注册在 AstrBot 中的平台。平台应当实现 Platform 接口。
    '''
    platform_name: str
    platform_instance: Platform
    origin: str = None  # 注册来源

@dataclass
class RegisteredLLM:
    '''
    注册在 AstrBot 中的大语言模型调用。大语言模型应当实现 LLMProvider 接口。
    '''
    llm_name: str
    llm_instance: LLMProvider
    origin: str = None  # 注册来源

class GlobalObject:
    '''
    存放一些公用的数据,用于在不同模块(如 core 与 command之间传递
    '''
    version: str  # 机器人版本
    nick: str  # 用户定义的机器人的别名
    base_config: dict  # config.json 中导出的配置
    cached_plugins: List[RegisteredPlugin]  # 加载的插件
    platforms: List[RegisteredPlatform]
    llms: List[RegisteredLLM]
    web_search: bool  # 是否开启了网页搜索
    reply_prefix: str  # 回复前缀
    unique_session: bool  # 是否开启了独立会话
    cnt_total: int  # 总消息数
    default_personality: dict
    dashboard_data = None
    logger = None

    def __init__(self):
        self.nick = None  # gocq 的昵称
        self.base_config = None  # config.yaml
        self.cached_plugins = []  # 缓存的插件
        self.web_search = False  # 是否开启了网页搜索
        self.reply_prefix = None
        self.unique_session = False
        self.cnt_total = 0
        self.platforms = []
        self.llms = []
        self.default_personality = None
        self.dashboard_data = None
        self.stat = {}

class AstrMessageEvent():
    '''
    消息事件。
    '''
    context: GlobalObject  # 一些公用数据
    message_str: str  # 纯消息字符串
    message_obj: AstrBotMessage  # 消息对象
    platform: RegisteredPlatform  # 来源平台
    role: str  # 基本身份。`admin` 或 `member`
    session_id: str  # 会话 id

    def __init__(self,
                 message_str: str,
                 message_obj: AstrBotMessage,
                 platform: RegisteredPlatform,
                 role: str,
                 context: GlobalObject,
                 session_id: str = None):
        self.context = context
        self.message_str = message_str
        self.message_obj = message_obj
        self.platform = platform
        self.role = role
        self.session_id = session_id

class CommandResult():
    '''
    用于在 Command 中返回多个值。
    '''
    def __init__(self, hit: bool, success: bool, message_chain: list, command_name: str = "unknown_command") -> None:
        self.hit = hit
        self.success = success
        self.message_chain = message_chain
        self.command_name = command_name

    def _result_tuple(self):
        return (self.success, self.message_chain, self.command_name)
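To make the contract above concrete, here is a minimal, hypothetical plugin entry point built against the `CommandResult` class defined above. The `run` function and the `ping` command are invented for illustration; only `CommandResult` itself comes from the source.

```python
class CommandResult():
    '''Carries a command handler's outcome back to the dispatcher.'''
    def __init__(self, hit: bool, success: bool, message_chain: list, command_name: str = "unknown_command") -> None:
        self.hit = hit                      # whether this handler matched the message
        self.success = success              # whether handling succeeded
        self.message_chain = message_chain  # message components to send back
        self.command_name = command_name

    def _result_tuple(self):
        return (self.success, self.message_chain, self.command_name)

def run(message: str) -> CommandResult:
    # Hypothetical plugin: only reacts to the "ping" command.
    if message.strip() != "ping":
        return CommandResult(hit=False, success=False, message_chain=[])
    return CommandResult(hit=True, success=True, message_chain=["pong"], command_name="ping")

result = run("ping")
print(result.hit, result._result_tuple())  # True (True, ['pong'], 'ping')
```

The dispatcher only needs `hit` to decide whether to stop trying other plugins, which is why it is kept separate from `success`.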

main.py

@@ -1,106 +1,107 @@
 import os
 import sys
-from pip._internal import main as pipmain
 import warnings
 import traceback
 import threading
+from logging import Formatter, Logger
+from util.cmd_config import CmdConfig, try_migrate_config
 
 warnings.filterwarnings("ignore")
-abs_path = os.path.dirname(os.path.realpath(sys.argv[0])) + '/'
+logger: Logger = None
+logo_tmpl = """
+ ___ _______.___________..______ .______ ______ .___________.
+ / \ / | || _ \ | _ \ / __ \ | |
+ / ^ \ | (----`---| |----`| |_) | | |_) | | | | | `---| |----`
+ / /_\ \ \ \ | | | / | _ < | | | | | |
+ / _____ \ .----) | | | | |\ \----.| |_) | | `--' | | |
+/__/ \__\ |_______/ |__| | _| `._____||______/ \______/ |__|
+"""
+
+def make_necessary_dirs():
+    '''
+    创建必要的目录。
+    '''
+    os.makedirs("data/config", exist_ok=True)
+    os.makedirs("temp", exist_ok=True)
+
+def update_dept():
+    '''
+    更新依赖库。
+    '''
+    # 获取 Python 可执行文件路径
+    py = sys.executable
+    requirements_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), "requirements.txt")
+    print(requirements_path)
+    # 更新依赖库
+    mirror = "https://mirrors.aliyun.com/pypi/simple/"
+    os.system(f"{py} -m pip install -r {requirements_path} -i {mirror}")
 
 def main():
-    # config.yaml 配置文件加载和环境确认
     try:
-        import cores.astrbot.core as qqBot
-        import yaml
-        ymlfile = open(abs_path+"configs/config.yaml", 'r', encoding='utf-8')
-        cfg = yaml.safe_load(ymlfile)
+        import botpy, logging
+        import astrbot.core as bot_core
+        # delete qqbotpy's logger
+        for handler in logging.root.handlers[:]:
+            logging.root.removeHandler(handler)
    except ImportError as import_error:
-        traceback.print_exc()
-        print(import_error)
-        input("第三方库未完全安装完毕,请退出程序重试。")
+        logger.error(import_error)
+        logger.error("检测到一些依赖库没有安装。由于兼容性问题AstrBot 此版本将不会自动为您安装依赖库。请您先自行安装,然后重试。")
+        logger.info("如何安装?如果:")
+        logger.info("- Windows 启动器部署且使用启动器下载了 Python的在 launcher.exe 所在目录下的地址框输入 powershell然后执行 .\python\python.exe -m pip install .\AstrBot\requirements.txt")
+        logger.info("- Windows 启动器部署且使用自己之前下载的 Python的在 launcher.exe 所在目录下的地址框输入 powershell然后执行 python -m pip install .\AstrBot\requirements.txt")
+        logger.info("- 自行 clone 源码部署的python -m pip install -r requirements.txt")
+        logger.info("- 如果还不会,加群 322154837 ")
+        input("按任意键退出。")
+        exit()
     except FileNotFoundError as file_not_found:
-        print(file_not_found)
+        logger.error(file_not_found)
         input("配置文件不存在,请检查是否已经下载配置文件。")
+        exit()
     except BaseException as e:
-        raise e
-    # 设置代理
-    if 'http_proxy' in cfg and cfg['http_proxy'] != '':
-        os.environ['HTTP_PROXY'] = cfg['http_proxy']
-    if 'https_proxy' in cfg and cfg['https_proxy'] != '':
-        os.environ['HTTPS_PROXY'] = cfg['https_proxy']
-    os.environ['NO_PROXY'] = 'https://api.sgroup.qq.com'
-    # 检查并创建 temp 文件夹
-    if not os.path.exists(abs_path + "temp"):
-        os.mkdir(abs_path+"temp")
-    if not os.path.exists(abs_path + "data"):
-        os.mkdir(abs_path+"data")
-    if not os.path.exists(abs_path + "data/config"):
-        os.mkdir(abs_path+"data/config")
+        logger.error(traceback.format_exc())
+        input("未知错误。")
+        exit()
     # 启动主程序cores/qqbot/core.py
-    qqBot.init(cfg)
-
-def check_env(ch_mirror=False):
-    if not (sys.version_info.major == 3 and sys.version_info.minor >= 9):
-        print("请使用Python3.9+运行本项目")
-        input("按任意键退出...")
-        exit()
-    if os.path.exists('requirements.txt'):
-        pth = 'requirements.txt'
-    else:
-        pth = 'QQChannelChatGPT' + os.sep + 'requirements.txt'
-    print("正在检查或下载第三方库,请耐心等待...")
-    try:
-        if ch_mirror:
-            print("使用阿里云镜像")
-            pipmain(['install', '-r', pth, '-i',
-                    'https://mirrors.aliyun.com/pypi/simple/'])
-        else:
-            pipmain(['install', '-r', pth])
-    except BaseException as e:
-        print(e)
-        while True:
-            res = input(
-                "安装失败。\n如报错ValueError: check_hostname requires server_hostname请尝试先关闭代理后重试。\n1.输入y回车重试\n2. 输入c回车使用国内镜像源下载\n3. 输入其他按键回车继续往下执行。")
-            if res == "y":
-                try:
-                    pipmain(['install', '-r', pth])
-                    break
-                except BaseException as e:
-                    print(e)
-                    continue
-            elif res == "c":
-                try:
-                    pipmain(['install', '-r', pth, '-i',
-                            'https://mirrors.aliyun.com/pypi/simple/'])
-                    break
-                except BaseException as e:
-                    print(e)
-                    continue
-            else:
-                break
-    print("第三方库检查完毕。")
+    bot_core.init()
+
+def check_env():
+    if not (sys.version_info.major == 3 and sys.version_info.minor >= 9):
+        logger.error("请使用 Python3.9+ 运行本项目。按任意键退出。")
+        input("")
+        exit()
 
 if __name__ == "__main__":
-    args = sys.argv
-    if '-cn' in args:
-        check_env(True)
-    else:
-        check_env()
+    update_dept()
+    make_necessary_dirs()
+    try_migrate_config()
+    cc = CmdConfig()
+    http_proxy = cc.get("http_proxy")
+    https_proxy = cc.get("https_proxy")
+    if http_proxy:
+        os.environ['HTTP_PROXY'] = http_proxy
+    if https_proxy:
+        os.environ['HTTPS_PROXY'] = https_proxy
+    os.environ['NO_PROXY'] = 'https://api.sgroup.qq.com'
+    from SparkleLogging.utils.core import LogManager
+    logger = LogManager.GetLogger(
+        log_name='astrbot-core',
+        out_to_console=True,
+        custom_formatter=Formatter('[%(asctime)s| %(name)s - %(levelname)s|%(filename)s:%(lineno)d]: %(message)s', datefmt="%H:%M:%S")
+    )
+    logger.info(logo_tmpl)
+    logger.info(f"使用代理: {http_proxy}, {https_proxy}")
+    check_env()
     t = threading.Thread(target=main, daemon=True)
     t.start()
     try:
         t.join()
     except KeyboardInterrupt as e:
-        print("退出 AstrBot。")
+        logger.info("退出 AstrBot。")
         exit()


@@ -11,19 +11,20 @@ from nakuru.entities.components import (
     Image
 )
 from util import general_utils as gu
+from util.image_render.helper import text_to_image_base
 from model.provider.provider import Provider
 from util.cmd_config import CmdConfig as cc
-from util.general_utils import Logger
-from cores.astrbot.types import (
-    GlobalObject,
-    AstrMessageEvent,
-    PluginType,
-    CommandResult,
-    RegisteredPlugin,
-    RegisteredPlatform
-)
-from typing import List, Tuple
+from type.message import *
+from type.types import GlobalObject
+from type.command import *
+from type.plugin import *
+from type.register import *
+from typing import List
+from SparkleLogging.utils.core import LogManager
+from logging import Logger
+
+logger: Logger = LogManager.GetLogger(log_name='astrbot-core')
 
 PLATFORM_QQCHAN = 'qqchan'
 PLATFORM_GOCQ = 'gocq'
@@ -35,7 +36,6 @@ class Command:
     def __init__(self, provider: Provider, global_object: GlobalObject = None):
         self.provider = provider
         self.global_object = global_object
-        self.logger: Logger = global_object.logger
 
     async def check_command(self,
                             message,
@@ -65,6 +65,8 @@ class Command:
                     result = await plugin.plugin_instance.run(ame)
                 else:
                     result = await asyncio.to_thread(plugin.plugin_instance.run, ame)
+                if not result:
+                    continue
                 if isinstance(result, CommandResult):
                     hit = result.hit
                     res = result._result_tuple()
@@ -74,6 +76,8 @@ class Command:
                 else:
                     raise TypeError("插件返回值格式错误。")
                 if hit:
+                    plugin.trig()
+                    logger.debug("hit plugin: " + plugin.metadata.plugin_name)
                     return True, res
             except TypeError as e:
                 # 参数不匹配,尝试使用旧的参数方案
@@ -85,23 +89,25 @@ class Command:
                 if hit:
                     return True, res
             except BaseException as e:
-                self.logger.log(
-                    f"{plugin.metadata.plugin_name} 插件异常,原因: {str(e)}\n如果你没有相关装插件的想法, 请直接忽略此报错, 不影响其他功能的运行。", level=gu.LEVEL_WARNING)
+                logger.error(
+                    f"{plugin.metadata.plugin_name} 插件异常,原因: {str(e)}\n如果你没有相关装插件的想法, 请直接忽略此报错, 不影响其他功能的运行。")
         except BaseException as e:
-            self.logger.log(
-                f"{plugin.metadata.plugin_name} 插件异常,原因: {str(e)}\n如果你没有相关装插件的想法, 请直接忽略此报错, 不影响其他功能的运行。", level=gu.LEVEL_WARNING)
+            logger.error(
+                f"{plugin.metadata.plugin_name} 插件异常,原因: {str(e)}\n如果你没有相关装插件的想法, 请直接忽略此报错, 不影响其他功能的运行。")
 
         if self.command_start_with(message, "nick"):
             return True, self.set_nick(message, platform, role)
         if self.command_start_with(message, "plugin"):
-            return True, self.plugin_oper(message, role, cached_plugins, platform)
+            return True, await self.plugin_oper(message, role, self.global_object, platform)
         if self.command_start_with(message, "myid") or self.command_start_with(message, "!myid"):
             return True, self.get_my_id(message_obj, platform)
         if self.command_start_with(message, "web"):  # 网页搜索
             return True, self.web_search(message)
         if self.command_start_with(message, "update"):
             return True, self.update(message, role)
-        if not self.provider and self.command_start_with(message, "help"):
+        if message == "t2i":
+            return True, self.t2i_toggle(message, role)
+        if not self.provider and message == "help":
             return True, await self.help()
         return False, None
@@ -116,40 +122,34 @@ class Command:
         elif l[1] == 'off':
             self.global_object.web_search = False
             return True, "已关闭网页搜索", "web"
 
+    def t2i_toggle(self, message, role):
+        p = cc.get("qq_pic_mode", True)
+        if p:
+            cc.put("qq_pic_mode", False)
+            return True, "已关闭文本转图片模式。", "t2i"
+        cc.put("qq_pic_mode", True)
+        return True, "已开启文本转图片模式。", "t2i"
+
     def get_my_id(self, message_obj, platform):
         try:
-            user_id = str(message_obj.user_id)
+            user_id = str(message_obj.sender.user_id)
             return True, f"你在此平台上的ID{user_id}", "plugin"
         except BaseException as e:
             return False, f"{platform}上获取你的ID失败原因: {str(e)}", "plugin"
 
-    def get_new_conf(self, message, role):
-        if role != "admin":
-            return False, f"你的身份组{role}没有权限使用此指令。", "newconf"
-        l = message.split(" ")
-        if len(l) <= 1:
-            obj = cc.get_all()
-            p = gu.create_text_image("【cmd_config.json】", json.dumps(
-                obj, indent=4, ensure_ascii=False))
-            return True, [Image.fromFileSystem(p)], "newconf"
-
     '''
     插件指令
     '''
-    def plugin_oper(self, message: str, role: str, cached_plugins: List[RegisteredPlugin], platform: str):
+    async def plugin_oper(self, message: str, role: str, ctx: GlobalObject, platform: str):
         l = message.split(" ")
         if len(l) < 2:
-            p = gu.create_text_image(
-                "【插件指令面板】", "安装插件: \nplugin i 插件Github地址\n卸载插件: \nplugin d 插件名 \n重载插件: \nplugin reload\n查看插件列表:\nplugin l\n更新插件: plugin u 插件名\n")
-            return True, [Image.fromFileSystem(p)], "plugin"
+            p = await text_to_image_base("# 插件指令面板 \n- 安装插件: `plugin i 插件Github地址`\n- 卸载插件: `plugin d 插件名`\n- 重载插件: `plugin reload`\n- 查看插件列表:`plugin l`\n - 更新插件: `plugin u 插件名`\n")
+            with open(p, 'rb') as f:
+                return True, [Image.fromBytes(f.read())], "plugin"
         else:
             if l[1] == "i":
                 if role != "admin":
                     return False, f"你的身份组{role}没有权限安装插件", "plugin"
                 try:
-                    putil.install_plugin(l[2], cached_plugins)
+                    putil.install_plugin(l[2], ctx)
                     return True, "插件拉取并载入成功~", "plugin"
                 except BaseException as e:
                     return False, f"拉取插件失败,原因: {str(e)}", "plugin"
@@ -157,37 +157,37 @@ class Command:
                 if role != "admin":
                     return False, f"你的身份组{role}没有权限删除插件", "plugin"
                 try:
-                    putil.uninstall_plugin(l[2], cached_plugins)
+                    putil.uninstall_plugin(l[2], ctx)
                     return True, "插件卸载成功~", "plugin"
                 except BaseException as e:
                     return False, f"卸载插件失败,原因: {str(e)}", "plugin"
             elif l[1] == "u":
                 try:
-                    putil.update_plugin(l[2], cached_plugins)
+                    putil.update_plugin(l[2], ctx)
                     return True, "\n更新插件成功!!", "plugin"
                 except BaseException as e:
                     return False, f"更新插件失败,原因: {str(e)}\n建议: 使用 plugin i 指令进行覆盖安装(插件数据可能会丢失)", "plugin"
             elif l[1] == "l":
                 try:
                     plugin_list_info = ""
-                    for plugin in cached_plugins:
-                        plugin_list_info += f"{plugin.metadata.plugin_name}: \n名称: {plugin.metadata.plugin_name}\n简介: {plugin.metadata.plugin_desc}\n版本: {plugin.metadata.version}\n作者: {plugin.metadata.author}\n"
-                    p = gu.create_text_image(
-                        "【已激活插件列表】", plugin_list_info + "\n使用plugin v 插件名 查看插件帮助\n")
-                    return True, [Image.fromFileSystem(p)], "plugin"
+                    for plugin in ctx.cached_plugins:
+                        plugin_list_info += f"### {plugin.metadata.plugin_name} \n- 名称: {plugin.metadata.plugin_name}\n- 简介: {plugin.metadata.desc}\n- 版本: {plugin.metadata.version}\n- 作者: {plugin.metadata.author}\n"
+                    p = await text_to_image_base(f"# 已激活的插件\n{plugin_list_info}\n> 使用plugin v 插件名 查看插件帮助\n")
+                    with open(p, 'rb') as f:
+                        return True, [Image.fromBytes(f.read())], "plugin"
                 except BaseException as e:
                     return False, f"获取插件列表失败,原因: {str(e)}", "plugin"
             elif l[1] == "v":
                 try:
                     info = None
-                    for i in cached_plugins:
+                    for i in ctx.cached_plugins:
                         if i.metadata.plugin_name == l[2]:
                             info = i.metadata
                             break
                     if info:
-                        p = gu.create_text_image(
-                            f"【插件信息】", f"名称: {info['name']}\n{info['desc']}\n版本: {info['version']}\n作者: {info['author']}\n\n帮助:\n{info['help']}")
-                        return True, [Image.fromFileSystem(p)], "plugin"
+                        p = await text_to_image_base(f"# `{info.plugin_name}` 插件信息\n- 类型: {info.plugin_type}\n- 简介{info.desc}\n- 版本: {info.version}\n- 作者: {info.author}")
+                        with open(p, 'rb') as f:
+                            return True, [Image.fromBytes(f.read())], "plugin"
                     else:
                         return False, "未找到该插件", "plugin"
                 except BaseException as e:
@@ -197,10 +197,10 @@ class Command:
     nick: 存储机器人的昵称
     '''
-    def set_nick(self, message: str, platform: str, role: str = "member"):
+    def set_nick(self, message: str, platform: RegisteredPlatform, role: str = "member"):
         if role != "admin":
             return True, "你无权使用该指令 :P", "nick"
-        if platform == PLATFORM_GOCQ:
+        if str(platform) == PLATFORM_GOCQ:
             l = message.split(" ")
             if len(l) == 1:
                 return True, "【设置机器人昵称】示例:\n支持多昵称\nnick 昵称1 昵称2 昵称3", "nick"
@@ -208,7 +208,7 @@ class Command:
             cc.put("nick_qq", nick)
             self.global_object.nick = tuple(nick)
             return True, f"设置成功!现在你可以叫我这些昵称来提问我啦~", "nick"
-        elif platform == PLATFORM_QQCHAN:
+        elif str(platform) == PLATFORM_QQCHAN:
             nick = message.split(" ")[2]
             return False, "QQ频道平台不支持为机器人设置昵称。", "nick"
@@ -217,11 +217,10 @@ class Command:
             "help": "帮助",
             "keyword": "设置关键词/关键指令回复",
             "update": "更新项目",
-            "nick": "设置机器人昵称",
+            "nick": "设置机器人唤醒词",
             "plugin": "插件安装、卸载和重载",
             "web on/off": "LLM 网页搜索能力",
-            "reset": "重置 LLM 对话",
-            "/gpt": "切换到 OpenAI 官方接口"
+            "t2i": "启用/关闭文本转图片模式"
         }
 
     async def help_messager(self, commands: dict, platform: str, cached_plugins: List[RegisteredPlugin] = None):
@@ -231,24 +230,25 @@ class Command:
                     notice = (await resp.json())["notice"]
         except BaseException as e:
             notice = ""
-        msg = "# Help Center\n## 指令列表\n"
+        msg = "## 指令列表\n"
         for key, value in commands.items():
-            msg += f"`{key}` - {value}\n"
+            msg += f"- `{key}`: {value}\n"
         # plugins
-        if cached_plugins != None:
+        if cached_plugins:
             plugin_list_info = ""
             for plugin in cached_plugins:
-                plugin_list_info += f"`{plugin.metadata.plugin_name}` {plugin.metadata.desc}\n"
-            if plugin_list_info.strip() != "":
-                msg += "\n## 插件列表\n> 使用plugin v 插件名 查看插件帮助\n"
+                plugin_list_info += f"- `{plugin.metadata.plugin_name}`: {plugin.metadata.desc}\n"
+            if plugin_list_info.strip():
+                msg += "\n## 插件列表\n> 使用 plugin v 插件名 查看插件帮助\n"
                 msg += plugin_list_info
         msg += notice
         try:
-            p = gu.create_markdown_image(msg)
-            return [Image.fromFileSystem(p),]
+            p = await text_to_image_base(msg)
+            with open(p, 'rb') as f:
+                return [Image.fromBytes(f.read()),]
         except BaseException as e:
-            self.logger.log(str(e))
+            logger.error(str(e))
             return msg
 
     def command_start_with(self, message: str, *args):
@@ -267,15 +267,14 @@ class Command:
         if len(l) == 1:
             try:
                 update_info = util.updator.check_update()
-                update_info += "\nTips:\n输入「update latest」更新到最新版本\n输入「update <版本号如v3.1.3>」切换到指定版本\n输入「update r」重启机器人\n"
+                update_info += "\n> Tips: 输入「update latest」更新到最新版本输入「update <版本号如v3.1.3>」切换到指定版本输入「update r」重启机器人\n"
                 return True, update_info, "update"
             except BaseException as e:
                 return False, "检查更新失败: "+str(e), "update"
         else:
             if l[1] == "latest":
                 try:
-                    release_data = util.updator.request_release_info()
-                    util.updator.update_project(release_data)
+                    util.updator.update_project()
                     return True, "更新成功重启生效。可输入「update r」重启", "update"
                 except BaseException as e:
                     return False, "更新失败: "+str(e), "update"
@@ -284,10 +283,7 @@ class Command:
             else:
                 if l[1].lower().startswith('v'):
                     try:
-                        release_data = util.updator.request_release_info(
-                            latest=False)
-                        util.updator.update_project(
-                            release_data, latest=False, version=l[1])
+                        util.updator.update_project(latest=False, version=l[1])
                         return True, "更新成功重启生效。可输入「update r」重启", "update"
                     except BaseException as e:
                         return False, "更新失败: "+str(e), "update"


@@ -1,14 +1,26 @@
 from model.command.command import Command
-from model.provider.openai_official import ProviderOpenAIOfficial
+from model.provider.openai_official import ProviderOpenAIOfficial, MODELS
 from util.personality import personalities
-from cores.astrbot.types import GlobalObject
+from util.general_utils import download_image_by_url
+from type.types import GlobalObject
+from type.command import CommandItem
+from SparkleLogging.utils.core import LogManager
+from logging import Logger
+from openai._exceptions import NotFoundError
+from nakuru.entities.components import Image
+
+logger: Logger = LogManager.GetLogger(log_name='astrbot-core')
 
 class CommandOpenAIOfficial(Command):
     def __init__(self, provider: ProviderOpenAIOfficial, global_object: GlobalObject):
         self.provider = provider
         self.global_object = global_object
         self.personality_str = ""
+        self.commands = [
+            CommandItem("reset", self.reset, "重置 LLM 会话。", "内置"),
+            CommandItem("his", self.his, "查看与 LLM 的历史记录。", "内置"),
+            CommandItem("status", self.status, "查看 GPT 配置信息和用量状态。", "内置"),
+        ]
         super().__init__(provider, global_object)
 
     async def check_command(self,
@@ -28,6 +40,8 @@ class CommandOpenAIOfficial(Command):
             message_obj
         )
+        logger.debug(f"基础指令hit: {hit}, res: {res}")
+
         # 这里是这个 LLM 的专属指令
         if hit:
             return True, res
@@ -35,12 +49,8 @@ class CommandOpenAIOfficial(Command):
             return True, await self.reset(session_id, message)
         elif self.command_start_with(message, "his", "历史"):
             return True, self.his(message, session_id)
-        elif self.command_start_with(message, "token"):
-            return True, self.token(session_id)
-        elif self.command_start_with(message, "gpt"):
-            return True, self.gpt()
         elif self.command_start_with(message, "status"):
-            return True, self.status()
+            return True, self.status(session_id)
         elif self.command_start_with(message, "help", "帮助"):
             return True, await self.help()
         elif self.command_start_with(message, "unset"):
@@ -51,21 +61,64 @@ class CommandOpenAIOfficial(Command):
             return True, self.update(message, role)
         elif self.command_start_with(message, "", "draw"):
             return True, await self.draw(message)
-        elif self.command_start_with(message, "key"):
-            return True, self.key(message)
         elif self.command_start_with(message, "switch"):
             return True, await self.switch(message)
+        elif self.command_start_with(message, "models"):
+            return True, await self.print_models()
+        elif self.command_start_with(message, "model"):
+            return True, await self.set_model(message)
         return False, None
 
+    async def get_models(self):
+        try:
+            models = await self.provider.client.models.list()
+        except NotFoundError as e:
+            bu = str(self.provider.client.base_url)
+            self.provider.client.base_url = bu + "/v1"
+            models = await self.provider.client.models.list()
+        finally:
+            return filter(lambda x: x.id.startswith("gpt"), models.data)
+
+    async def print_models(self):
+        models = await self.get_models()
+        i = 1
+        ret = "OpenAI GPT 类可用模型"
+        for model in models:
+            ret += f"\n{i}. {model.id}"
+            i += 1
+        ret += "\nTips: 使用 /model 模型名/编号,即可实时更换模型。如目标模型不存在于上表,请输入模型名。"
+        logger.debug(ret)
+        return True, ret, "models"
+
+    async def set_model(self, message: str):
+        l = message.split(" ")
+        if len(l) == 1:
+            return True, "请输入 /model 模型名/编号", "model"
+        model = str(l[1])
+        if model.isdigit():
+            models = await self.get_models()
+            models = list(models)
+            if int(model) <= len(models) and int(model) >= 1:
+                model = models[int(model)-1]
+                self.provider.set_model(model.id)
+                return True, f"模型已设置为 {model.id}", "model"
+        else:
+            self.provider.set_model(model)
+            return True, f"模型已设置为 {model} (自定义)", "model"
+
     async def help(self):
         commands = super().general_commands()
-        commands[''] = '画画'
-        commands['key'] = '添加OpenAI key'
-        commands['set'] = '人格设置面板'
-        commands['gpt'] = '查看gpt配置信息'
-        commands['status'] = '查看key使用状态'
-        commands['token'] = '查看本轮会话token'
+        commands[''] = '调用 OpenAI DallE 模型生成图片'
+        commands['/set'] = '人格设置面板'
+        commands['/status'] = '查看 Api Key 状态和配置信息'
+        commands['/token'] = '查看本轮会话 token'
+        commands['/reset'] = '重置当前与 LLM 的会话但保留人格system prompt'
+        commands['/reset p'] = '重置当前与 LLM 的会话,并清除人格。'
+        commands['/models'] = '获取当前可用的模型'
+        commands['/model'] = '更换模型'
         return True, await super().help_messager(commands, self.platform, self.global_object.cached_plugins), "help"
 
     async def reset(self, session_id: str, message: str = "reset"):
@@ -73,79 +126,44 @@ class CommandOpenAIOfficial(Command):
             return False, "未启用 OpenAI 官方 API", "reset"
         l = message.split(" ")
         if len(l) == 1:
-            await self.provider.forget(session_id)
+            await self.provider.forget(session_id, keep_system_prompt=True)
             return True, "重置成功", "reset"
         if len(l) == 2 and l[1] == "p":
-            self.provider.forget(session_id)
-            if self.personality_str != "":
-                self.set(self.personality_str, session_id)  # 重新设置人格
-            return True, "重置成功", "reset"
+            await self.provider.forget(session_id)
+            return True, "重置成功", "reset"
 
     def his(self, message: str, session_id: str):
         if self.provider is None:
             return False, "未启用 OpenAI 官方 API", "his"
-        # 分页每页5条
-        msg = ''
         size_per_page = 3
         page = 1
-        if message[4:]:
-            page = int(message[4:])
-        # 检查是否有过历史记录
-        if session_id not in self.provider.session_dict:
-            msg = f"历史记录为空"
-            return True, msg, "his"
-        l = self.provider.session_dict[session_id]
-        max_page = len(l)//size_per_page + \
-            1 if len(l) % size_per_page != 0 else len(l)//size_per_page
-        p = self.provider.get_prompts_by_cache_list(
-            self.provider.session_dict[session_id], divide=True, paging=True, size=size_per_page, page=page)
-        return True, f"历史记录如下:\n{p}\n{page}页 | 共{max_page}\n*输入/his 2跳转到第2页", "his"
-
-    def token(self, session_id: str):
-        if self.provider is None:
-            return False, "未启用 OpenAI 官方 API", "token"
-        return True, f"会话的token数: {self.provider.get_user_usage_tokens(self.provider.session_dict[session_id])}\n系统最大缓存token数: {self.provider.max_tokens}", "token"
-
-    def gpt(self):
-        if self.provider is None:
-            return False, "未启用 OpenAI 官方 API", "gpt"
-        return True, f"OpenAI GPT配置:\n {self.provider.chatGPT_configs}", "gpt"
-
-    def status(self):
+        l = message.split(" ")
+        if len(l) == 2:
+            try:
+                page = int(l[1])
+            except BaseException as e:
+                return True, "页码不合法", "his"
+        contexts, total_num = self.provider.dump_contexts_page(session_id, size_per_page, page=page)
+        t_pages = total_num // size_per_page + 1
+        return True, f"历史记录如下:\n{contexts}\n{page} 页 | 共 {t_pages}\n*输入 /his 2 跳转到第 2 页", "his"
+
+    def status(self, session_id: str):
         if self.provider is None:
             return False, "未启用 OpenAI 官方 API", "status"
-        chatgpt_cfg_str = ""
-        key_stat = self.provider.get_key_stat()
-        index = 1
-        max = 9000000
-        gg_count = 0
-        total = 0
-        tag = ''
-        for key in key_stat.keys():
-            sponsor = ''
-            total += key_stat[key]['used']
-            if key_stat[key]['exceed']:
-                gg_count += 1
-                continue
-            if 'sponsor' in key_stat[key]:
-                sponsor = key_stat[key]['sponsor']
-            chatgpt_cfg_str += f" |-{index}: {key[-8:]} {key_stat[key]['used']}/{max} {sponsor}{tag}\n"
-            index += 1
-        return True, f"⭐使用情况({str(gg_count)}个已用):\n{chatgpt_cfg_str}", "status"
-
-    def key(self, message: str):
-        if self.provider is None:
-            return False, "未启用 OpenAI 官方 API", "reset"
-        l = message.split(" ")
-        if len(l) == 1:
-            msg = "感谢您赞助keykey为官方API使用请以以下格式赞助:\n/key xxxxx"
-            return True, msg, "key"
-        key = l[1]
-        if self.provider.check_key(key):
-            self.provider.append_key(key)
-            return True, f"*★,°*:.☆( ̄▽ ̄)/$:*.°★* 。\n该Key被验证为有效。感谢你的赞助~"
-        else:
-            return True, "该Key被验证为无效。也许是输入错误了或者重试。", "key"
+        keys_data = self.provider.get_keys_data()
+        ret = "OpenAI Key"
+        for k in keys_data:
+            status = "🟢" if keys_data[k] else "🔴"
+            ret += "\n|- " + k[:8] + " " + status
+        conf = self.provider.get_configs()
+        ret += "\n当前模型:" + conf['model']
+        if conf['model'] in MODELS:
+            ret += "\n最大上下文窗口:" + str(MODELS[conf['model']]) + " tokens"
+        if session_id in self.provider.session_memory and len(self.provider.session_memory[session_id]):
+            ret += "\n你的会话上下文:" + str(self.provider.session_memory[session_id][-1]['usage_tokens']) + " tokens"
+        return True, ret, "status"
 
     async def switch(self, message: str):
         '''
@@ -162,14 +180,13 @@ class CommandOpenAIOfficial(Command):
             return True, ret, "switch"
         elif len(l) == 2:
             try:
-                key_stat = self.provider.get_key_stat()
+                key_stat = self.provider.get_keys_data()
                 index = int(l[1])
                 if index > len(key_stat) or index < 1:
                     return True, "账号序号不合法。", "switch"
                 else:
                     try:
                         new_key = list(key_stat.keys())[index-1]
-                        ret = await self.provider.check_key(new_key)
                         self.provider.set_key(new_key)
                     except BaseException as e:
                         return True, "账号切换失败,原因: " + str(e), "switch"
@@ -218,58 +235,21 @@ class CommandOpenAIOfficial(Command):
                 'name': ps,
                 'prompt': personalities[ps]
             }
-            self.provider.session_dict[session_id] = []
-            new_record = {
-                "user": {
-                    "role": "user",
-                    "content": personalities[ps],
-                },
-                "AI": {
-                    "role": "assistant",
-                    "content": "好的,接下来我会扮演这个角色。"
-                },
-                'type': "personality",
-                'usage_tokens': 0,
-                'single-tokens': 0
-            }
-            self.provider.session_dict[session_id].append(new_record)
-            self.personality_str = message
+            self.provider.personality_set(ps, session_id)
             return True, f"人格{ps}已设置。", "set"
         else:
             self.provider.curr_personality = {
                 'name': '自定义人格',
                 'prompt': ps
             }
-            new_record = {
-                "user": {
-                    "role": "user",
-                    "content": ps,
-                },
-                "AI": {
-                    "role": "assistant",
-                    "content": "好的,接下来我会扮演这个角色。"
-                },
-                'type': "personality",
-                'usage_tokens': 0,
-                'single-tokens': 0
-            }
-            self.provider.session_dict[session_id] = []
-            self.provider.session_dict[session_id].append(new_record)
-            self.personality_str = message
+            self.provider.personality_set(ps, session_id)
            return True, f"自定义人格已设置。 \n人格信息: {ps}", "set"
 
-    async def draw(self, message):
+    async def draw(self, message: str):
         if self.provider is None:
             return False, "未启用 OpenAI 官方 API", "draw"
-        if message.startswith("/"):
-            message = message[2:]
-        elif message.startswith(""):
-            message = message[1:]
-        try:
-            # 画图模式传回3个参数
-            img_url = await self.provider.image_chat(message)
-            return True, img_url, "draw"
-        except Exception as e:
-            if 'exceeded' in str(e):
-                return f"OpenAI API错误。原因\n{str(e)} \n超额了。可自己搭建一个机器人(Github仓库QQChannelChatGPT)"
-            return False, f"图片生成失败: {e}", "draw"
+        message = message.removeprefix("/").removeprefix("")
+        img_url = await self.provider.image_generate(message)
+        p = await download_image_by_url(url=img_url)
+        with open(p, 'rb') as f:
+            return True, [Image.fromBytes(f.read())], "draw"


@@ -5,9 +5,10 @@ from nakuru import (
     FriendMessage
 )
 import botpy.message
-from cores.astrbot.types import MessageType, AstrBotMessage, MessageMember
+from type.message import *
 from typing import List, Union
-import time
+from util.general_utils import save_temp_img
+import time, base64

 # Convert QQ official message types
@@ -18,11 +19,14 @@ def qq_official_message_parse(message: List[BaseMessageComponent]):
     for i in message:
         if isinstance(i, Plain):
             plain_text += i.text
-        elif isinstance(i, Image) and image_path == None:
-            if i.path is not None:
+        elif isinstance(i, Image) and not image_path:
+            if i.path:
                 image_path = i.path
+            elif i.file and i.file.startswith("base64://"):
+                img_data = base64.b64decode(i.file[9:])
+                image_path = save_temp_img(img_data)
             else:
-                image_path = i.file
+                image_path = save_temp_img(i.file)
     return plain_text, image_path

 # QQ official message type -> AstrBotMessage
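The `base64://` branch added above decodes an inline-encoded image payload and persists it to a temporary file. A minimal standalone sketch of that logic, with a hypothetical `save_temp_img` standing in for the repo's `util.general_utils.save_temp_img`:

```python
import base64
import tempfile

def save_temp_img(img_data: bytes) -> str:
    # hypothetical stand-in for util.general_utils.save_temp_img:
    # persist raw image bytes to a temp file and return its path
    with tempfile.NamedTemporaryFile(suffix=".jpg", delete=False) as f:
        f.write(img_data)
        return f.name

def resolve_image(file_field: str) -> str:
    # mirror the diff's branch: strip the "base64://" scheme and decode the rest
    if file_field.startswith("base64://"):
        img_data = base64.b64decode(file_field[9:])
        return save_temp_img(img_data)
    return file_field  # already a local path or URL
```

The `[9:]` slice skips exactly the nine characters of `base64://` before decoding.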


@@ -14,34 +14,40 @@ class Platform():
         Initialize the platform's interfaces
         '''
         self.message_handler = message_handler
+        self.cnt_receive = 0
+        self.cnt_reply = 0
         pass

     @abc.abstractmethod
-    async def handle_msg():
+    async def handle_msg(self):
         '''
         Handle an incoming message
         '''
+        self.cnt_receive += 1
         pass

     @abc.abstractmethod
-    async def reply_msg():
+    async def reply_msg(self):
         '''
         Reply to a message (passive send)
         '''
+        self.cnt_reply += 1
         pass

     @abc.abstractmethod
-    async def send_msg(target: Union[GuildMessage, GroupMessage, FriendMessage, str], message: Union[str, list]):
+    async def send_msg(self, target: Union[GuildMessage, GroupMessage, FriendMessage, str], message: Union[str, list]):
         '''
         Send a message (active send)
         '''
+        self.cnt_reply += 1
         pass

     @abc.abstractmethod
-    async def send(target: Union[GuildMessage, GroupMessage, FriendMessage, str], message: Union[str, list]):
+    async def send(self, target: Union[GuildMessage, GroupMessage, FriendMessage, str], message: Union[str, list]):
         '''
         Send a message (active send), same as send_msg()
         '''
+        self.cnt_reply += 1
         pass

     def parse_message_outline(self, message: Union[GuildMessage, GroupMessage, FriendMessage, str, list]) -> str:
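The counters added to `Platform` only accumulate because each subclass now awaits `super().handle_msg()` / `super().reply_msg()` before doing its own work. A minimal sketch of that pattern (abstract methods with bodies that subclasses call through `super()`; names here are illustrative, not the real subclasses):

```python
import abc
import asyncio

class Platform(abc.ABC):
    def __init__(self):
        self.cnt_receive = 0  # messages received
        self.cnt_reply = 0    # messages sent or replied

    @abc.abstractmethod
    async def handle_msg(self):
        # subclasses call `await super().handle_msg()` to keep stats
        self.cnt_receive += 1

    @abc.abstractmethod
    async def reply_msg(self):
        self.cnt_reply += 1

class EchoPlatform(Platform):
    async def handle_msg(self):
        await super().handle_msg()  # bump the receive counter

    async def reply_msg(self):
        await super().reply_msg()   # bump the reply counter

async def main():
    p = EchoPlatform()
    await p.handle_msg()
    await p.reply_msg()
    await p.reply_msg()
    return p.cnt_receive, p.cnt_reply

counts = asyncio.run(main())
```

Abstract methods in Python may carry an implementation; marking them `@abc.abstractmethod` still forces subclasses to override while letting them reuse the shared bookkeeping via `super()`.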


@@ -1,5 +1,6 @@
 from nakuru.entities.components import Plain, At, Image, Node
 from util import general_utils as gu
+from util.image_render.helper import text_to_image_base
 from util.cmd_config import CmdConfig
 import asyncio
 from nakuru import (
@@ -11,11 +12,16 @@ from nakuru import (
     Notify
 )
 from typing import Union
+from type.types import GlobalObject
 import time
 from ._platfrom import Platform
 from ._message_parse import nakuru_message_parse_rev
-from cores.astrbot.types import MessageType, AstrBotMessage, MessageMember
+from type.message import *
+from SparkleLogging.utils.core import LogManager
+from logging import Logger

+logger: Logger = LogManager.GetLogger(log_name='astrbot-core')

 class FakeSource:
@@ -25,7 +31,7 @@ class FakeSource:
 class QQGOCQ(Platform):
-    def __init__(self, cfg: dict, message_handler: callable, global_object) -> None:
+    def __init__(self, cfg: dict, message_handler: callable, global_object: GlobalObject) -> None:
         super().__init__(message_handler)
         self.loop = asyncio.new_event_loop()
@@ -34,18 +40,10 @@ class QQGOCQ(Platform):
         self.waiting = {}
         self.cc = CmdConfig()
         self.cfg = cfg
-        self.logger: gu.Logger = global_object.logger
-        try:
-            self.nick_qq = cfg['nick_qq']
-        except:
-            self.nick_qq = ["ai", "!", ""]
-        nick_qq = self.nick_qq
-        if isinstance(nick_qq, str):
-            nick_qq = [nick_qq]
+        self.context = global_object
         self.unique_session = cfg['uniqueSessionMode']
-        self.pic_mode = cfg['qq_pic_mode']
         self.client = CQHTTP(
             host=self.cc.get("gocq_host", "127.0.0.1"),
@@ -106,8 +104,9 @@ class QQGOCQ(Platform):
         self.client.run()

     async def handle_msg(self, message: AstrBotMessage):
-        self.logger.log(
-            f"{message.sender.nickname}/{message.sender.user_id} -> {self.parse_message_outline(message)}", tag="QQ_GOCQ")
+        await super().handle_msg()
+        logger.info(
+            f"{message.sender.nickname}/{message.sender.user_id} -> {self.parse_message_outline(message)}")
         assert isinstance(message.raw_message,
                           (GroupMessage, FriendMessage, GuildMessage))
@@ -129,8 +128,8 @@ class QQGOCQ(Platform):
             if message.type.value == "GroupMessage":
                 if str(i.qq) == str(message.self_id):
                     resp = True
-            elif isinstance(i, Plain):
-                for nick in self.nick_qq:
+            elif isinstance(i, Plain) and self.context.nick:
+                for nick in self.context.nick:
                     if nick != '' and i.text.strip().startswith(nick):
                         resp = True
                         break
@@ -178,6 +177,7 @@ class QQGOCQ(Platform):
     async def reply_msg(self,
                         message: Union[AstrBotMessage, GuildMessage, GroupMessage, FriendMessage],
                         result_message: list):
+        await super().reply_msg()
         """
         Plugin developers should use the send method instead of calling this one directly.
         """
@@ -188,8 +188,8 @@ class QQGOCQ(Platform):
         res = result_message
-        self.logger.log(
-            f"{source.user_id} <- {self.parse_message_outline(res)}", tag="QQ_GOCQ")
+        logger.info(
+            f"{source.user_id} <- {self.parse_message_outline(res)}")
         if isinstance(source, int):
             source = FakeSource("GroupMessage", source)
@@ -203,7 +203,7 @@ class QQGOCQ(Platform):
             res.append(Plain(text=res_str))
         # if image mode, put all Plain texts into a new picture.
-        if self.pic_mode and isinstance(res, list):
+        if self.cc.get("qq_pic_mode", False) and isinstance(res, list):
             plains = []
             news = []
             for i in res:
@@ -213,7 +213,8 @@ class QQGOCQ(Platform):
                     news.append(i)
             plains_str = "".join(plains).strip()
             if plains_str != "" and len(plains_str) > 50:
-                p = gu.create_markdown_image("".join(plains))
+                # p = gu.create_markdown_image("".join(plains))
+                p = await text_to_image_base(plains_str)
                 news.append(Image.fromFileSystem(p))
             res = news
@@ -256,6 +257,7 @@ class QQGOCQ(Platform):
         Message-sending interface provided to plugins.
         Parameters: the first may be a message object or a QQ group number; the second is the message content, either a message-chain list or plain text.
         '''
+        await super().reply_msg()
         try:
             await self.reply_msg(message, result_message)
         except BaseException as e:
@@ -267,22 +269,17 @@ class QQGOCQ(Platform):
         '''
         Same as send_msg()
         '''
+        await super().reply_msg()
         await self.reply_msg(to, res)

-    def create_text_image(title: str, text: str, max_width=30, font_size=20):
+    async def create_text_image(text: str):
         '''
         Convert text to an image.
-        title: the title
         text: the text content
-        max_width: maximum text width, default 30
-        font_size: font size, default 20
         Returns: the file path
         '''
         try:
-            img = gu.word2img(title, text, max_width, font_size)
-            p = gu.save_temp_img(img)
-            return p
+            return await text_to_image_base(text)
         except Exception as e:
             raise e
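The `qq_pic_mode` branch above splits a reply into `Plain` text segments and everything else, and renders the combined text to an image only when it exceeds 50 characters. A simplified sketch of that gate, with plain strings standing in for `Plain` components and a tuple standing in for the rendered image:

```python
def collapse_long_text(components, threshold=50):
    """Separate text parts from other parts; if the combined text is
    longer than `threshold`, replace it with a rendered-image stand-in
    (a ('image', text) tuple), mirroring the qq_pic_mode branch."""
    plains = [c for c in components if isinstance(c, str)]
    others = [c for c in components if not isinstance(c, str)]
    plains_str = "".join(plains).strip()
    if plains_str and len(plains_str) > threshold:
        # stand-in for `await text_to_image_base(plains_str)`
        others.append(("image", plains_str))
        return others
    return components  # short text: leave the message chain untouched
```

Short messages pass through untouched, so small replies stay as cheap text while long ones become a single image attachment.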


@@ -5,6 +5,8 @@ import botpy.message
 import re
 import asyncio
 import aiohttp
+import botpy.types
+import botpy.types.message
 from util import general_utils as gu
 from botpy.types.message import Reference
@@ -15,13 +17,17 @@ from ._message_parse import (
     qq_official_message_parse_rev,
     qq_official_message_parse
 )
-from cores.astrbot.types import MessageType, AstrBotMessage, MessageMember
+from type.message import *
 from typing import Union, List
-from nakuru.entities.components import BaseMessageComponent
+from nakuru.entities.components import *
+from util.image_render.helper import text_to_image_base
+from util.cmd_config import CmdConfig
+from SparkleLogging.utils.core import LogManager
+from logging import Logger

+logger: Logger = LogManager.GetLogger(log_name='astrbot-core')

 # QQ official bot framework
 class botClient(Client):
     def set_platform(self, platform: 'QQOfficial'):
         self.platform = platform
@@ -53,13 +59,13 @@ class QQOfficial(Platform):
         asyncio.set_event_loop(self.loop)
         self.waiting: dict = {}
+        self.cc = CmdConfig()
         self.cfg = cfg
         self.appid = cfg['qqbot']['appid']
         self.token = cfg['qqbot']['token']
         self.secret = cfg['qqbot_secret']
         self.unique_session = cfg['uniqueSessionMode']
-        self.logger: gu.Logger = global_object.logger

         qq_group = cfg['qqofficial_enable_group_message']
         if qq_group:
@@ -99,13 +105,14 @@ class QQOfficial(Platform):
         )

     async def handle_msg(self, message: AstrBotMessage):
+        await super().handle_msg()
         assert isinstance(message.raw_message, (botpy.message.Message,
                           botpy.message.GroupMessage, botpy.message.DirectMessage))
         is_group = message.type != MessageType.FRIEND_MESSAGE
         _t = "/私聊" if not is_group else ""
-        self.logger.log(
-            f"{message.sender.nickname}({message.sender.user_id}{_t}) -> {self.parse_message_outline(message)}", tag="QQ_OFFICIAL")
+        logger.info(
+            f"{message.sender.nickname}({message.sender.user_id}{_t}) -> {self.parse_message_outline(message)}")

         # parse out the session_id
         if self.unique_session or not is_group:
@@ -151,47 +158,70 @@ class QQOfficial(Platform):
         '''
         Reply to a channel message
         '''
+        await super().reply_msg()
         if isinstance(message, AstrBotMessage):
             source = message.raw_message
         else:
             source = message
         assert isinstance(source, (botpy.message.Message,
                           botpy.message.GroupMessage, botpy.message.DirectMessage))
-        self.logger.log(
-            f"{message.sender.nickname}({message.sender.user_id}) <- {self.parse_message_outline(res)}", tag="QQ_OFFICIAL")
+        logger.info(
+            f"{message.sender.nickname}({message.sender.user_id}) <- {self.parse_message_outline(res)}")

         plain_text = ''
         image_path = ''
         msg_ref = None
+        # if isinstance(res, list):
+        #     plain_text, image_path = qq_official_message_parse(res)
+        # elif isinstance(res, str):
+        #     plain_text = res
+        # if self.cfg['qq_pic_mode']:
+        #     # text to image, keeping any original images
+        #     if plain_text != '' or image_path != '':
+        #         if image_path is not None and image_path != '':
+        #             if image_path.startswith("http"):
+        #                 plain_text += "\n\n" + "![](" + image_path + ")"
+        #             else:
+        #                 plain_text += "\n\n" + \
+        #                     "![](file:///" + image_path + ")"
+        #         # image_path = gu.create_markdown_image("".join(plain_text))
+        #         image_path = await text_to_image_base("".join(plain_text))
+        #         plain_text = ""
+        # else:
+        #     if image_path is not None and image_path != '':
+        #         msg_ref = None
+        #         if image_path.startswith("http"):
+        #             async with aiohttp.ClientSession() as session:
+        #                 async with session.get(image_path) as response:
+        #                     if response.status == 200:
+        #                         image = PILImage.open(io.BytesIO(await response.read()))
+        #                         image_path = gu.save_temp_img(image)
+        if self.cc.get("qq_pic_mode", False):
+            plains = []
+            news = []
+            if isinstance(res, str):
+                res = [Plain(text=res, convert=False),]
+            for i in res:
+                if isinstance(i, Plain):
+                    plains.append(i.text)
+                else:
+                    news.append(i)
+            plains_str = "".join(plains).strip()
+            if plains_str and len(plains_str) > 50:
+                p = await text_to_image_base(plains_str, return_url=False)
+                with open(p, "rb") as f:
+                    news.append(Image.fromBytes(f.read()))
+            res = news
         if isinstance(res, list):
             plain_text, image_path = qq_official_message_parse(res)
-        elif isinstance(res, str):
+        else:
             plain_text = res
-        if self.cfg['qq_pic_mode']:
-            # text to image, keeping any original images
-            if plain_text != '' or image_path != '':
-                if image_path is not None and image_path != '':
-                    if image_path.startswith("http"):
-                        plain_text += "\n\n" + "![](" + image_path + ")"
-                    else:
-                        plain_text += "\n\n" + \
-                            "![](file:///" + image_path + ")"
-                image_path = gu.create_markdown_image("".join(plain_text))
-                plain_text = ""
-        else:
-            if image_path is not None and image_path != '':
-                msg_ref = None
-                if image_path.startswith("http"):
-                    async with aiohttp.ClientSession() as session:
-                        async with session.get(image_path) as response:
-                            if response.status == 200:
-                                image = PILImage.open(io.BytesIO(await response.read()))
-                                image_path = gu.save_temp_img(image)
-        if source is not None and image_path == '':  # file_image and message_reference cannot be passed together
+        if source and not image_path:  # file_image and message_reference cannot be passed together
             msg_ref = Reference(message_id=source.id,
                                 ignore_get_message_error=False)
@@ -210,7 +240,7 @@ class QQOfficial(Platform):
             data['guild_id'] = source.guild_id
         else:
             raise ValueError(f"未知的消息类型: {message.type}")
-        if image_path != '':
+        if image_path:
             data['file_image'] = image_path
         try:


@@ -5,90 +5,109 @@ import time
 import tiktoken
 import threading
 import traceback
+import base64
 from openai import AsyncOpenAI
 from openai.types.images_response import ImagesResponse
 from openai.types.chat.chat_completion import ChatCompletion
+from openai._exceptions import *
-from cores.database.conn import dbConn
+from persist.session import dbConn
 from model.provider.provider import Provider
 from util import general_utils as gu
 from util.cmd_config import CmdConfig
-from util.general_utils import Logger
+from SparkleLogging.utils.core import LogManager
+from logging import Logger
+from typing import List, Dict

+logger: Logger = LogManager.GetLogger(log_name='astrbot-core')

-abs_path = os.path.dirname(os.path.realpath(sys.argv[0])) + '/'
+MODELS = {
+    "gpt-4o": 128000,
+    "gpt-4o-2024-05-13": 128000,
+    "gpt-4-turbo": 128000,
+    "gpt-4-turbo-2024-04-09": 128000,
+    "gpt-4-turbo-preview": 128000,
+    "gpt-4-0125-preview": 128000,
+    "gpt-4-1106-preview": 128000,
+    "gpt-4-vision-preview": 128000,
+    "gpt-4-1106-vision-preview": 128000,
+    "gpt-4": 8192,
+    "gpt-4-0613": 8192,
+    "gpt-4-32k": 32768,
+    "gpt-4-32k-0613": 32768,
+    "gpt-3.5-turbo-0125": 16385,
+    "gpt-3.5-turbo": 16385,
+    "gpt-3.5-turbo-1106": 16385,
+    "gpt-3.5-turbo-instruct": 4096,
+    "gpt-3.5-turbo-16k": 16385,
+    "gpt-3.5-turbo-0613": 16385,
+    "gpt-3.5-turbo-16k-0613": 16385,
+}
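The `MODELS` table added above maps model names to context-window sizes; later hunks truncate prompts to the window minus a reserve kept for the completion. A sketch of that lookup-and-truncate step, using a naive whitespace "tokenizer" as a stand-in for the real `tiktoken` encoder:

```python
MODELS = {"gpt-4": 8192, "gpt-3.5-turbo": 16385}  # subset of the table above
RESERVE = 300  # tokens kept free for the model's completion, as in the diff

def truncate_prompt(prompt: str, model: str) -> str:
    # naive whitespace tokenizer as a stand-in for tiktoken's cl100k_base
    tokens = prompt.split()
    limit = MODELS.get(model)
    if limit is None:
        return prompt  # unknown model: no window info, leave the prompt as-is
    budget = limit - RESERVE
    return " ".join(tokens[:budget])
```

The real code encodes with `tiktoken`, slices the token ids to `MODELS[model] - 300`, and decodes back; the budget arithmetic is the same.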
 class ProviderOpenAIOfficial(Provider):
-    def __init__(self, cfg):
-        self.cc = CmdConfig()
-        self.logger = Logger()
-        self.key_list = []
-        # a length-1 string in cfg['key'] means the config format is wrong; raise immediately
-        for key in cfg['key']:
-            if len(key) == 1:
-                raise BaseException(
-                    "检查到了长度为 1 的Key。配置文件中的 openai.key 处的格式错误 (符号 - 的后面要加空格)。")
-        if cfg['key'] != '' and cfg['key'] != None:
-            self.key_list = cfg['key']
-        if len(self.key_list) == 0:
-            raise Exception("您打开了 OpenAI 模型服务,但是未填写 key。请前往填写。")
-        self.key_stat = {}
-        for k in self.key_list:
-            self.key_stat[k] = {'exceed': False, 'used': 0}
-        self.api_base = None
-        if 'api_base' in cfg and cfg['api_base'] != 'none' and cfg['api_base'] != '':
-            self.api_base = cfg['api_base']
-            self.logger.log(f"设置 api_base 为: {self.api_base}", tag="OpenAI")
+    def __init__(self, cfg) -> None:
+        super().__init__()
+        os.makedirs("data/openai", exist_ok=True)
+        self.cc = CmdConfig
+        self.key_data_path = "data/openai/keys.json"
+        self.api_keys = []
+        self.chosen_api_key = None
+        self.base_url = None
+        self.keys_data = {}  # records quota-exceeded status
+        if cfg['key']: self.api_keys = cfg['key']
+        if cfg['api_base']: self.base_url = cfg['api_base']
+        if not self.api_keys:
+            logger.warn("看起来你没有添加 OpenAI 的 API 密钥OpenAI LLM 能力将不会启用。")
+        else:
+            self.chosen_api_key = self.api_keys[0]
+            for key in self.api_keys:
+                self.keys_data[key] = True

+        # create the OpenAI client
         self.client = AsyncOpenAI(
-            api_key=self.key_list[0],
-            base_url=self.api_base
+            api_key=self.chosen_api_key,
+            base_url=self.base_url
         )
-        self.openai_model_configs: dict = cfg['chatGPTConfigs']
-        self.logger.log(
-            f'加载 OpenAI Chat Configs: {self.openai_model_configs}', tag="OpenAI")
-        self.openai_configs = cfg
-        # session cache
-        self.session_dict = {}
-        # maximum cached tokens
-        self.max_tokens = cfg['total_tokens_limit']
-        # history persistence interval
-        self.history_dump_interval = 20
-        self.enc = tiktoken.get_encoding("cl100k_base")
+        self.model_configs: Dict = cfg['chatGPTConfigs']
+        super().set_curr_model(self.model_configs['model'])
+        self.image_generator_model_configs: Dict = self.cc.get('openai_image_generate', None)
+        self.session_memory: Dict[str, List] = {}  # session memory
+        self.session_memory_lock = threading.Lock()
+        self.max_tokens = self.model_configs['max_tokens']  # context window size
+        self.tokenizer = tiktoken.get_encoding("cl100k_base")  # todo: switch tokenizer per model
+        self.DEFAULT_PERSONALITY = {
+            "name": "default",
+            "prompt": "你是一个很有帮助的 AI 助手。"
+        }
+        self.curr_personality = self.DEFAULT_PERSONALITY
+        self.session_personality = {}  # tracks whether a session already has its personality set
         # load history from the SQLite DB
         try:
             db1 = dbConn()
             for session in db1.get_all_session():
-                self.session_dict[session[0]] = json.loads(session[1])['data']
-            self.logger.log("读取历史记录成功。", tag="OpenAI")
+                self.session_memory_lock.acquire()
+                self.session_memory[session[0]] = json.loads(session[1])['data']
+                self.session_memory_lock.release()
         except BaseException as e:
-            self.logger.log("读取历史记录失败,但不影响使用。",
-                            level=gu.LEVEL_ERROR, tag="OpenAI")
-        # save history periodically
-        # create the dump timer thread
+            logger.warn(f"读取 OpenAI LLM 对话历史记录 失败{e}。仍可正常使用。")
         threading.Thread(target=self.dump_history, daemon=True).start()
-        # personality
-        self.curr_personality = {}

-    # dump history records
     def dump_history(self):
+        '''
+        Dump history records
+        '''
         time.sleep(10)
         db = dbConn()
         while True:
             try:
-                # print("dumping history...")
-                for key in self.session_dict:
-                    data = self.session_dict[key]
+                for key in self.session_memory:
+                    data = self.session_memory[key]
                     data_json = {
                         'data': data
                     }
@@ -96,326 +115,386 @@ class ProviderOpenAIOfficial(Provider):
                     db.update_session(key, json.dumps(data_json))
                 else:
                     db.insert_session(key, json.dumps(data_json))
-                # print("history dump finished")
+                logger.debug("已保存 OpenAI 会话历史记录")
             except BaseException as e:
                 print(e)
-            # dump every 10 minutes
-            time.sleep(10*self.history_dump_interval)
+            finally:
+                time.sleep(10*60)
     def personality_set(self, default_personality: dict, session_id: str):
+        if not default_personality: return
+        if session_id not in self.session_memory:
+            self.session_memory[session_id] = []
         self.curr_personality = default_personality
+        self.session_personality = {}  # reset
+        encoded_prompt = self.tokenizer.encode(default_personality['prompt'])
+        tokens_num = len(encoded_prompt)
+        model = self.model_configs['model']
+        if model in MODELS and tokens_num > MODELS[model] - 500:
+            default_personality['prompt'] = self.tokenizer.decode(encoded_prompt[:MODELS[model] - 500])
         new_record = {
             "user": {
-                "role": "user",
+                "role": "system",
                 "content": default_personality['prompt'],
             },
-            "AI": {
-                "role": "assistant",
-                "content": "好的,接下来我会扮演这个角色。"
-            },
-            'type': "personality",
-            'usage_tokens': 0,
-            'single-tokens': 0
+            'usage_tokens': 0,  # total tokens up to this entry
+            'single-tokens': 0  # tokens of this entry
         }
-        self.session_dict[session_id].append(new_record)
+        self.session_memory[session_id].append(new_record)

-    async def text_chat(self, prompt,
-                        session_id=None,
-                        image_url=None,
-                        function_call=None,
-                        extra_conf: dict = None,
-                        default_personality: dict = None):
-        if session_id is None:
-            session_id = "unknown"
-            if "unknown" in self.session_dict:
-                del self.session_dict["unknown"]
-        # session bookkeeping
-        if session_id not in self.session_dict:
-            self.session_dict[session_id] = []
-        if len(self.session_dict[session_id]) == 0:
-            # apply the default personality
-            if default_personality is not None:
-                self.personality_set(default_personality, session_id)
-        # truncate the message with tiktoken
-        _encoded_prompt = self.enc.encode(prompt)
-        if self.openai_model_configs['max_tokens'] < len(_encoded_prompt):
-            prompt = self.enc.decode(_encoded_prompt[:int(
-                self.openai_model_configs['max_tokens']*0.80)])
-            self.logger.log(f"注意,有一部分 prompt 文本由于超出 token 限制而被截断。",
-                            level=gu.LEVEL_WARNING, tag="OpenAI")
-        # convert to the format OpenAI expects
-        cache_data_list, new_record, req = self.wrap(
-            prompt, session_id, image_url)
-        self.logger.log(f"cache: {str(cache_data_list)}",
-                        level=gu.LEVEL_DEBUG, tag="OpenAI")
-        self.logger.log(f"request: {str(req)}",
-                        level=gu.LEVEL_DEBUG, tag="OpenAI")
-        retry = 0
-        response = None
-        err = ''
-        # truncation ratio
-        truncate_rate = 0.75
-        use_gpt4v = False
-        for i in req:
-            if isinstance(i['content'], list):
-                use_gpt4v = True
-                break
-        if image_url is not None:
-            use_gpt4v = True
-        if use_gpt4v:
-            conf = self.openai_model_configs.copy()
-            conf['model'] = 'gpt-4-vision-preview'
-        else:
-            conf = self.openai_model_configs
-        if extra_conf is not None:
-            conf.update(extra_conf)
-        while retry < 10:
-            try:
-                if function_call is None:
-                    response = await self.client.chat.completions.create(
-                        messages=req,
-                        **conf
-                    )
-                else:
-                    response = await self.client.chat.completions.create(
-                        messages=req,
-                        tools=function_call,
-                        **conf
-                    )
-                break
-            except Exception as e:
-                traceback.print_exc()
-                if 'Invalid content type. image_url is only supported by certain models.' in str(e):
-                    raise e
-                if 'You exceeded' in str(e) or 'Billing hard limit has been reached' in str(e) or 'No API key provided' in str(e) or 'Incorrect API key provided' in str(e):
-                    self.logger.log("当前 Key 已超额或异常, 正在切换",
-                                    level=gu.LEVEL_WARNING, tag="OpenAI")
-                    self.key_stat[self.client.api_key]['exceed'] = True
-                    is_switched = self.handle_switch_key()
-                    if not is_switched:
-                        raise e
-                    retry -= 1
-                elif 'maximum context length' in str(e):
-                    self.logger.log("token 超限, 清空对应缓存,并进行消息截断", tag="OpenAI")
-                    self.session_dict[session_id] = []
-                    prompt = prompt[:int(len(prompt)*truncate_rate)]
-                    truncate_rate -= 0.05
-                    cache_data_list, new_record, req = self.wrap(
-                        prompt, session_id)
-                elif 'Limit: 3 / min. Please try again in 20s.' in str(e) or "OpenAI response error" in str(e):
-                    time.sleep(30)
-                    continue
-                else:
-                    self.logger.log(str(e), level=gu.LEVEL_ERROR, tag="OpenAI")
-                    time.sleep(2)
-                err = str(e)
-                retry += 1
-        if retry >= 10:
-            self.logger.log(
-                r"如果报错, 且您的机器在中国大陆内, 请确保您的电脑已经设置好代理软件(梯子), 并在配置文件设置了系统代理地址。详见 https://github.com/Soulter/QQChannelChatGPT/wiki", tag="OpenAI")
-            raise BaseException("连接出错: "+str(err))
-        assert isinstance(response, ChatCompletion)
-        self.logger.log(
-            f"OPENAI RESPONSE: {response.usage}", level=gu.LEVEL_DEBUG, tag="OpenAI")
-        # classify the result
+    async def encode_image_bs64(self, image_url: str) -> str:
+        '''
+        Convert an image to base64
+        '''
+        if image_url.startswith("http"):
+            image_url = await gu.download_image_by_url(image_url)
+        with open(image_url, "rb") as f:
+            image_bs64 = base64.b64encode(f.read()).decode()
+            return "data:image/jpeg;base64," + image_bs64

+    async def retrieve_context(self, session_id: str):
+        '''
+        Retrieve the saved OpenAI-format context for a session_id
+        '''
+        if session_id not in self.session_memory:
+            raise Exception("会话 ID 不存在")
+        context = []
+        is_lvm = await self.is_lvm()
+        for record in self.session_memory[session_id]:
+            if "user" in record and record['user']:
+                if not is_lvm and "content" in record['user'] and isinstance(record['user']['content'], list):
+                    logger.warn(f"由于当前模型 {self.model_configs['model']}不支持视觉,将忽略上下文中的图片输入。如果一直弹出此警告,可以尝试 reset 指令。")
+                    continue
+                context.append(record['user'])
+            if "AI" in record and record['AI']:
+                context.append(record['AI'])
+        return context
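The new `retrieve_context` flattens stored session records into an OpenAI-style message list, skipping multimodal user turns when the model lacks vision support. A standalone sketch of that flattening (plain dicts, no provider state):

```python
def retrieve_context(records, supports_vision: bool):
    """Flatten session records into OpenAI-style messages, dropping
    multimodal user turns when the model lacks vision (mirrors the
    is_lvm check in the diff)."""
    context = []
    for record in records:
        user = record.get("user")
        if user:
            if not supports_vision and isinstance(user.get("content"), list):
                continue  # image input unsupported: skip this whole turn
            context.append(user)
        ai = record.get("AI")
        if ai:
            context.append(ai)
    return context
```

Note the `continue` skips the assistant half of the turn too, so the context never contains an answer to a question the model cannot see.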
-        choice = response.choices[0]
-        if choice.message.content != None:
-            # text response
-            chatgpt_res = str(choice.message.content).strip()
-        elif choice.message.tool_calls != None and len(choice.message.tool_calls) > 0:
+    async def is_lvm(self):
+        '''
+        Whether the current model is an LVM
+        '''
+        return self.model_configs['model'].startswith("gpt-4")

+    async def get_models(self):
+        '''
+        List all models
+        '''
+        models = await self.client.models.list()
+        logger.info(f"OpenAI 模型列表:{models}")
+        return models

+    async def assemble_context(self, session_id: str, prompt: str, image_url: str = None):
+        '''
+        Assemble the context and truncate it to the current context window
+        '''
+        if session_id not in self.session_memory:
+            raise Exception("会话 ID 不存在")
+        tokens_num = len(self.tokenizer.encode(prompt))
+        previous_total_tokens_num = 0 if not self.session_memory[session_id] else self.session_memory[session_id][-1]['usage_tokens']
+        message = {
+            "usage_tokens": previous_total_tokens_num + tokens_num,
+            "single_tokens": tokens_num,
+            "AI": None
+        }
+        if image_url:
+            user_content = {
+                "role": "user",
+                "content": [
+                    {
+                        "type": "text",
+                        "text": prompt
+                    },
+                    {
+                        "type": "image_url",
+                        "image_url": {
+                            "url": await self.encode_image_bs64(image_url)
+                        }
+                    }
+                ]
+            }
+        else:
+            user_content = {
+                "role": "user",
+                "content": prompt
+            }
+        message["user"] = user_content
+        self.session_memory[session_id].append(message)
+        # evict surplus records according to the model's context window
+        curr_model = self.model_configs['model']
+        if curr_model in MODELS:
+            maxium_tokens_num = MODELS[curr_model] - 300  # reserve at least 300 for the completion
+            # if message['usage_tokens'] > maxium_tokens_num:
+            #     # evict surplus records so the final usage_tokens stays below maxium_tokens_num - 300
+            #     contexts = self.session_memory[session_id]
+            #     need_to_remove_idx = 0
+            #     freed_tokens_num = contexts[0]['single-tokens']
+            #     while freed_tokens_num < message['usage_tokens'] - maxium_tokens_num:
+            #         need_to_remove_idx += 1
+            #         freed_tokens_num += contexts[need_to_remove_idx]['single-tokens']
+            #     # update usage_tokens of all later records
+            #     for i in range(len(contexts)):
+            #         if i > need_to_remove_idx:
+            #             contexts[i]['usage_tokens'] -= freed_tokens_num
+            #     logger.debug(f"淘汰上下文记录 {need_to_remove_idx+1} 条,释放 {freed_tokens_num} 个 token。当前上下文总 token 为 {contexts[-1]['usage_tokens']}。")
+            #     self.session_memory[session_id] = contexts[need_to_remove_idx+1:]
+            while len(self.session_memory[session_id]) and self.session_memory[session_id][-1]['usage_tokens'] > maxium_tokens_num:
+                self.pop_record(session_id)

+    async def pop_record(self, session_id: str, pop_system_prompt: bool = False):
+        '''
+        Pop the first record
+        '''
+        if session_id not in self.session_memory:
+            raise Exception("会话 ID 不存在")
+        if len(self.session_memory[session_id]) == 0:
+            return None
+        for i in range(len(self.session_memory[session_id])):
+            # check whether this is a system prompt
+            if not pop_system_prompt and self.session_memory[session_id][i]['user']['role'] == "system":
+                # keep it if it is the only system prompt
+                f = False
+                for j in range(i+1, len(self.session_memory[session_id])):
+                    if self.session_memory[session_id][j]['user']['role'] == "system":
+                        f = True
+                        break
+                if not f:
+                    continue
+            record = self.session_memory[session_id].pop(i)
+            break
+        # update usage_tokens of all remaining records
+        for i in range(len(self.session_memory[session_id])):
+            self.session_memory[session_id][i]['usage_tokens'] -= record['single-tokens']
+        logger.debug(f"淘汰上下文记录 1 条,释放 {record['single-tokens']} 个 token。当前上下文总 token 为 {self.session_memory[session_id][-1]['usage_tokens']}")
+        return record
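`assemble_context` and `pop_record` above evict the oldest non-system records until the running token total fits the context window. A simplified version of that eviction loop, with each record carrying its own token cost (mirroring the `single-tokens` bookkeeping):

```python
def evict_until_fits(records, max_tokens):
    """Drop the oldest non-system records until the cumulative token
    count of the survivors is within max_tokens; the system (persona)
    prompt is skipped, like pop_record with pop_system_prompt=False."""
    total = sum(r["tokens"] for r in records)
    kept = list(records)
    i = 0
    while total > max_tokens and i < len(kept):
        if kept[i].get("role") == "system":
            i += 1  # keep the persona prompt in place
            continue
        total -= kept.pop(i)["tokens"]
    return kept, total
```

The real code additionally rewrites each surviving record's cumulative `usage_tokens` after every pop, since those values are running totals rather than per-record costs.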
async def text_chat(self,
prompt: str,
session_id: str,
image_url: None=None,
tools: None=None,
extra_conf: Dict = None,
**kwargs
) -> str:
super().accu_model_stat()
if not session_id:
session_id = "unknown"
if "unknown" in self.session_memory:
del self.session_memory["unknown"]
if session_id not in self.session_memory:
self.session_memory[session_id] = []
if session_id not in self.session_personality or not self.session_personality[session_id]:
self.personality_set(self.curr_personality, session_id)
self.session_personality[session_id] = True
# 如果 prompt 超过了最大窗口,截断。
# 1. 可以保证之后 pop 的时候不会出现问题
# 2. 可以保证不会超过最大 token 数
_encoded_prompt = self.tokenizer.encode(prompt)
curr_model = self.model_configs['model']
if curr_model in MODELS and len(_encoded_prompt) > MODELS[curr_model] - 300:
_encoded_prompt = _encoded_prompt[:MODELS[curr_model] - 300]
prompt = self.tokenizer.decode(_encoded_prompt)
# 组装上下文,并且根据当前上下文窗口大小截断
await self.assemble_context(session_id, prompt, image_url)
# 获取上下文openai 格式
contexts = await self.retrieve_context(session_id)
conf = self.model_configs
if extra_conf: conf.update(extra_conf)
# start request
retry = 0
rate_limit_retry = 0
while retry < 3 or rate_limit_retry < 5:
logger.debug(conf)
logger.debug(contexts)
if tools:
completion_coro = self.client.chat.completions.create(
messages=contexts,
tools=tools,
**conf
)
else:
completion_coro = self.client.chat.completions.create(
messages=contexts,
**conf
)
try:
completion = await completion_coro
break
except AuthenticationError as e:
api_key = self.chosen_api_key[10:] + "..."
logger.error(f"OpenAI API Key {api_key} 验证错误。详细原因:{e}。正在切换到下一个可用的 Key如果有的话")
self.keys_data[self.chosen_api_key] = False
ok = await self.switch_to_next_key()
if ok: continue
else: raise Exception("所有 OpenAI API Key 目前都不可用。")
except BadRequestError as e:
logger.warn(f"OpenAI 请求异常:{e}")
if "image_url is only supported by certain models." in str(e):
raise Exception(f"当前模型 { self.model_configs['model'] } 不支持图片输入,请更换模型。")
retry += 1
except RateLimitError as e:
if "You exceeded your current quota" in str(e):
self.keys_data[self.chosen_api_key] = False
ok = await self.switch_to_next_key()
if ok: continue
else: raise Exception("所有 OpenAI API Key 目前都不可用。")
logger.error(f"OpenAI API Key {self.chosen_api_key} 达到请求速率限制或者官方服务器当前超载。详细原因:{e}")
await self.switch_to_next_key()
rate_limit_retry += 1
time.sleep(1)
except Exception as e:
retry += 1
if retry >= 3:
logger.error(traceback.format_exc())
raise Exception(f"OpenAI 请求失败:{e}。重试次数已达到上限。")
if "maximum context length" in str(e):
logger.warn(f"OpenAI 请求失败:{e}。上下文长度超过限制。尝试弹出最早的记录然后重试。")
self.pop_record(session_id)
logger.warning(f"OpenAI 请求失败:{e}。重试第 {retry} 次。")
time.sleep(1)
assert isinstance(completion, ChatCompletion)
logger.debug(f"openai completion: {completion.usage}")
choice = completion.choices[0]
usage_tokens = completion.usage.total_tokens
completion_tokens = completion.usage.completion_tokens
self.session_memory[session_id][-1]['usage_tokens'] = usage_tokens
self.session_memory[session_id][-1]['single_tokens'] += completion_tokens
if choice.message.content:
    # 返回文本
    completion_text = str(choice.message.content).strip()
elif choice.message.tool_calls:
    # tools call (function calling)
    return choice.message.tool_calls[0].function
self.session_memory[session_id][-1]['AI'] = {
    "role": "assistant",
    "content": completion_text
}
return completion_text

async def switch_to_next_key(self):
    '''
    切换到下一个 API Key
    '''
    if not self.api_keys:
        logger.error("OpenAI API Key 不存在。")
        return False
    for key in self.keys_data:
        if self.keys_data[key]:
            # 没超额
            self.chosen_api_key = key
            self.client.api_key = key
            logger.info(f"OpenAI 切换到 API Key {key[:10]}... 成功。")
            return True
    return False
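The rotation rule above can be shown as a minimal standalone sketch: `keys_data` maps each API key to True (usable) or False (exhausted), and the first usable key wins. The names here are illustrative, not the class's API.

```python
from typing import Optional

# First key still marked usable wins; None means every key is exhausted.
def pick_next_key(keys_data: dict) -> Optional[str]:
    for key, usable in keys_data.items():
        if usable:
            return key
    return None

keys = {"sk-aaa": False, "sk-bbb": True, "sk-ccc": True}
print(pick_next_key(keys))  # sk-bbb
```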
async def image_generate(self, prompt: str, session_id: str = None, **kwargs) -> str:
'''
生成图片
'''
retry = 0
conf = self.image_generator_model_configs
if not conf:
    logger.error("OpenAI 图片生成模型配置不存在。")
    raise Exception("OpenAI 图片生成模型配置不存在。")
super().accu_model_stat(model=conf['model'])
while retry < 3:
try:
images_response = await self.client.images.generate(
prompt=prompt,
**conf
)
image_url = images_response.data[0].url
return image_url
except Exception as e:
retry += 1
if retry >= 3:
logger.error(traceback.format_exc())
raise Exception(f"OpenAI 图片生成请求失败:{e}。重试次数已达到上限。")
logger.warning(f"OpenAI 图片生成请求失败:{e}。重试第 {retry} 次。")
time.sleep(1)
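The bounded-retry loop used by `image_generate` can be sketched standalone: try up to `max_retries` times, pause between attempts, and surface the last error once the limit is hit. `do_request` is a hypothetical stand-in for the OpenAI call.

```python
import time

def with_retries(do_request, max_retries: int = 3, delay: float = 1.0):
    # Retry the callable, sleeping `delay` seconds between failed attempts.
    for attempt in range(1, max_retries + 1):
        try:
            return do_request()
        except Exception as e:
            if attempt >= max_retries:
                raise Exception(f"请求失败:{e}。重试次数已达到上限。")
            time.sleep(delay)

calls = {"n": 0}
def flaky():
    # Fails twice, then succeeds — simulates a transient API error.
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("boom")
    return "ok"

print(with_retries(flaky, delay=0.0))  # ok
```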
async def forget(self, session_id=None, keep_system_prompt: bool=False) -> bool:
if session_id is None: return False
self.session_memory[session_id] = []
if keep_system_prompt:
self.personality_set(self.curr_personality, session_id)
else:
self.curr_personality = self.DEFAULT_PERSONALITY
return True
def dump_contexts_page(self, session_id: str, size=5, page=1):
    '''
    获取缓存的会话
    '''
    contexts_str = ""
    if session_id in self.session_memory:
        for record in self.session_memory[session_id]:
            if "user" in record and record['user']:
                text = record['user']['content'][:100] + "..." if len(record['user']['content']) > 100 else record['user']['content']
                contexts_str += f"User: {text}\n"
            if "AI" in record and record['AI']:
                text = record['AI']['content'][:100] + "..." if len(record['AI']['content']) > 100 else record['AI']['content']
                contexts_str += f"Assistant: {text}\n"
        return contexts_str, len(self.session_memory[session_id])
    return "会话 ID 不存在。", 0
def wrap(self, prompt, session_id, image_url=None):
if image_url is not None:
prompt = [
{
"type": "text",
"text": prompt
},
{
"type": "image_url",
"image_url": {
"url": image_url
}
}
]
# 获得缓存信息
context = self.session_dict[session_id]
new_record = {
"user": {
"role": "user",
"content": prompt,
},
"AI": {},
'type': "common",
'usage_tokens': 0,
}
req_list = []
for i in context:
if 'user' in i:
req_list.append(i['user'])
if 'AI' in i:
req_list.append(i['AI'])
req_list.append(new_record['user'])
return context, new_record, req_list
def handle_switch_key(self):
is_all_exceed = True
for key in self.key_stat:
if key == None or self.key_stat[key]['exceed']:
continue
is_all_exceed = False
self.client.api_key = key
self.logger.log(
f"切换到 Key: {key}(已使用 token: {self.key_stat[key]['used']})", level=gu.LEVEL_INFO, tag="OpenAI")
break
if is_all_exceed:
self.logger.log(
"所有 Key 已超额", level=gu.LEVEL_CRITICAL, tag="OpenAI")
return False
return True
def set_model(self, model: str):
self.model_configs['model'] = model
self.cc.put_by_dot_str("openai.chatGPTConfigs.model", model)
super().set_curr_model(model)
def get_configs(self):
    return self.model_configs

def get_keys_data(self):
    return self.keys_data

def get_curr_key(self):
    return self.chosen_api_key

def set_key(self, key):
    self.client.api_key = key
# 添加key
def append_key(self, key, sponsor):
self.key_list.append(key)
self.key_stat[key] = {'exceed': False, 'used': 0, 'sponsor': sponsor}
# 检查key是否可用
async def check_key(self, key):
client_ = AsyncOpenAI(
api_key=key,
base_url=self.api_base
)
messages = [{"role": "user", "content": "please just echo `test`"}]
await client_.chat.completions.create(
messages=messages,
**self.openai_model_configs
)
return True


@@ -1,9 +1,32 @@
from collections import defaultdict
class Provider:
def __init__(self) -> None:
self.model_stat = defaultdict(int) # 用于记录 LLM Model 使用数据
self.curr_model_name = "unknown"
def reset_model_stat(self):
self.model_stat.clear()
def set_curr_model(self, model_name: str):
self.curr_model_name = model_name
def get_curr_model(self):
'''
返回当前正在使用的 LLM
'''
return self.curr_model_name
def accu_model_stat(self, model: str = None):
if not model:
model = self.get_curr_model()
self.model_stat[model] += 1
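The bookkeeping behind `accu_model_stat` is simply a `defaultdict(int)` counter, shown here standalone: unseen model names start at 0, so `+= 1` always works without a key check.

```python
from collections import defaultdict

# Per-model call counter; missing keys default to 0.
model_stat = defaultdict(int)

def accu_model_stat(model: str):
    model_stat[model] += 1

accu_model_stat("gpt-4o")
accu_model_stat("gpt-4o")
accu_model_stat("gpt-3.5-turbo")
print(dict(model_stat))  # {'gpt-4o': 2, 'gpt-3.5-turbo': 1}
```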
async def text_chat(self,
                    prompt: str,
                    session_id: str,
                    image_url: None = None,
                    tools: None = None,
                    extra_conf: dict = None,
                    default_personality: dict = None,
                    **kwargs) -> str:
@@ -14,11 +37,11 @@ class Provider:
    [optional]
    image_url: 图片url,识图
    tools: 函数调用工具
    extra_conf: 额外配置
    default_personality: 默认人格
    '''
    raise NotImplementedError()

async def image_generate(self, prompt, session_id, **kwargs) -> str:
    '''
@@ -26,10 +49,10 @@ class Provider:
    prompt: 提示词
    session_id: 会话id
    '''
    raise NotImplementedError()

async def forget(self, session_id=None) -> bool:
    '''
    重置会话
    '''
    raise NotImplementedError()


@@ -1,13 +1,16 @@
import sqlite3
import os
import shutil
import time
from typing import Tuple

class dbConn():
    def __init__(self):
        db_path = "data/data.db"
        if os.path.exists("data.db"):
            shutil.copy("data.db", db_path)
        conn = sqlite3.connect(db_path)
        conn.text_factory = str
        self.conn = conn
        c = conn.cursor()


@@ -4,15 +4,15 @@ requests
openai~=1.2.3
qq-botpy
chardet~=5.1.0
Pillow
nakuru-project
beautifulsoup4
googlesearch-python
tiktoken
readability-lxml
baidu-aip
websockets
flask
psutil
lxml_html_clean
SparkleLogging

Binary file not shown.

type/command.py Normal file

@@ -0,0 +1,28 @@
from typing import Union, List, Callable
from dataclasses import dataclass
@dataclass
class CommandItem():
'''
用来描述单个指令
'''
command_name: Union[str, tuple] # 指令名
callback: Callable # 回调函数
description: str # 描述
origin: str # 注册来源
class CommandResult():
'''
用于在Command中返回多个值
'''
def __init__(self, hit: bool, success: bool = False, message_chain: list = None, command_name: str = "unknown_command") -> None:
    self.hit = hit
    self.success = success
    self.message_chain = message_chain if message_chain is not None else []
self.command_name = command_name
def _result_tuple(self):
return (self.success, self.message_chain, self.command_name)
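A hypothetical usage of `CommandResult`, re-declared here so the sketch runs standalone (with the shared-mutable-default pitfall avoided); the "ping"/"pong" values are made up.

```python
class CommandResult():
    def __init__(self, hit: bool, success: bool = False, message_chain: list = None, command_name: str = "unknown_command") -> None:
        self.hit = hit
        self.success = success
        # Avoid a shared mutable default: each instance gets its own list.
        self.message_chain = message_chain if message_chain is not None else []
        self.command_name = command_name

    def _result_tuple(self):
        return (self.success, self.message_chain, self.command_name)

res = CommandResult(hit=True, success=True, message_chain=["pong"], command_name="ping")
print(res._result_tuple())  # (True, ['pong'], 'ping')
```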

type/config.py Normal file

@@ -0,0 +1 @@
VERSION = '3.2.4'

type/message.py Normal file

@@ -0,0 +1,62 @@
from enum import Enum
from typing import List
from dataclasses import dataclass
from nakuru.entities.components import BaseMessageComponent
from type.register import RegisteredPlatform
from type.types import GlobalObject
class MessageType(Enum):
GROUP_MESSAGE = 'GroupMessage' # 群组形式的消息
FRIEND_MESSAGE = 'FriendMessage' # 私聊、好友等单聊消息
GUILD_MESSAGE = 'GuildMessage' # 频道消息
@dataclass
class MessageMember():
user_id: str # 发送者id
nickname: str = None
class AstrBotMessage():
'''
AstrBot 的消息对象
'''
tag: str # 消息来源标签
type: MessageType # 消息类型
self_id: str # 机器人的识别id
session_id: str # 会话id
message_id: str # 消息id
sender: MessageMember # 发送者
message: List[BaseMessageComponent] # 消息链使用 Nakuru 的消息链格式
message_str: str # 最直观的纯文本消息字符串
raw_message: object
timestamp: int # 消息时间戳
def __str__(self) -> str:
return str(self.__dict__)
class AstrMessageEvent():
'''
消息事件。
'''
context: GlobalObject # 一些公用数据
message_str: str # 纯消息字符串
message_obj: AstrBotMessage # 消息对象
platform: RegisteredPlatform # 来源平台
role: str # 基本身份。`admin` 或 `member`
session_id: str # 会话 id
def __init__(self,
message_str: str,
message_obj: AstrBotMessage,
platform: RegisteredPlatform,
role: str,
context: GlobalObject,
session_id: str = None):
self.context = context
self.message_str = message_str
self.message_obj = message_obj
self.platform = platform
self.role = role
self.session_id = session_id

type/plugin.py Normal file

@@ -0,0 +1,27 @@
from enum import Enum
from dataclasses import dataclass
class PluginType(Enum):
PLATFORM = 'platform' # 平台类插件。
LLM = 'llm' # 大语言模型类插件
COMMON = 'common' # 其他插件
@dataclass
class PluginMetadata:
'''
插件的元数据。
'''
# required
plugin_name: str
plugin_type: PluginType
author: str # 插件作者
desc: str # 插件简介
version: str # 插件版本
# optional
repo: str = None # 插件仓库地址
def __str__(self) -> str:
return f"PluginMetadata({self.plugin_name}, {self.plugin_type}, {self.desc}, {self.version}, {self.repo})"
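Instantiating the metadata looks like this; the types are re-declared so the example is self-contained, and the `helloworld` values are illustrative only.

```python
from enum import Enum
from dataclasses import dataclass

class PluginType(Enum):
    PLATFORM = 'platform'
    LLM = 'llm'
    COMMON = 'common'

@dataclass
class PluginMetadata:
    plugin_name: str
    plugin_type: PluginType
    author: str
    desc: str
    version: str
    repo: str = None  # optional repository URL

meta = PluginMetadata("helloworld", PluginType.COMMON, "Soulter", "示例插件", "1.0")
print(meta.plugin_name, meta.repo)  # helloworld None
```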

type/register.py Normal file

@@ -0,0 +1,53 @@
from model.provider.provider import Provider as LLMProvider
from model.platform._platfrom import Platform
from type.plugin import *
from typing import List
from types import ModuleType
from dataclasses import dataclass
@dataclass
class RegisteredPlugin:
'''
注册在 AstrBot 中的插件。
'''
metadata: PluginMetadata
plugin_instance: object
module_path: str
module: ModuleType
root_dir_name: str
trig_cnt: int = 0
def reset_trig_cnt(self):
self.trig_cnt = 0
def trig(self):
self.trig_cnt += 1
def __str__(self) -> str:
return f"RegisteredPlugin({self.metadata}, {self.module_path}, {self.root_dir_name})"
RegisteredPlugins = List[RegisteredPlugin]
@dataclass
class RegisteredPlatform:
'''
注册在 AstrBot 中的平台。平台应当实现 Platform 接口。
'''
platform_name: str
platform_instance: Platform
origin: str = None # 注册来源
def __str__(self) -> str:
return self.platform_name
@dataclass
class RegisteredLLM:
'''
注册在 AstrBot 中的大语言模型调用。大语言模型应当实现 LLMProvider 接口。
'''
llm_name: str
llm_instance: LLMProvider
origin: str = None # 注册来源

type/types.py Normal file

@@ -0,0 +1,36 @@
from type.register import *
from typing import List
from logging import Logger
class GlobalObject:
'''
存放一些公用的数据,用于在不同模块(如core与command)之间传递
'''
version: str # 机器人版本
nick: tuple # 用户定义的机器人的别名
base_config: dict # config.json 中导出的配置
cached_plugins: List[RegisteredPlugin] # 加载的插件
platforms: List[RegisteredPlatform]
llms: List[RegisteredLLM]
web_search: bool # 是否开启了网页搜索
reply_prefix: str # 回复前缀
unique_session: bool # 是否开启了独立会话
default_personality: dict
dashboard_data = None
logger: Logger = None
def __init__(self):
self.nick = None # gocq 的昵称
self.base_config = None # config.yaml
self.cached_plugins = [] # 缓存的插件
self.web_search = False # 是否开启了网页搜索
self.reply_prefix = None
self.unique_session = False
self.platforms = []
self.llms = []
self.default_personality = None
self.dashboard_data = None
self.stat = {}

util/agent/web_searcher.py Normal file

@@ -0,0 +1,183 @@
import traceback
import random
import json
import asyncio
import aiohttp
import os
from readability import Document
from bs4 import BeautifulSoup
from openai.types.chat.chat_completion_message_tool_call import Function
from util.agent.func_call import FuncCall
from util.search_engine_scraper.config import HEADERS, USER_AGENTS
from util.search_engine_scraper.bing import Bing
from util.search_engine_scraper.sogo import Sogo
from util.search_engine_scraper.google import Google
from model.provider.provider import Provider
from SparkleLogging.utils.core import LogManager
from logging import Logger
logger: Logger = LogManager.GetLogger(log_name='astrbot-core')
bing_search = Bing()
sogo_search = Sogo()
google = Google()
proxy = os.environ.get("HTTPS_PROXY", None)
def tidy_text(text: str) -> str:
'''
清理文本,去除空格、换行符等
'''
return text.strip().replace("\n", " ").replace("\r", " ").replace("  ", " ")
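A runnable variant of the cleaning helper above: it trims the ends and collapses newlines, carriage returns, and doubled spaces into single spaces.

```python
def tidy_text(text: str) -> str:
    # Trim, then normalize line breaks and doubled spaces to single spaces.
    return text.strip().replace("\n", " ").replace("\r", " ").replace("  ", " ")

print(tidy_text("  hello\nworld  "))  # hello world
```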
# def special_fetch_zhihu(link: str) -> str:
# '''
# function-calling 函数, 用于获取知乎文章的内容
# '''
# response = requests.get(link, headers=HEADERS)
# response.encoding = "utf-8"
# soup = BeautifulSoup(response.text, "html.parser")
# if "zhuanlan.zhihu.com" in link:
# r = soup.find(class_="Post-RichTextContainer")
# else:
# r = soup.find(class_="List-item").find(class_="RichContent-inner")
# if r is None:
# print("debug: zhihu none")
# raise Exception("zhihu none")
# return tidy_text(r.text)
async def search_from_bing(keyword: str) -> str:
'''
tools, 从 bing 搜索引擎搜索
'''
logger.info("web_searcher - search_from_bing: " + keyword)
results = []
try:
results = await google.search(keyword, 5)
except BaseException as e:
logger.error(f"google search error: {e}, try the next one...")
if len(results) == 0:
logger.debug("search google failed")
try:
results = await bing_search.search(keyword, 5)
except BaseException as e:
logger.error(f"bing search error: {e}, try the next one...")
if len(results) == 0:
logger.debug("search bing failed")
try:
results = await sogo_search.search(keyword, 5)
except BaseException as e:
logger.error(f"sogo search error: {e}")
if len(results) == 0:
logger.debug("search sogo failed")
return "没有搜索到结果"
ret = ""
idx = 1
for i in results:
logger.info(f"web_searcher - scraping web: {i.title} - {i.url}")
try:
site_result = await fetch_website_content(i.url)
except:
site_result = ""
site_result = site_result[:600] + "..." if len(site_result) > 600 else site_result
ret += f"{idx}. {i.title} \n{i.snippet}\n{site_result}\n\n"
idx += 1
return ret
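The engine-fallback chain in `search_from_bing` (Google, then Bing, then Sogo) reduces to this standalone sketch: try each engine in order, swallow its errors, and return the first non-empty result list. The engines below are stubs, not the real scrapers.

```python
def search_with_fallback(keyword, engines):
    # Each engine is a callable keyword -> list of results.
    for engine in engines:
        try:
            results = engine(keyword)
        except Exception:
            results = []
        if results:
            return results
    return []

def broken(keyword):
    raise RuntimeError("engine down")

def working(keyword):
    return [f"result for {keyword}"]

print(search_with_fallback("astrbot", [broken, working]))  # ['result for astrbot']
```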
async def fetch_website_content(url):
header = HEADERS
header.update({'User-Agent': random.choice(USER_AGENTS)})
async with aiohttp.ClientSession() as session:
async with session.get(url, headers=header, timeout=6, proxy=proxy) as response:
html = await response.text(encoding="utf-8")
doc = Document(html)
ret = doc.summary(html_partial=True)
soup = BeautifulSoup(ret, 'html.parser')
ret = tidy_text(soup.get_text())
return ret
async def web_search(prompt, provider: Provider, session_id, official_fc=False):
'''
official_fc: 使用官方 function-calling
'''
new_func_call = FuncCall(provider)
new_func_call.add_func("web_search", [{
"type": "string",
"name": "keyword",
"description": "搜索关键词"
}],
"通过搜索引擎搜索。如果问题需要获取近期、实时的消息,在网页上搜索(如天气、新闻或任何需要通过网页获取信息的问题),则调用此函数;如果没有,不要调用此函数。",
search_from_bing
)
new_func_call.add_func("fetch_website_content", [{
"type": "string",
"name": "url",
"description": "要获取内容的网页链接"
}],
"获取网页的内容。如果问题带有合法的网页链接并且用户有需求了解网页内容(例如: `帮我总结一下 https://github.com 的内容`), 就调用此函数。如果没有,不要调用此函数。",
fetch_website_content
)
has_func = False
function_invoked_ret = ""
if official_fc:
# we use official function-calling
result = await provider.text_chat(prompt, session_id, tools=new_func_call.get_func())
if isinstance(result, Function):
logger.debug(f"web_searcher - function-calling: {result}")
func_obj = None
for i in new_func_call.func_list:
if i["name"] == result.name:
func_obj = i["func_obj"]
break
if not func_obj:
return await provider.text_chat(prompt, session_id) + "\n(网页搜索失败, 此为默认回复)"
try:
args = json.loads(result.arguments)
function_invoked_ret = await func_obj(**args)
has_func = True
except BaseException as e:
traceback.print_exc()
return await provider.text_chat(prompt, session_id) + "\n(网页搜索失败, 此为默认回复)"
else:
return result
else:
# we use our own function-calling
try:
args = {
'question': prompt,
'func_definition': new_func_call.func_dump(),
'is_task': False,
'is_summary': False,
}
function_invoked_ret, has_func = await asyncio.to_thread(new_func_call.func_call, **args)
except BaseException as e:
res = await provider.text_chat(prompt) + "\n(网页搜索失败, 此为默认回复)"
return res
has_func = True
if has_func:
await provider.forget(session_id)
summary_prompt = f"""
你是一个专业且高效的助手,你的任务是
1. 根据下面的相关材料对用户的问题 `{prompt}` 进行总结;
2. 简单地发表你对这个问题的简略看法。
# 例子
1. 从网上的信息来看,可以知道...我个人认为...你觉得呢?
2. 根据网上的最新信息,可以得知...我觉得...你怎么看?
# 限制
1. 限制在 200 字以内;
2. 请**直接输出总结**,不要输出多余的内容和提示语。
# 相关材料
{function_invoked_ret}"""
ret = await provider.text_chat(summary_prompt, session_id)
return ret
return function_invoked_ret


@@ -1,9 +1,9 @@
import os
import json
import yaml
from typing import Union

cpath = "data/cmd_config.json"
def check_exist(): def check_exist():
if not os.path.exists(cpath): if not os.path.exists(cpath):
@@ -89,8 +89,7 @@ def init_astrbot_config_items():
# 加载默认配置
cc = CmdConfig()
cc.init_attributes("qq_forward_threshold", 200)
cc.init_attributes("qq_welcome", "")
cc.init_attributes("qq_pic_mode", False)
cc.init_attributes("gocq_host", "127.0.0.1")
cc.init_attributes("gocq_http_port", 5700)
@@ -119,3 +118,28 @@ def init_astrbot_config_items():
cc.init_attributes("https_proxy", "")
cc.init_attributes("dashboard_username", "")
cc.init_attributes("dashboard_password", "")
def try_migrate_config():
'''
将 cmd_config.json 迁移至 data/cmd_config.json
'''
print("try migrate configs")
if os.path.exists("cmd_config.json"):
with open("cmd_config.json", "r", encoding="utf-8-sig") as f:
data = json.load(f)
with open("data/cmd_config.json", "w", encoding="utf-8-sig") as f:
json.dump(data, f, indent=2, ensure_ascii=False)
try:
os.remove("cmd_config.json")
except Exception as e:
pass
if not os.path.exists("cmd_config.json") and not os.path.exists("data/cmd_config.json"):
# 从 configs/config.yaml 上拿数据
configs_pth = os.path.abspath(os.path.join(os.path.abspath(__file__), "../../configs/config.yaml"))
with open(configs_pth, encoding='utf-8') as f:
data = yaml.load(f, Loader=yaml.Loader)
print(data)
with open("data/cmd_config.json", "w", encoding="utf-8-sig") as f:
json.dump(data, f, indent=2, ensure_ascii=False)
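The migration step in `try_migrate_config` boils down to: read the old JSON config, re-write it under `data/` with indentation and unescaped non-ASCII, then delete the original. A self-contained sketch (the `migrate` helper and its paths are illustrative):

```python
import json
import os
import tempfile

def migrate(old_path: str, new_path: str) -> bool:
    # Returns False when there is nothing to migrate.
    if not os.path.exists(old_path):
        return False
    with open(old_path, "r", encoding="utf-8-sig") as f:
        data = json.load(f)
    os.makedirs(os.path.dirname(new_path), exist_ok=True)
    with open(new_path, "w", encoding="utf-8-sig") as f:
        json.dump(data, f, indent=2, ensure_ascii=False)
    os.remove(old_path)
    return True

with tempfile.TemporaryDirectory() as tmp:
    old = os.path.join(tmp, "cmd_config.json")
    new = os.path.join(tmp, "data", "cmd_config.json")
    with open(old, "w", encoding="utf-8-sig") as f:
        json.dump({"nick": "astrbot"}, f)
    print(migrate(old, new), os.path.exists(new))  # True True
```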


@@ -1,300 +0,0 @@
import requests
import util.general_utils as gu
import traceback
import time
import json
import asyncio
from googlesearch import search, SearchResult
from readability import Document
from bs4 import BeautifulSoup
from openai.types.chat.chat_completion_message_tool_call import Function
from util.function_calling.func_call import (
FuncCall,
FuncCallJsonFormatError,
FuncNotFoundError
)
from model.provider.provider import Provider
def tidy_text(text: str) -> str:
'''
清理文本,去除空格、换行符等
'''
return text.strip().replace("\n", " ").replace("\r", " ").replace("  ", " ")
def special_fetch_zhihu(link: str) -> str:
'''
function-calling 函数, 用于获取知乎文章的内容
'''
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) \
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36"
}
response = requests.get(link, headers=headers)
response.encoding = "utf-8"
soup = BeautifulSoup(response.text, "html.parser")
if "zhuanlan.zhihu.com" in link:
r = soup.find(class_="Post-RichTextContainer")
else:
r = soup.find(class_="List-item").find(class_="RichContent-inner")
if r is None:
print("debug: zhihu none")
raise Exception("zhihu none")
return tidy_text(r.text)
def google_web_search(keyword) -> str:
'''
获取 google 搜索结果, 得到 title、desc、link
'''
ret = ""
index = 1
try:
ls = search(keyword, advanced=True, num_results=4)
for i in ls:
desc = i.description
try:
# gu.log(f"搜索网页: {i.url}", tag="网页搜索", level=gu.LEVEL_INFO)
desc = fetch_website_content(i.url)
except BaseException as e:
print(f"(google) fetch_website_content err: {str(e)}")
# gu.log(f"# No.{str(index)}\ntitle: {i.title}\nurl: {i.url}\ncontent: {desc}\n\n", level=gu.LEVEL_DEBUG, max_len=9999)
ret += f"# No.{str(index)}\ntitle: {i.title}\nurl: {i.url}\ncontent: {desc}\n\n"
index += 1
except Exception as e:
print(f"google search err: {str(e)}")
return web_keyword_search_via_bing(keyword)
return ret
def web_keyword_search_via_bing(keyword) -> str:
'''
获取bing搜索结果, 得到 title、desc、link
'''
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) \
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36"
}
url = "https://www.bing.com/search?q="+keyword
_cnt = 0
# _detail_store = []
while _cnt < 5:
try:
response = requests.get(url, headers=headers)
response.encoding = "utf-8"
# gu.log(f"bing response: {response.text}", tag="bing", level=gu.LEVEL_DEBUG, max_len=9999)
soup = BeautifulSoup(response.text, "html.parser")
res = ""
result_cnt = 0
ols = soup.find(id="b_results")
for i in ols.find_all("li", class_="b_algo"):
try:
title = i.find("h2").text
desc = i.find("p").text
link = i.find("h2").find("a").get("href")
# res.append({
# "title": title,
# "desc": desc,
# "link": link,
# })
try:
# gu.log(f"搜索网页: {link}", tag="网页搜索", level=gu.LEVEL_INFO)
desc = fetch_website_content(link)
except BaseException as e:
print(f"(bing) fetch_website_content err: {str(e)}")
res += f"# No.{str(result_cnt + 1)}\ntitle: {title}\nurl: {link}\ncontent: {desc}\n\n"
result_cnt += 1
if result_cnt > 5:
break
# if len(_detail_store) >= 3:
# continue
# # 爬取前两条的网页内容
# if "zhihu.com" in link:
# try:
# _detail_store.append(special_fetch_zhihu(link))
# except BaseException as e:
# print(f"zhihu parse err: {str(e)}")
# else:
# try:
# _detail_store.append(fetch_website_content(link))
# except BaseException as e:
# print(f"fetch_website_content err: {str(e)}")
except Exception as e:
print(f"bing parse err: {str(e)}")
if result_cnt == 0:
break
return res
except Exception as e:
# gu.log(f"bing fetch err: {str(e)}")
_cnt += 1
time.sleep(1)
# gu.log("fail to fetch bing info, using sougou.")
return web_keyword_search_via_sougou(keyword)
def web_keyword_search_via_sougou(keyword) -> str:
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) \
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36",
}
url = f"https://sogou.com/web?query={keyword}"
response = requests.get(url, headers=headers)
response.encoding = "utf-8"
soup = BeautifulSoup(response.text, "html.parser")
res = []
results = soup.find("div", class_="results")
for i in results.find_all("div", class_="vrwrap"):
try:
title = tidy_text(i.find("h3").text)
link = tidy_text(i.find("h3").find("a").get("href"))
if link.startswith("/link?url="):
link = "https://www.sogou.com" + link
res.append({
"title": title,
"link": link,
})
if len(res) >= 5: # 限制5条
break
except Exception as e:
pass
# gu.log(f"sougou parse err: {str(e)}", tag="web_keyword_search_via_sougou", level=gu.LEVEL_ERROR)
# 爬取网页内容
_detail_store = []
for i in res:
if len(_detail_store) >= 3:
break
try:
_detail_store.append(fetch_website_content(i["link"]))
except BaseException as e:
print(f"fetch_website_content err: {str(e)}")
ret = f"{str(res)}"
if len(_detail_store) > 0:
ret += f"\n网页内容: {str(_detail_store)}"
return ret
def fetch_website_content(url):
# gu.log(f"fetch_website_content: {url}", tag="fetch_website_content", level=gu.LEVEL_DEBUG)
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) \
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36"
}
response = requests.get(url, headers=headers, timeout=3)
response.encoding = "utf-8"
doc = Document(response.content)
# print('title:', doc.title())
ret = doc.summary(html_partial=True)
soup = BeautifulSoup(ret, 'html.parser')
ret = tidy_text(soup.get_text())
return ret
async def web_search(question, provider: Provider, session_id, official_fc=False):
'''
official_fc: 使用官方 function-calling
'''
new_func_call = FuncCall(provider)
new_func_call.add_func("google_web_search", [{
"type": "string",
"name": "keyword",
"description": "google search query (分词,尽量保留所有信息)"
}],
"通过搜索引擎搜索。如果问题需要获取近期、实时的消息,在网页上搜索(如天气、新闻或任何需要通过网页获取信息的问题),则调用此函数;如果没有,不要调用此函数。",
web_keyword_search_via_bing
)
new_func_call.add_func("fetch_website_content", [{
"type": "string",
"name": "url",
"description": "网址"
}],
"获取网页的内容。如果问题带有合法的网页链接(例如: `帮我总结一下 https://github.com 的内容`), 就调用此函数。如果没有,不要调用此函数。",
fetch_website_content
)
question1 = f"{question} \n> hint: 最多只能调用1个function, 并且存在不会调用任何function的可能性。"
has_func = False
function_invoked_ret = ""
if official_fc:
# we use official function-calling
func = await provider.text_chat(question1, session_id, function_call=new_func_call.get_func())
if isinstance(func, Function):
# 执行对应的结果:
func_obj = None
for i in new_func_call.func_list:
if i["name"] == func.name:
func_obj = i["func_obj"]
break
if not func_obj:
# gu.log("找不到返回的 func name " + func.name, level=gu.LEVEL_ERROR)
return await provider.text_chat(question1, session_id) + "\n(网页搜索失败, 此为默认回复)"
try:
args = json.loads(func.arguments)
# we use to_thread to avoid blocking the event loop
function_invoked_ret = await asyncio.to_thread(func_obj, **args)
has_func = True
except BaseException as e:
traceback.print_exc()
return await provider.text_chat(question1, session_id) + "\n(网页搜索失败, 此为默认回复)"
else:
# now func is a string
return func
else:
# we use our own function-calling
try:
args = {
'question': question1,
'func_definition': new_func_call.func_dump(),
'is_task': False,
'is_summary': False,
}
function_invoked_ret, has_func = await asyncio.to_thread(new_func_call.func_call, **args)
except BaseException as e:
res = await provider.text_chat(question) + "\n(网页搜索失败, 此为默认回复)"
return res
has_func = True
if has_func:
await provider.forget(session_id)
question3 = f"""
你的任务是:
1. 根据末尾的材料对问题`{question}`做切题的总结(详细);
2. 简单地发表你对这个问题的看法(简略)。
你的总结末尾应当有对材料的引用, 如果有链接, 请在末尾附上引用网页链接。引用格式严格按照 `\n[1] title url \n`。
不要提到任何函数调用的信息。
一些回复的消息模板:
模板1:
```
从网上的信息来看,可以知道...我个人认为...你觉得呢?
```
模板2:
```
根据网上的最新信息,可以得知...我觉得...你怎么看?
```
你可以根据这些模板来组织回答,但可以不照搬,要根据问题的内容来回答。
以下是相关材料:
"""
_c = 0
while _c < 3:
try:
print('text chat')
final_ret = await provider.text_chat(question3 + "```" + function_invoked_ret + "```", session_id)
return final_ret
except Exception as e:
print(e)
_c += 1
if _c == 3:
raise e
if "The message you submitted was too long" in str(e):
await provider.forget(session_id)
function_invoked_ret = function_invoked_ret[:int(
len(function_invoked_ret) / 2)]
time.sleep(3)
return function_invoked_ret


@@ -1,143 +1,24 @@
import time
import socket
import os
import re
import requests
import aiohttp
import json
import sys
import psutil
import ssl
import zipfile
import shutil
import stat

from PIL import Image, ImageDraw, ImageFont
from type.types import GlobalObject
from SparkleLogging.utils.core import LogManager
from logging import Logger

logger: Logger = LogManager.GetLogger(log_name='astrbot-core')

PLATFORM_GOCQ = 'gocq'
PLATFORM_QQCHAN = 'qqchan'

FG_COLORS = {
"black": "30",
"red": "31",
"green": "32",
"yellow": "33",
"blue": "34",
"purple": "35",
"cyan": "36",
"white": "37",
"default": "39",
}
BG_COLORS = {
"black": "40",
"red": "41",
"green": "42",
"yellow": "43",
"blue": "44",
"purple": "45",
"cyan": "46",
"white": "47",
"default": "49",
}
LEVEL_DEBUG = "DEBUG"
LEVEL_INFO = "INFO"
LEVEL_WARNING = "WARN"
LEVEL_ERROR = "ERROR"
LEVEL_CRITICAL = "CRITICAL"
# 为了兼容旧版
level_codes = {
LEVEL_DEBUG: logging.DEBUG,
LEVEL_INFO: logging.INFO,
LEVEL_WARNING: logging.WARNING,
LEVEL_ERROR: logging.ERROR,
LEVEL_CRITICAL: logging.CRITICAL,
}
level_colors = {
"INFO": "green",
"WARN": "yellow",
"ERROR": "red",
"CRITICAL": "purple",
}
class Logger:
def __init__(self) -> None:
self.history = []
def log(
self,
msg: str,
level: str = "INFO",
tag: str = "System",
fg: str = None,
bg: str = None,
max_len: int = 50000,
err: Exception = None,):
"""
日志打印函数
"""
_set_level_code = level_codes[LEVEL_INFO]
if 'LOG_LEVEL' in os.environ and os.environ['LOG_LEVEL'] in level_codes:
_set_level_code = level_codes[os.environ['LOG_LEVEL']]
if level in level_codes and level_codes[level] < _set_level_code:
return
if err is not None:
msg += "\n异常原因: " + str(err)
level = LEVEL_ERROR
if len(msg) > max_len:
msg = msg[:max_len] + "..."
now = datetime.datetime.now().strftime("%H:%M:%S")
pres = []
for line in msg.split("\n"):
if line == "\n":
pres.append("")
else:
pres.append(f"[{now}] [{tag}/{level}] {line}")
if level == "INFO":
if fg is None:
fg = FG_COLORS["green"]
if bg is None:
bg = BG_COLORS["default"]
elif level == "WARN":
if fg is None:
fg = FG_COLORS["yellow"]
if bg is None:
bg = BG_COLORS["default"]
elif level == "ERROR":
if fg is None:
fg = FG_COLORS["red"]
if bg is None:
bg = BG_COLORS["default"]
elif level == "CRITICAL":
if fg is None:
fg = FG_COLORS["purple"]
if bg is None:
bg = BG_COLORS["default"]
ret = ""
for line in pres:
ret += f"\033[{fg};{bg}m{line}\033[0m\n"
try:
requests.post("http://localhost:6185/api/log",
data=ret[:-1].encode(), timeout=1)
except BaseException as e:
pass
self.history.append(ret)
if len(self.history) > 100:
self.history = self.history[-100:]
print(ret[:-1])
log = Logger().log
def port_checker(port: int, host: str = "localhost"):
@@ -151,56 +32,16 @@ def port_checker(port: int, host: str = "localhost"):
        sk.close()
        return False
def get_font_path() -> str:
    if os.path.exists("resources/fonts/syst.otf"):
        font_path = "resources/fonts/syst.otf"
    elif os.path.exists("QQChannelChatGPT/resources/fonts/syst.otf"):
        font_path = "QQChannelChatGPT/resources/fonts/syst.otf"
    elif os.path.exists("AstrBot/resources/fonts/syst.otf"):
        font_path = "AstrBot/resources/fonts/syst.otf"
    elif os.path.exists("C:/Windows/Fonts/simhei.ttf"):
        font_path = "C:/Windows/Fonts/simhei.ttf"
    elif os.path.exists("/usr/share/fonts/opentype/noto/NotoSansCJK-Regular.ttc"):
        font_path = "/usr/share/fonts/opentype/noto/NotoSansCJK-Regular.ttc"
    else:
        raise Exception("找不到字体文件")
    return font_path

def get_font(size: int) -> ImageFont.FreeTypeFont:
    # try YaHei first, then other common default fonts
    # on Windows, macOS and Linux
    fonts = ["msyh.ttc", "NotoSansCJK-Regular.ttc", "msyhbd.ttc", "PingFang.ttc", "Heiti.ttc"]
    for font_name in fonts:
        try:
            return ImageFont.truetype(font_name, size)
        except Exception:
            pass
def word2img(title: str, text: str, max_width=30, font_size=20):
font_path = get_font_path()
width_factor = 1.0
height_factor = 1.5
    # wrap lines so that each is at most max_width characters
lines = text.split('\n')
i = 0
length = len(lines)
for l in lines:
if len(l) > max_width:
cp = l
for ii in range(len(l)):
if ii % max_width == 0:
cp = cp[:ii] + '\n' + cp[ii:]
length += 1
lines[i] = cp
i += 1
text = '\n'.join(lines)
width = int(max_width * font_size * width_factor)
height = int(length * font_size * height_factor)
image = Image.new('RGB', (width, height), (255, 255, 255))
draw = ImageDraw.Draw(image)
text_font = ImageFont.truetype(font_path, font_size)
title_font = ImageFont.truetype(font_path, font_size + 5)
    # center the title
title_width, title_height = title_font.getsize(title)
draw.text(((width - title_width) / 2, 10),
title, fill=(0, 0, 0), font=title_font)
    # the body text is not centered
draw.text((10, title_height+20), text, fill=(0, 0, 0), font=text_font)
return image
def render_markdown(markdown_text, image_width=800, image_height=600, font_size=26, font_color=(0, 0, 0), bg_color=(255, 255, 255)):
@@ -242,11 +83,8 @@ def render_markdown(markdown_text, image_width=800, image_height=600, font_size=
    # regex used to match markdown images
    IMAGE_REGEX = r"!\s*\[.*?\]\s*\((.*?)\)"

    # load the font
    font = get_font(font_size)

    images: Image = {}
@@ -367,7 +205,7 @@ def render_markdown(markdown_text, image_width=800, image_height=600, font_size=
            header_level = line.count("#")
            line = line.strip("#").strip()
            font_size_header = HEADER_FONT_STANDARD_SIZE - header_level * 4
            font = get_font(font_size_header)
            y += HEADER_MARGIN  # top margin
            # letter spacing
            draw.text((x, y), line, font=font, fill=font_color)
@@ -381,7 +219,7 @@ def render_markdown(markdown_text, image_width=800, image_height=600, font_size=
            y += QUOTE_LEFT_LINE_MARGIN
            draw.line((x, y, x, y + QUOTE_LEFT_LINE_HEIGHT),
                      fill=QUOTE_LEFT_LINE_COLOR, width=QUOTE_LEFT_LINE_WIDTH)
            font = get_font(QUOTE_FONT_SIZE)
            draw.text((x + QUOTE_FONT_LINE_MARGIN, y + QUOTE_FONT_LINE_MARGIN),
                      quote_text, font=font, fill=QUOTE_FONT_COLOR)
            y += font_size + QUOTE_LEFT_LINE_HEIGHT + QUOTE_LEFT_LINE_MARGIN
@@ -389,7 +227,7 @@ def render_markdown(markdown_text, image_width=800, image_height=600, font_size=
        elif line.startswith("-"):
            # handle list items
            list_text = line.strip("-").strip()
            font = get_font(LIST_FONT_SIZE)
            y += LIST_MARGIN
            draw.text((x, y), " · " + list_text,
                      font=font, fill=LIST_FONT_COLOR)
@@ -406,7 +244,7 @@ def render_markdown(markdown_text, image_width=800, image_height=600, font_size=
            code_block_codes = []
            draw.rounded_rectangle((x, code_block_start_y, image_width - 10, y+CODE_BLOCK_CODES_MARGIN_VERTICAL +
                                    CODE_BLOCK_TEXT_MARGIN), radius=5, fill=CODE_BLOCK_BG_COLOR, width=2)
            font = get_font(CODE_BLOCK_FONT_SIZE)
            draw.text((x + CODE_BLOCK_CODES_MARGIN_HORIZONTAL, code_block_start_y +
                       CODE_BLOCK_CODES_MARGIN_VERTICAL), codes, font=font, fill=font_color)
            y += CODE_BLOCK_CODES_MARGIN_VERTICAL + CODE_BLOCK_MARGIN
@@ -423,7 +261,7 @@ def render_markdown(markdown_text, image_width=800, image_height=600, font_size=
                # the judge has a tiny bug.
                # when line is like "hi`hi`". all the parts will be in parts_inline.
                if part in parts_inline:
                    font = get_font(INLINE_CODE_FONT_SIZE)
                    code_text = part.strip("`")
                    code_width = font.getsize(
                        code_text)[0] + INLINE_CODE_FONT_MARGIN*2
@@ -436,7 +274,7 @@ def render_markdown(markdown_text, image_width=800, image_height=600, font_size=
                              code_text, font=font, fill=font_color)
                    x += code_width+INLINE_CODE_MARGIN-INLINE_CODE_FONT_MARGIN
                else:
                    font = get_font(font_size)
                    draw.text((x, y), part, font=font, fill=font_color)
                    x += font.getsize(part)[0]
            y += font_size + INLINE_CODE_MARGIN
@@ -447,7 +285,7 @@ def render_markdown(markdown_text, image_width=800, image_height=600, font_size=
            if line == "":
                y += TEXT_LINE_MARGIN
            else:
                font = get_font(font_size)
                draw.text((x, y), line, font=font, fill=font_color)
                y += font_size + TEXT_LINE_MARGIN*2
@@ -477,32 +315,61 @@ def save_temp_img(img: Image) -> str:
                if time.time() - ctime > 3600:
                    os.remove(path)
    except Exception as e:
        print(f"清除临时文件失败: {e}")

    # timestamp
    timestamp = int(time.time())
    p = f"temp/{timestamp}.jpg"

    if isinstance(img, Image.Image):
        img.save(p)
    else:
        with open(p, "wb") as f:
            f.write(img)
    logger.info(f"保存临时图片: {p}")
    return p
def create_text_image(title: str, text: str, max_width=30, font_size=20):
    '''
    Render text to an image.
    title: title text
    text: body text
    max_width: maximum characters per line, default 30
    font_size: font size, default 20
    Returns: the file path
    '''
    try:
        img = word2img(title, text, max_width, font_size)
        p = save_temp_img(img)
        return p
    except Exception as e:
        raise e

async def download_image_by_url(url: str, post: bool = False, post_data: dict = None) -> str:
    '''
    Download an image and return the saved file path.
    '''
    try:
        logger.info(f"下载图片: {url}")
        async with aiohttp.ClientSession() as session:
            if post:
                async with session.post(url, json=post_data) as resp:
                    return save_temp_img(await resp.read())
            else:
                async with session.get(url) as resp:
                    return save_temp_img(await resp.read())
    except aiohttp.client_exceptions.ClientConnectorSSLError as e:
        # disable SSL verification and retry
        ssl_context = ssl.create_default_context()
        ssl_context.check_hostname = False
        ssl_context.verify_mode = ssl.CERT_NONE
        async with aiohttp.ClientSession(trust_env=False) as session:
            if post:
                async with session.post(url, json=post_data, ssl=ssl_context) as resp:
                    return save_temp_img(await resp.read())
            else:
                async with session.get(url, ssl=ssl_context) as resp:
                    return save_temp_img(await resp.read())
    except Exception as e:
        raise e
def download_file(url: str, path: str):
'''
    Download the file at the given url to path.
'''
try:
logger.info(f"下载文件: {url}")
with requests.get(url, stream=True) as r:
with open(path, 'wb') as f:
for chunk in r.iter_content(chunk_size=8192):
f.write(chunk)
except Exception as e:
raise e
def create_markdown_image(text: str):
    '''
@@ -517,17 +384,6 @@ def create_markdown_image(text: str):
        raise e
def try_migrate_config(old_config: dict):
    '''
    Migrate the old config into cmd_config.json.
    '''
cc = CmdConfig()
if cc.get("qqbot", None) is None:
        # not migrated yet
for k in old_config:
cc.put(k, old_config[k])
def get_local_ip_addresses():
    ip = ''
    try:
@@ -541,31 +397,42 @@ def get_local_ip_addresses():
    return ip
def get_sys_info(global_object: GlobalObject):
mem = None
stats = global_object.dashboard_data.stats
os_name = platform.system()
os_version = platform.version()
if 'sys_perf' in stats and 'memory' in stats['sys_perf']:
mem = stats['sys_perf']['memory']
return {
'mem': mem,
'os': os_name + '_' + os_version,
'py': platform.python_version(),
}
def upload(_global_object: GlobalObject):
    '''
    Upload non-sensitive usage statistics.
    '''
    time.sleep(10)
    while True:
        platform_stats = {}
        llm_stats = {}
        plugin_stats = {}
        for platform in _global_object.platforms:
            platform_stats[platform.platform_name] = {
                "cnt_receive": platform.platform_instance.cnt_receive,
                "cnt_reply": platform.platform_instance.cnt_reply
            }
        for llm in _global_object.llms:
            stat = llm.llm_instance.model_stat
            for k in stat:
                llm_stats[llm.llm_name + "#" + k] = stat[k]
            llm.llm_instance.reset_model_stat()
        for plugin in _global_object.cached_plugins:
            plugin_stats[plugin.metadata.plugin_name] = {
                "metadata": plugin.metadata,
                "trig_cnt": plugin.trig_cnt
            }
            plugin.reset_trig_cnt()
        try:
            res = {
                "stat_version": "moon",
                "version": _global_object.version,  # version number
                "platform_stats": platform_stats,  # per-platform message counts over the last 30 minutes
                "llm_stats": llm_stats,
                "plugin_stats": plugin_stats,
                "sys": sys.platform,  # OS platform
            }
            resp = requests.post(
                'https://api.soulter.top/upload', data=json.dumps(res), timeout=5)
@@ -575,8 +442,22 @@ def upload(_global_object: GlobalObject):
            _global_object.cnt_total = 0
        except BaseException as e:
            pass
        time.sleep(30*60)
def retry(n: int = 3):
'''
重试装饰器
'''
def decorator(func):
def wrapper(*args, **kwargs):
for i in range(n):
try:
return func(*args, **kwargs)
except Exception as e:
if i == n-1: raise e
logger.warning(f"函数 {func.__name__}{i+1} 次重试... {e}")
return wrapper
return decorator
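The `retry` decorator above simply re-invokes the wrapped function until it succeeds or the attempt budget is exhausted. A self-contained sketch of the same pattern, using the stdlib logger in place of the project's SparkleLogging logger (the names `flaky` and `retry-demo` are illustrative only):

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("retry-demo")

def retry(n: int = 3):
    # standalone copy of the decorator above
    def decorator(func):
        def wrapper(*args, **kwargs):
            for i in range(n):
                try:
                    return func(*args, **kwargs)
                except Exception as e:
                    if i == n - 1:
                        raise e
                    logger.warning(f"retry {i + 1} for {func.__name__}: {e}")
        return wrapper
    return decorator

calls = {"count": 0}

@retry(n=3)
def flaky():
    # fails twice, then succeeds on the third attempt
    calls["count"] += 1
    if calls["count"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(flaky())  # ok
```

Note that the last attempt re-raises the original exception, so callers still see failures once the budget runs out.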
def run_monitor(global_object: GlobalObject):
    '''
@@ -595,3 +476,24 @@ def run_monitor(global_object: GlobalObject):
        }
        stat['sys_start_time'] = start_time
        time.sleep(30)
def remove_dir(file_path) -> bool:
if not os.path.exists(file_path): return True
try:
shutil.rmtree(file_path, onerror=on_error)
return True
except BaseException as e:
logger.error(f"删除文件/文件夹 {file_path} 失败: {str(e)}")
return False
def on_error(func, path, exc_info):
'''
a callback of the rmtree function.
'''
print(f"remove {path} failed.")
import stat
if not os.access(path, os.W_OK):
os.chmod(path, stat.S_IWUSR)
func(path)
else:
raise
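The `on_error` callback exists because read-only files (for example those left behind by Git on Windows) make `shutil.rmtree` fail; clearing the read-only bit and retrying the failed operation lets `remove_dir` proceed. A minimal, self-contained sketch of the same idea (`_on_error` is a renamed copy; on POSIX the callback may never fire, since deletion there is governed by directory permissions):

```python
import os
import shutil
import stat
import tempfile

def _on_error(func, path, exc_info):
    # clear the read-only bit and retry the failed operation,
    # mirroring the on_error callback above
    if not os.access(path, os.W_OK):
        os.chmod(path, stat.S_IWUSR)
        func(path)
    else:
        raise

d = tempfile.mkdtemp()
locked = os.path.join(d, "locked.txt")
with open(locked, "w") as f:
    f.write("x")
os.chmod(locked, stat.S_IRUSR)  # make the file read-only
shutil.rmtree(d, onerror=_on_error)
print(os.path.exists(d))  # False
```

Python 3.12 deprecates `onerror` in favor of `onexc`, which receives the exception object directly instead of an `exc_info` triple.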


@@ -0,0 +1,43 @@
import aiohttp, os
from util.general_utils import download_image_by_url, create_markdown_image
from type.config import VERSION
BASE_RENDER_URL = "https://t2i.soulter.top/text2img"
TEMPLATE_PATH = os.path.join(os.path.dirname(__file__), "template")
async def text_to_image_base(text: str, return_url: bool = False) -> str:
'''
    Returns the image file path.
'''
with open(os.path.join(TEMPLATE_PATH, "base.html"), "r", encoding='utf-8') as f:
tmpl_str = f.read()
assert(tmpl_str)
text = text.replace("`", "\`")
post_data = {
"tmpl": tmpl_str,
"json": return_url,
"tmpldata": {
"text": text,
"version": f"v{VERSION}",
},
"options": {
"full_page": True
}
}
if return_url:
async with aiohttp.ClientSession() as session:
async with session.post(f"{BASE_RENDER_URL}/generate", json=post_data) as resp:
ret = await resp.json()
return f"{BASE_RENDER_URL}/{ret['data']['id']}"
else:
image_path = ""
try:
image_path = await download_image_by_url(f"{BASE_RENDER_URL}/generate", post=True, post_data=post_data)
except Exception as e:
print(f"调用 markdown 渲染 API 失败,错误信息:{e},将使用本地渲染方式。")
image_path = create_markdown_image(text)
return image_path
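The request body that `text_to_image_base` assembles can be sketched as a small pure function. `build_render_payload` and the sample version string are illustrative names, not part of the project; the backtick escaping matches the `replace` call above, since the text ends up inside a JavaScript template literal in the HTML template:

```python
def build_render_payload(text: str, version: str, tmpl_str: str, return_url: bool = False) -> dict:
    # backticks must be escaped because the text is interpolated
    # into a JavaScript template literal inside the HTML template
    text = text.replace("`", "\\`")
    return {
        "tmpl": tmpl_str,
        "json": return_url,
        "tmpldata": {
            "text": text,
            "version": f"v{version}",
        },
        "options": {
            "full_page": True,
        },
    }

payload = build_render_payload("see `code`", "3.3.0", "<html>{{ text }}</html>")
print(payload["tmpldata"]["version"])  # v3.3.0
```

The `json` flag controls whether the rendering service returns a JSON descriptor (from which a URL is built) or the image bytes themselves.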


@@ -0,0 +1,247 @@
<!doctype html>
<html>
<head>
<meta charset="utf-8"/>
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/katex@0.16.10/dist/katex.min.css" integrity="sha384-wcIxkf4k558AjM3Yz3BBFQUbk/zgIYC2R0QpeeYb+TwlBVMrlgLqwRjRtGZiK7ww" crossorigin="anonymous">
<link rel="stylesheet" href="/path/to/styles/default.min.css">
<script src="/path/to/highlight.min.js"></script>
<script>hljs.highlightAll();</script>
<script defer src="https://cdn.jsdelivr.net/npm/katex@0.16.10/dist/katex.min.js" integrity="sha384-hIoBPJpTUs74ddyc4bFZSM1TVlQDA60VBbJS0oA934VSz82sBx1X7kSx2ATBDIyd" crossorigin="anonymous"></script>
<script defer src="https://cdn.jsdelivr.net/npm/katex@0.16.10/dist/contrib/auto-render.min.js" integrity="sha384-43gviWU0YVjaDtb/GhzOouOXtZMP/7XUzwPTstBeZFe/+rCMvRwr4yROQP43s0Xk" crossorigin="anonymous"
onload="renderMathInElement(document.getElementById('content'),{delimiters: [{left: '$$', right: '$$', display: true},{left: '$', right: '$', display: false}]});"></script>
</head>
<body>
<div style="background-color: #3276dc; color: #fff; font-size: 64px; ">
<span style="font-weight: bold; margin-left: 16px"># AstrBot</span>
<span>{{ version }}</span>
</div>
<article style="margin-top: 32px" id="content"></article>
<script src="https://cdn.jsdelivr.net/npm/marked/marked.min.js"></script>
<script>
document.getElementById('content').innerHTML = marked.parse(`{{ text | safe}}`);
</script>
</body>
</html>
<style>
#content {
min-width: 200px;
max-width: 85%;
margin: 0 auto;
padding: 2rem 1em 1em;
}
body {
word-break: break-word;
line-height: 1.75;
font-weight: 400;
font-size: 32px;
margin: 0;
padding: 0;
overflow-x: hidden;
color: #333;
font-family: -apple-system,BlinkMacSystemFont,Segoe UI,Helvetica,Arial,sans-serif,Apple Color Emoji,Segoe UI Emoji;
}
h1, h2, h3, h4, h5, h6 {
line-height: 1.5;
margin-top: 35px;
margin-bottom: 10px;
padding-bottom: 5px;
}
h1:first-child, h2:first-child, h3:first-child, h4:first-child, h5:first-child, h6:first-child {
margin-top: -1.5rem;
margin-bottom: 1rem;
}
h1::before, h2::before, h3::before, h4::before, h5::before, h6::before {
content: "#";
display: inline-block;
color: #3eaf7c;
padding-right: 0.23em;
}
h1 {
position: relative;
font-size: 2.5rem;
margin-bottom: 5px;
}
h1::before {
font-size: 2.5rem;
}
h2 {
padding-bottom: 0.5rem;
font-size: 2.2rem;
border-bottom: 1px solid #ececec;
}
h3 {
font-size: 1.5rem;
padding-bottom: 0;
}
h4 {
font-size: 1.25rem;
}
h5 {
font-size: 1rem;
}
h6 {
margin-top: 5px;
}
p {
line-height: inherit;
margin-top: 22px;
margin-bottom: 22px;
}
strong {
color: #3eaf7c;
}
img {
max-width: 100%;
border-radius: 2px;
display: block;
margin: auto;
border: 3px solid rgba(62, 175, 124, 0.2);
}
hr {
border-top: 1px solid #3eaf7c;
border-bottom: none;
border-left: none;
border-right: none;
margin-top: 32px;
margin-bottom: 32px;
}
code {
font-family: Menlo, Monaco, Consolas, "Courier New", monospace;
word-break: break-word;
overflow-x: auto;
padding: 0.2rem 0.5rem;
margin: 0;
color: #3eaf7c;
font-size: 0.85em;
background-color: rgba(27, 31, 35, 0.05);
border-radius: 3px;
}
pre {
font-family: Menlo, Monaco, Consolas, "Courier New", monospace;
overflow: auto;
position: relative;
line-height: 1.75;
border-radius: 6px;
border: 2px solid #3eaf7c;
}
pre > code {
font-size: 12px;
padding: 15px 12px;
margin: 0;
word-break: normal;
display: block;
overflow-x: auto;
color: #333;
background: #f8f8f8;
}
a {
font-weight: 500;
text-decoration: none;
color: #3eaf7c;
}
a:hover, a:active {
border-bottom: 1.5px solid #3eaf7c;
}
a:before {
content: "⇲";
}
table {
display: inline-block !important;
font-size: 12px;
width: auto;
max-width: 100%;
overflow: auto;
border: solid 1px #3eaf7c;
}
thead {
background: #3eaf7c;
color: #fff;
text-align: left;
}
tr:nth-child(2n) {
background-color: rgba(62, 175, 124, 0.2);
}
th, td {
padding: 12px 7px;
line-height: 24px;
}
td {
min-width: 120px;
}
blockquote {
color: #666;
padding: 1px 23px;
margin: 22px 0;
border-left: 0.5rem solid rgba(62, 175, 124, 0.6);
border-color: #42b983;
background-color: #f8f8f8;
}
blockquote::after {
display: block;
content: "";
}
blockquote > p {
margin: 10px 0;
}
details {
border: none;
outline: none;
border-left: 4px solid #3eaf7c;
padding-left: 10px;
margin-left: 4px;
}
details summary {
cursor: pointer;
border: none;
outline: none;
background: white;
margin: 0px -17px;
}
details summary::-webkit-details-marker {
color: #3eaf7c;
}
ol, ul {
padding-left: 28px;
}
ol li, ul li {
margin-bottom: 0;
list-style: inherit;
}
ol li .task-list-item, ul li .task-list-item {
list-style: none;
}
ol li .task-list-item ul, ul li .task-list-item ul, ol li .task-list-item ol, ul li .task-list-item ol {
margin-top: 0;
}
ol ul, ul ul, ol ol, ul ol {
margin-top: 3px;
}
ol li {
padding-left: 6px;
}
ol li::marker {
color: #3eaf7c;
}
ul li {
list-style: none;
}
ul li:before {
content: "•";
margin-right: 4px;
color: #3eaf7c;
}
@media (max-width: 720px) {
h1 {
font-size: 24px;
}
h2 {
font-size: 20px;
}
h3 {
font-size: 18px;
}
}
</style>


@@ -1,11 +1,5 @@
from type.plugin import PluginMetadata, PluginType
from type.register import RegisteredLLM, RegisteredPlatform, RegisteredPlugin, RegisteredPlugins
from type.types import GlobalObject
from type.message import AstrMessageEvent
from type.command import CommandResult


@@ -1,5 +1,6 @@
from astrbot.core import oper_msg
from type.message import *
from type.command import CommandResult
from model.platform._message_result import MessageResult

'''


@@ -5,7 +5,8 @@
'''
from model.provider.provider import Provider as LLMProvider
from model.platform._platfrom import Platform
from type.types import GlobalObject
from type.register import RegisteredPlatform, RegisteredLLM

def register_platform(platform_name: str, platform_instance: Platform, context: GlobalObject) -> None:
    '''


@@ -2,4 +2,4 @@
Plugin type definitions.
'''
from type.plugin import PluginType


@@ -1,26 +1,22 @@
'''
Plugin utility functions.
'''
import os, sys, zipfile, shutil, yaml
import inspect
import traceback
import uuid
from types import ModuleType
from type.plugin import *
from type.register import *
from SparkleLogging.utils.core import LogManager
from logging import Logger
from type.types import GlobalObject
from util.general_utils import download_file, remove_dir
from util.updator import request_release_info

logger: Logger = LogManager.GetLogger(log_name='astrbot-core')

# find all class names defined in a module
@@ -62,32 +58,50 @@ def get_modules(path):
def get_plugin_store_path():
    plugin_dir = os.path.abspath(os.path.join(os.path.dirname(os.path.abspath(__file__)), "../addons/plugins"))
    return plugin_dir

def get_plugin_modules():
    plugins = []
    try:
        plugin_dir = get_plugin_store_path()
        if os.path.exists(plugin_dir):
            plugins = get_modules(plugin_dir)
        return plugins
    except BaseException as e:
        raise e
def check_plugin_dept_update(cached_plugins: RegisteredPlugins, target_plugin: str = None):
plugin_dir = get_plugin_store_path()
if not os.path.exists(plugin_dir):
return False
to_update = []
if target_plugin:
to_update.append(target_plugin)
else:
for p in cached_plugins:
to_update.append(p.root_dir_name)
for p in to_update:
plugin_path = os.path.join(plugin_dir, p)
if os.path.exists(os.path.join(plugin_path, "requirements.txt")):
pth = os.path.join(plugin_path, "requirements.txt")
logger.info(f"正在检查更新插件 {p} 的依赖: {pth}")
update_plugin_dept(os.path.join(plugin_path, "requirements.txt"))
def has_init_param(cls, param_name):
    try:
        # get the signature of __init__
        init_signature = inspect.signature(cls.__init__)
        # check whether the parameter name appears in the signature
        return param_name in init_signature.parameters
    except (AttributeError, ValueError):
        # the class has no __init__ or its signature cannot be inspected
        return False
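`has_init_param` is what lets `plugin_reload` decide whether a plugin's constructor accepts `ctx` before falling back to a no-argument call. A standalone copy shows the `inspect.signature` check in isolation (the two sample classes are hypothetical):

```python
import inspect

# standalone copy of the helper above, so the example runs on its own
def has_init_param(cls, param_name):
    try:
        return param_name in inspect.signature(cls.__init__).parameters
    except (AttributeError, ValueError):
        return False

class WithCtx:
    def __init__(self, ctx=None):
        self.ctx = ctx

class WithoutCtx:
    def __init__(self):
        pass

print(has_init_param(WithCtx, "ctx"))     # True
print(has_init_param(WithoutCtx, "ctx"))  # False
```

The `ValueError` branch matters for classes that inherit `object.__init__`, whose signature cannot always be introspected.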
def plugin_reload(ctx: GlobalObject):
cached_plugins = ctx.cached_plugins
    plugins = get_plugin_modules()
    if plugins is None:
        return False, "未找到任何插件模块"
@@ -103,15 +117,18 @@ def plugin_reload(cached_plugins: RegisteredPlugins):
        module_path = plugin['module_path']
        root_dir_name = plugin['pname']

        check_plugin_dept_update(cached_plugins, root_dir_name)
        module = __import__("addons.plugins." +
                            root_dir_name + "." + p, fromlist=[p])
        cls = get_classes(p, module)

        try:
            # try passing ctx to the plugin's __init__
            obj = getattr(module, cls[0])(ctx=ctx)
        except:
            obj = getattr(module, cls[0])()

        metadata = None
        try:
@@ -123,8 +140,7 @@ def plugin_reload(cached_plugins: RegisteredPlugins):
            else:
                metadata = PluginMetadata(
                    plugin_name=info['name'],
                    plugin_type=PluginType.COMMON if 'plugin_type' not in info else PluginType(info['plugin_type']),
                    author=info['author'],
                    desc=info['desc'],
                    version=info['version'],
@@ -138,13 +154,15 @@ def plugin_reload(cached_plugins: RegisteredPlugins):
        except BaseException as e:
            fail_rec += f"注册插件 {module_path} 失败, 原因: {str(e)}\n"
            continue

        if module_path not in registered_map:
            cached_plugins.append(RegisteredPlugin(
                metadata=metadata,
                plugin_instance=obj,
                module=module,
                module_path=module_path,
                root_dir_name=root_dir_name
            ))
    except BaseException as e:
        traceback.print_exc()
        fail_rec += f"加载{p}插件出现问题,原因 {str(e)}\n"
@@ -152,29 +170,106 @@ def plugin_reload(cached_plugins: RegisteredPlugins):
        return True, None
    else:
        return False, fail_rec
def update_plugin_dept(path):
mirror = "https://mirrors.aliyun.com/pypi/simple/"
py = sys.executable
os.system(f"{py} -m pip install -r {path} -i {mirror} --quiet")
def install_plugin(repo_url: str, ctx: GlobalObject):
    ppath = get_plugin_store_path()
    # strip a trailing /
    if repo_url.endswith("/"):
        repo_url = repo_url[:-1]

    repo_namespace = repo_url.split("/")[-2:]
    repo = repo_namespace[1]

    plugin_path = os.path.join(ppath, repo.replace("-", "_").lower())
    if os.path.exists(plugin_path): remove_dir(plugin_path)

    # we no longer use Git anymore :)
    # Repo.clone_from(repo_url, to_path=plugin_path, branch='master')

    download_from_repo_url(plugin_path, repo_url)
    unzip_file(plugin_path + ".zip", plugin_path)

    with open(os.path.join(plugin_path, "REPO"), "w", encoding='utf-8') as f:
        f.write(repo_url)

    ok, err = plugin_reload(ctx)
    if not ok:
        raise Exception(err)
def install_plugin_from_file(zip_file_path: str, ctx: GlobalObject):
# try to unzip
temp_dir = os.path.join(os.path.dirname(zip_file_path), str(uuid.uuid4()))
unzip_file(zip_file_path, temp_dir)
# check if the plugin has metadata.yaml
if not os.path.exists(os.path.join(temp_dir, "metadata.yaml")):
remove_dir(temp_dir)
raise Exception("插件缺少 metadata.yaml 文件。")
metadata = load_plugin_metadata(temp_dir)
plugin_name = metadata.plugin_name
if not plugin_name:
remove_dir(temp_dir)
raise Exception("插件 metadata.yaml 文件中 name 字段为空。")
plugin_name = plugin_name.replace("-", "_").lower()
ppath = get_plugin_store_path()
plugin_path = os.path.join(ppath, plugin_name.replace("-", "_").lower())
if os.path.exists(plugin_path): remove_dir(plugin_path)
# move to the target path
shutil.move(temp_dir, plugin_path)
with open(os.path.join(plugin_path, "REPO"), "w", encoding='utf-8') as f:
if metadata.repo: f.write(metadata.repo)
# remove the temp dir
remove_dir(temp_dir)
ok, err = plugin_reload(ctx)
if not ok:
raise Exception(err)
def load_plugin_metadata(plugin_path: str) -> PluginMetadata:
if not os.path.exists(plugin_path):
raise Exception("插件不存在。")
if not os.path.exists(os.path.join(plugin_path, "metadata.yaml")):
raise Exception("插件缺少 metadata.yaml 文件。")
metadata = None
with open(os.path.join(plugin_path, "metadata.yaml"), "r", encoding='utf-8') as f:
metadata = yaml.safe_load(f)
if 'name' not in metadata or 'desc' not in metadata or 'version' not in metadata or 'author' not in metadata:
raise Exception("插件 metadata.yaml 信息不完整。")
return PluginMetadata(
plugin_name=metadata['name'],
plugin_type=PluginType.COMMON if 'plugin_type' not in metadata else PluginType(metadata['plugin_type']),
author=metadata['author'],
desc=metadata['desc'],
version=metadata['version'],
repo=metadata['repo'] if 'repo' in metadata else None
)
def download_from_repo_url(target_path: str, repo_url: str):
repo_namespace = repo_url.split("/")[-2:]
author = repo_namespace[0]
repo = repo_namespace[1]
logger.info(f"正在下载插件 {repo} ...")
release_url = f"https://api.github.com/repos/{author}/{repo}/releases"
releases = request_release_info(latest=True, url=release_url, mirror_url=release_url)
if not releases:
# download from the default branch directly.
logger.warn(f"未在插件 {author}/{repo} 中找到任何发布版本,将从默认分支下载。")
release_url = f"https://github.com/{author}/{repo}/archive/refs/heads/master.zip"
else:
release_url = releases[0]['zipball_url']
download_file(release_url, target_path + ".zip")
def get_registered_plugin(plugin_name: str, cached_plugins: RegisteredPlugins) -> RegisteredPlugin:
@@ -186,45 +281,81 @@ def get_registered_plugin(plugin_name: str, cached_plugins: RegisteredPlugins) -
    return ret

def uninstall_plugin(plugin_name: str, ctx: GlobalObject):
    plugin = get_registered_plugin(plugin_name, ctx.cached_plugins)
    if not plugin:
        raise Exception("插件不存在。")
    root_dir_name = plugin.root_dir_name
    ppath = get_plugin_store_path()
    ctx.cached_plugins.remove(plugin)
    if not remove_dir(os.path.join(ppath, root_dir_name)):
        raise Exception("移除插件成功,但是删除插件文件夹失败。您可以手动删除该文件夹,位于 addons/plugins/ 下。")
def update_plugin(plugin_name: str, ctx: GlobalObject):
    plugin = get_registered_plugin(plugin_name, ctx.cached_plugins)
    if not plugin:
        raise Exception("插件不存在。")
    ppath = get_plugin_store_path()
    root_dir_name = plugin.root_dir_name
    plugin_path = os.path.join(ppath, root_dir_name)

    if not os.path.exists(os.path.join(plugin_path, "REPO")):
        raise Exception("插件更新信息文件 `REPO` 不存在,请手动升级,或者先卸载然后重新安装该插件。")

    repo_url = None
    with open(os.path.join(plugin_path, "REPO"), "r", encoding='utf-8') as f:
        repo_url = f.read()
    download_from_repo_url(plugin_path, repo_url)
    try:
        remove_dir(plugin_path)
    except BaseException as e:
        logger.error(f"删除旧版本插件 {plugin_name} 文件夹失败: {str(e)},使用覆盖安装。")
    unzip_file(plugin_path + ".zip", plugin_path)
    ok, err = plugin_reload(ctx)
    if not ok:
        raise Exception(err)
def unzip_file(zip_path: str, target_dir: str):
    '''
    Unzip the archive and move the files inside its **first** top-level folder into target_dir.
    '''
os.makedirs(target_dir, exist_ok=True)
update_dir = ""
logger.info(f"解压文件: {zip_path}")
with zipfile.ZipFile(zip_path, 'r') as z:
update_dir = z.namelist()[0]
z.extractall(target_dir)
    files = os.listdir(os.path.join(target_dir, update_dir))
    for f in files:
        logger.info(f"移动更新文件/目录: {f}")
        if os.path.isdir(os.path.join(target_dir, update_dir, f)):
            if os.path.exists(os.path.join(target_dir, f)):
                shutil.rmtree(os.path.join(target_dir, f), onerror=on_error)
        else:
            if os.path.exists(os.path.join(target_dir, f)):
                os.remove(os.path.join(target_dir, f))
        shutil.move(os.path.join(target_dir, update_dir, f), target_dir)

    try:
        logger.info(f"删除临时更新文件: {zip_path}{os.path.join(target_dir, update_dir)}")
        shutil.rmtree(os.path.join(target_dir, update_dir), onerror=on_error)
        os.remove(zip_path)
    except:
        logger.warn(f"删除更新文件失败,可以手动删除 {zip_path}{os.path.join(target_dir, update_dir)}")

def remove_dir(file_path) -> bool:
    try_cnt = 50
    while try_cnt > 0:
        if not os.path.exists(file_path):
            return False
        try:
            shutil.rmtree(file_path)
            return True
        except PermissionError as e:
            err_file_path = str(e).split("\'", 2)[1]
            if os.path.exists(err_file_path):
                os.chmod(err_file_path, stat.S_IWUSR)
            try_cnt -= 1
def on_error(func, path, exc_info):
    '''
    Error callback for shutil.rmtree: make read-only paths writable and retry.
    '''
print(f"remove {path} failed.")
import stat
if not os.access(path, os.W_OK):
os.chmod(path, stat.S_IWUSR)
func(path)
else:
raise
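`unzip_file` relies on `namelist()[0]` being the single top-level folder that GitHub archives wrap their contents in. A self-contained round-trip illustrating that assumption (archive name and contents invented for the demo):

```python
import os, tempfile, zipfile

# GitHub zipballs wrap everything in one top-level folder
# ("author-repo-hash/"); unzip_file reads namelist()[0] to find it.
with tempfile.TemporaryDirectory() as tmp:
    zip_path = os.path.join(tmp, "plugin.zip")
    with zipfile.ZipFile(zip_path, "w") as z:
        z.writestr("author-repo-abc123/", "")           # top-level dir entry
        z.writestr("author-repo-abc123/main.py", "x = 1")

    with zipfile.ZipFile(zip_path) as z:
        top = z.namelist()[0]                           # the wrapper folder
        z.extractall(tmp)

    print(top)                                          # author-repo-abc123/
    print(os.listdir(os.path.join(tmp, top)))           # ['main.py']
```

Note this only holds when the directory entry is stored first in the archive, which is the case for GitHub-generated zipballs.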


@@ -0,0 +1,38 @@
from typing import List
try:
from util.search_engine_scraper.engine import SearchEngine, SearchResult
from util.search_engine_scraper.config import HEADERS, USER_AGENT_BING
except ImportError:
from engine import SearchEngine, SearchResult
from config import HEADERS, USER_AGENT_BING
class Bing(SearchEngine):
def __init__(self) -> None:
super().__init__()
self.base_url = "https://www.bing.com"
self.headers.update({'User-Agent': USER_AGENT_BING})
def _set_selector(self, selector: str):
selectors = {
'url': 'div.b_attribution cite',
'title': 'h2',
'text': 'p',
'links': 'ol#b_results > li.b_algo',
'next': 'div#b_content nav[role="navigation"] a.sb_pagN'
}
return selectors[selector]
async def _get_next_page(self, query) -> str:
if self.page == 1:
await self._get_html(self.base_url)
url = f'{self.base_url}/search?q={query}&form=QBLH&sp=-1&lq=0&pq=hi&sc=10-2&qs=n&sk=&cvid=DE75965E2D6346D681288933984DE48F&ghsh=0&ghacc=0&ghpl='
return await self._get_html(url, None)
async def search(self, query: str, num_results: int) -> List[SearchResult]:
results = await super().search(query, num_results)
for result in results:
if not isinstance(result.url, str):
result.url = result.url.text
return results


@@ -0,0 +1,20 @@
HEADERS = {
'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; rv:84.0) Gecko/20100101 Firefox/84.0',
'Accept': '*/*',
'Connection': 'keep-alive',
'Accept-Language': 'en-GB,en;q=0.5'
}
USER_AGENT_BING = 'Mozilla/5.0 (Windows NT 6.1; rv:84.0) Gecko/20100101 Firefox/84.0'
USER_AGENTS = [
'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36',
'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36',
'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:89.0) Gecko/20100101 Firefox/89.0',
'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:88.0) Gecko/20100101 Firefox/88.0',
'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36',
'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36',
'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Version/14.1.2 Safari/537.36',
'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Version/14.1 Safari/537.36',
'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:89.0) Gecko/20100101 Firefox/89.0',
'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:88.0) Gecko/20100101 Firefox/88.0'
]


@@ -0,0 +1,73 @@
import random
try:
from util.search_engine_scraper.config import HEADERS, USER_AGENTS
except ImportError:
from config import HEADERS, USER_AGENTS
from bs4 import BeautifulSoup
from aiohttp import ClientSession
from dataclasses import dataclass
from typing import List
@dataclass
class SearchResult():
title: str
url: str
snippet: str
def __str__(self) -> str:
return f"{self.title} - {self.url}\n{self.snippet}"
class SearchEngine():
    '''
    Base class for search-engine scrapers.
    '''
def __init__(self) -> None:
self.TIMEOUT = 10
self.page = 1
self.headers = HEADERS
    def _set_selector(self, selector: str) -> str:
raise NotImplementedError()
    async def _get_next_page(self, query) -> str:
        raise NotImplementedError()
async def _get_html(self, url: str, data: dict = None) -> str:
headers = self.headers
headers["Referer"] = url
headers["User-Agent"] = random.choice(USER_AGENTS)
if data:
async with ClientSession() as session:
async with session.post(url, headers=headers, data=data, timeout=self.TIMEOUT) as resp:
return await resp.text(encoding="utf-8")
else:
async with ClientSession() as session:
async with session.get(url, headers=headers, timeout=self.TIMEOUT) as resp:
return await resp.text(encoding="utf-8")
    def tidy_text(self, text: str) -> str:
        '''
        Clean up text: strip leading/trailing whitespace, replace newlines, collapse double spaces.
        '''
        return text.strip().replace("\n", " ").replace("\r", " ").replace("  ", " ")
async def search(self, query: str, num_results: int) -> List[SearchResult]:
try:
resp = await self._get_next_page(query)
soup = BeautifulSoup(resp, 'html.parser')
links = soup.select(self._set_selector('links'))
results = []
for link in links:
title = self.tidy_text(link.select_one(self._set_selector('title')).text)
url = link.select_one(self._set_selector('url'))
snippet = ''
if title and url:
results.append(SearchResult(title=title, url=url, snippet=snippet))
return results[:num_results] if len(results) > num_results else results
except Exception as e:
raise e
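The `search` flow above boils down to: select result blocks with the `links` selector, then pull `title` and `url` out of each block with `select_one`. A minimal standalone sketch of that pattern, with invented HTML and selectors:

```python
from bs4 import BeautifulSoup

# Tiny stand-in for a search results page (markup and selectors invented;
# the real engines define theirs in _set_selector).
html = """
<ol id="results">
  <li class="res"><h2>First hit</h2><cite>example.com/a</cite></li>
  <li class="res"><h2>Second hit</h2><cite>example.com/b</cite></li>
</ol>
"""
selectors = {"links": "ol#results > li.res", "title": "h2", "url": "cite"}

soup = BeautifulSoup(html, "html.parser")
results = []
for link in soup.select(selectors["links"]):       # one block per result
    title = link.select_one(selectors["title"]).text.strip()
    url = link.select_one(selectors["url"]).text
    results.append((title, url))

print(results)
# [('First hit', 'example.com/a'), ('Second hit', 'example.com/b')]
```

The base class keeps this loop generic; each engine only supplies its own selector map.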


@@ -0,0 +1,27 @@
import os
from googlesearch import search
try:
from util.search_engine_scraper.engine import SearchEngine, SearchResult
from util.search_engine_scraper.config import HEADERS, USER_AGENTS
except ImportError:
from engine import SearchEngine, SearchResult
from config import HEADERS, USER_AGENTS
from typing import List
class Google(SearchEngine):
def __init__(self) -> None:
super().__init__()
self.proxy = os.environ.get("HTTPS_PROXY")
async def search(self, query: str, num_results: int) -> List[SearchResult]:
results = []
try:
print("use proxy:", self.proxy)
ls = search(query, advanced=True, num_results=num_results, timeout=3, proxy=self.proxy)
for i in ls:
results.append(SearchResult(title=i.title, url=i.url, snippet=i.description))
except Exception as e:
raise e
return results


@@ -0,0 +1,49 @@
import random, re
from bs4 import BeautifulSoup
try:
from util.search_engine_scraper.engine import SearchEngine, SearchResult
from util.search_engine_scraper.config import HEADERS, USER_AGENTS
except ImportError:
from engine import SearchEngine, SearchResult
from config import HEADERS, USER_AGENTS
from typing import List
class Sogo(SearchEngine):
def __init__(self) -> None:
super().__init__()
self.base_url = "https://www.sogou.com"
self.headers['User-Agent'] = random.choice(USER_AGENTS)
def _set_selector(self, selector: str):
selectors = {
'url': 'h3 > a',
'title': 'h3',
'text': '',
'links': 'div.results > div.vrwrap:not(.middle-better-hintBox)',
'next': ''
}
return selectors[selector]
async def _get_next_page(self, query) -> str:
url = f'{self.base_url}/web?query={query}'
return await self._get_html(url, None)
async def search(self, query: str, num_results: int) -> List[SearchResult]:
results = await super().search(query, num_results)
for result in results:
result.url = result.url.get("href")
if result.url.startswith("/link?"):
result.url = self.base_url + result.url
result.url = await self._parse_url(result.url)
return results
async def _parse_url(self, url) -> str:
html = await self._get_html(url)
soup = BeautifulSoup(html, 'html.parser')
script = soup.find("script")
if script:
url = re.search(r'window.location.replace\("(.+?)"\)', script.string).group(1)
return url
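`_parse_url` assumes the Sogou interstitial page contains a script that calls `window.location.replace(...)` with the real target, and extracts it with a regex. A quick standalone check of that pattern (the script text is invented sample data):

```python
import re

# Sample of the redirect script Sogou serves on /link? interstitial pages.
script_body = 'window.location.replace("https://example.com/article?id=42")'

# Same pattern as _parse_url: capture the replace() argument lazily.
match = re.search(r'window.location.replace\("(.+?)"\)', script_body)
print(match.group(1))  # https://example.com/article?id=42
```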


@@ -0,0 +1,22 @@
from sogo import Sogo
from bing import Bing
sogo_search = Sogo()
bing_search = Bing()
async def search(keyword: str) -> str:
results = await sogo_search.search(keyword, 5)
# results = await bing_search.search(keyword, 5)
ret = ""
if len(results) == 0:
return "没有搜索到结果"
idx = 1
for i in results:
ret += f"{idx}. {i.title}({i.url})\n{i.snippet}\n\n"
idx += 1
return ret
if __name__ == "__main__":
    import asyncio
    ret = asyncio.run(search("gpt4orelease"))
    print(ret)


@@ -1,50 +1,61 @@
has_git = True import sys, os, zipfile, shutil
try:
import git.exc
from git.repo import Repo
except BaseException as e:
has_git = False
import sys, os
import requests import requests
import psutil
from type.config import VERSION
from SparkleLogging.utils.core import LogManager
from logging import Logger
from util.general_utils import download_file
logger: Logger = LogManager.GetLogger(log_name='astrbot-core')
ASTRBOT_RELEASE_API = "https://api.github.com/repos/Soulter/AstrBot/releases"
MIRROR_ASTRBOT_RELEASE_API = "https://api.soulter.top/releases" # 0-10 分钟的缓存时间
def get_main_path():
ret = os.path.abspath(os.path.join(os.path.dirname(os.path.abspath(__file__)), ".."))
return ret
def terminate_child_processes():
try:
parent = psutil.Process(os.getpid())
children = parent.children(recursive=True)
logger.info(f"正在终止 {len(children)} 个子进程。")
for child in children:
logger.info(f"正在终止子进程 {child.pid}")
child.terminate()
try:
child.wait(timeout=3)
except psutil.NoSuchProcess:
continue
except psutil.TimeoutExpired:
logger.info(f"子进程 {child.pid} 没有被正常终止, 正在强行杀死。")
child.kill()
except psutil.NoSuchProcess:
pass
def _reboot(): def _reboot():
py = sys.executable py = sys.executable
terminate_child_processes()
os.execl(py, py, *sys.argv) os.execl(py, py, *sys.argv)
def find_repo() -> Repo: def request_release_info(latest: bool = True, url: str = ASTRBOT_RELEASE_API, mirror_url: str = MIRROR_ASTRBOT_RELEASE_API) -> list:
if not has_git:
raise Exception("未安装 GitPython 库,无法进行更新。")
repo = None
# 由于项目更名过,因此这里需要多次尝试。
try:
repo = Repo()
except git.exc.InvalidGitRepositoryError:
try:
repo = Repo(path="QQChannelChatGPT")
except git.exc.InvalidGitRepositoryError:
repo = Repo(path="AstrBot")
if not repo:
raise Exception("在已知的目录下未找到项目位置。请联系项目维护者。")
return repo
def request_release_info(latest: bool = True) -> list:
''' '''
请求版本信息。 请求版本信息。
返回一个列表每个元素是一个字典包含版本号、发布时间、更新内容、commit hash等信息。 返回一个列表每个元素是一个字典包含版本号、发布时间、更新内容、commit hash等信息。
''' '''
api_url1 = "https://api.github.com/repos/Soulter/AstrBot/releases"
api_url2 = "https://api.soulter.top/releases" # 0-10 分钟的缓存时间
try: try:
result = requests.get(api_url2).json() result = requests.get(mirror_url).json()
except BaseException as e: except BaseException as e:
result = requests.get(api_url1).json() result = requests.get(url).json()
try: try:
if not result: return []
if latest: if latest:
ret = github_api_release_parser([result[0]]) ret = github_api_release_parser([result[0]])
else: else:
ret = github_api_release_parser(result) ret = github_api_release_parser(result)
except BaseException as e: except BaseException as e:
logger.error(f"解析版本信息失败: {result}")
raise Exception(f"解析版本信息失败: {result}") raise Exception(f"解析版本信息失败: {result}")
return ret return ret
@@ -66,53 +77,64 @@ def github_api_release_parser(releases: list) -> list:
            "published_at": release['published_at'],
            "body": release['body'],
            "commit_hash": commit_hash,
            "tag_name": release['tag_name'],
            "zipball_url": release['zipball_url']
        })
    return ret

def compare_version(v1: str, v2: str) -> int:
    '''
    Compare two version strings.
    Returns 1 if v1 > v2, -1 if v1 < v2, and 0 if they are equal.
    '''
    v1 = v1.replace('v', '')
    v2 = v2.replace('v', '')
    v1 = v1.split('.')
    v2 = v2.split('.')
    for i in range(3):
        if int(v1[i]) > int(v2[i]):
            return 1
        elif int(v1[i]) < int(v2[i]):
            return -1
    return 0

def check_update() -> str:
    update_data = request_release_info()
    tag_name = update_data[0]['tag_name']
    logger.debug(f"当前版本: v{VERSION}")
    logger.debug(f"最新版本: {tag_name}")
    if compare_version(VERSION, tag_name) >= 0:
        return "当前已经是最新版本。"
    update_info = f"""# 当前版本
v{VERSION}
# 最新版本
{update_data[0]['version']}
# 发布时间
{update_data[0]['published_at']}
# 更新内容
---
{update_data[0]['body']}
---"""
    return update_info

def update_project(reboot: bool = False,
                   latest: bool = True,
                   version: str = ''):
    update_data = request_release_info(latest)
    if latest:
        latest_version = update_data[0]['tag_name']
        if compare_version(VERSION, latest_version) >= 0:
            raise Exception("当前已经是最新版本。")
        else:
            try:
                download_file(update_data[0]['zipball_url'], "temp.zip")
                unzip_file("temp.zip", get_main_path())
                if reboot: _reboot()
            except BaseException as e:
                raise e
@@ -123,22 +145,65 @@ def update_project(update_data: list,
        for data in update_data:
            if data['tag_name'] == version:
                try:
                    download_file(data['zipball_url'], "temp.zip")
                    unzip_file("temp.zip", get_main_path())
                    flag = True
                    if reboot: _reboot()
                except BaseException as e:
                    raise e
        if not flag:
            raise Exception("未找到指定版本。")

def unzip_file(zip_path: str, target_dir: str):
    '''
    Unzip the archive and move the files inside its **first** top-level folder into target_dir.
    '''
    os.makedirs(target_dir, exist_ok=True)
    update_dir = ""
    logger.info(f"解压文件: {zip_path}")
    with zipfile.ZipFile(zip_path, 'r') as z:
        update_dir = z.namelist()[0]
        z.extractall(target_dir)

    avoid_dirs = ["logs", "data", "configs", "temp_plugins", update_dir]
    # copy addons/plugins to the target_dir temporarily
    if os.path.exists(os.path.join(target_dir, "addons/plugins")):
        logger.info("备份插件目录:从 addons/plugins 到 temp_plugins")
        shutil.copytree(os.path.join(target_dir, "addons/plugins"), "temp_plugins")
    files = os.listdir(os.path.join(target_dir, update_dir))
    for f in files:
        logger.info(f"移动更新文件/目录: {f}")
        if os.path.isdir(os.path.join(target_dir, update_dir, f)):
            if f in avoid_dirs: continue
            if os.path.exists(os.path.join(target_dir, f)):
                shutil.rmtree(os.path.join(target_dir, f), onerror=on_error)
        else:
            if os.path.exists(os.path.join(target_dir, f)):
                os.remove(os.path.join(target_dir, f))
        shutil.move(os.path.join(target_dir, update_dir, f), target_dir)
    # move back
    if os.path.exists("temp_plugins"):
        logger.info("恢复插件目录:从 temp_plugins 到 addons/plugins")
        shutil.rmtree(os.path.join(target_dir, "addons/plugins"), onerror=on_error)
        shutil.move("temp_plugins", os.path.join(target_dir, "addons/plugins"))
    try:
        logger.info(f"删除临时更新文件: {zip_path}{os.path.join(target_dir, update_dir)}")
        shutil.rmtree(os.path.join(target_dir, update_dir), onerror=on_error)
        os.remove(zip_path)
    except:
        logger.warn(f"删除更新文件失败,可以手动删除 {zip_path}{os.path.join(target_dir, update_dir)}")

# (removed in this diff: the GitPython-based checkout_branch() helper)

def on_error(func, path, exc_info):
    '''
    Error callback for shutil.rmtree: make read-only paths writable and retry.
    '''
    print(f"remove {path} failed.")
    import stat
    if not os.access(path, os.W_OK):
        os.chmod(path, stat.S_IWUSR)
        func(path)
    else:
        raise
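The new `compare_version` replaces the old commit-hash comparison. It can be exercised standalone; note that it strips every `v` and assumes exactly three numeric components, so tags like `v3.3.15` work but a two-part version such as `3.4` would raise:

```python
# Copy of compare_version from the diff above, for a standalone check.
def compare_version(v1: str, v2: str) -> int:
    v1 = v1.replace('v', '').split('.')
    v2 = v2.replace('v', '').split('.')
    for i in range(3):                 # assumes major.minor.patch
        if int(v1[i]) > int(v2[i]):
            return 1
        elif int(v1[i]) < int(v2[i]):
            return -1
    return 0

print(compare_version("v3.3.15", "v3.4.0"))   # -1  (an update is available)
print(compare_version("3.4.0", "v3.4.0"))     # 0   (already up to date)
```

This is why `check_update` can simply test `compare_version(VERSION, tag_name) >= 0` instead of comparing commit hashes.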