Compare commits

...

65 Commits

Author SHA1 Message Date
Soulter
542d4bc703 typo: fix t2i typo 2024-06-03 08:47:51 -04:00
Soulter
e3640fdac9 perf: improve the output of the update, help, and other commands 2024-06-03 08:33:17 -04:00
Soulter
f64ab4b190 chore: remove some outdated methods 2024-06-03 05:54:40 -04:00
Soulter
bd571e1577 feat: add a new text-to-image style 2024-06-03 05:51:44 -04:00
Soulter
e4a5cbd893 perf: improve stability when loading plugins 2024-06-03 00:20:56 -04:00
Soulter
7a9fd7fd1e fix: resolve a "config file not found" error 2024-06-02 23:14:48 -04:00
Soulter
d9b60108db Update README.md 2024-05-30 18:11:57 +08:00
Soulter
8455c8b4ed Update README.md 2024-05-30 18:03:59 +08:00
Soulter
5c2e7099fc Update README.md 2024-05-26 21:38:32 +08:00
Soulter
1fd1d55895 Update config.py 2024-05-26 21:31:26 +08:00
Soulter
5ce4137e75 fix: fix the model command 2024-05-26 21:15:33 +08:00
Soulter
d49179541e feat: pass ctx into a plugin's init method 2024-05-26 21:10:19 +08:00
Soulter
676f258981 perf: terminate child processes after restart 2024-05-26 21:09:23 +08:00
Soulter
fa44749240 fix: resolve an issue where relative paths kept the Windows launcher from installing dependencies 2024-05-26 18:15:25 +08:00
Soulter
6c856f9da2 fix(typo): fix a typo in the plugin registrar that prevented message-platform plugins from registering 2024-05-26 18:07:07 +08:00
Soulter
e8773cea7f fix: resolve an issue where config files were not migrated properly 2024-05-25 20:59:37 +08:00
Soulter
4d36ffcb08 fix: improve handling of plugin results 2024-05-25 18:46:38 +08:00
Soulter
c653e492c4 Merge pull request #164 from Soulter/stat-upload-perf
Improve the /models command
2024-05-25 18:35:56 +08:00
Soulter
f08de1f404 perf: add the models command to the help text 2024-05-25 18:34:08 +08:00
Soulter
1218691b61 perf: relax the model command's restrictions to accept custom model names; persist the selected model. 2024-05-25 18:29:01 +08:00
Soulter
61fc27ff79 Merge pull request #163 from Soulter/stat-upload-perf
Optimize the statistics record data structure
2024-05-25 18:28:08 +08:00
Soulter
123ee24f7e fix: stat perf 2024-05-25 18:01:16 +08:00
Soulter
52c9045a28 feat: optimize the statistics data structure 2024-05-25 17:47:41 +08:00
Soulter
f00f1e8933 fix: drawing error 2024-05-24 13:33:02 +08:00
Soulter
8da4433e57 chore: update related fields 2024-05-21 08:44:05 +08:00
Soulter
7babb87934 perf: change the library load order 2024-05-21 08:41:46 +08:00
Soulter
f67b171385 perf: move the database into the data directory 2024-05-19 17:10:11 +08:00
Soulter
1780d1355d perf: switch all internal pip usage to the Aliyun mirror; improve the plugin dependency update logic 2024-05-19 16:45:08 +08:00
Soulter
5a3390e4f3 fix: force update 2024-05-19 16:06:47 +08:00
Soulter
337d96b41d Merge pull request #160 from Soulter/dev_default_openai_refactor
Improve the built-in OpenAI LLM interaction, personas, and web search
2024-05-19 15:23:19 +08:00
Soulter
38a1dfea98 fix: web content scraper add proxy 2024-05-19 15:08:22 +08:00
Soulter
fbef73aeec fix: websearch encoding set to utf-8 2024-05-19 14:42:28 +08:00
Soulter
d6214c2b7c fix: web search 2024-05-19 12:55:54 +08:00
Soulter
d58c86f6fc perf: improve websearch; adjust the project structure 2024-05-19 12:46:07 +08:00
Soulter
ea34c20198 perf: improve persona and LVM handling 2024-05-18 10:34:35 +08:00
Soulter
934ca94e62 refactor: rewrite the LLM OpenAI module 2024-05-17 22:56:44 +08:00
Soulter
1775327c2e chore: refact openai official 2024-05-17 09:07:11 +08:00
Soulter
707fcad8b4 feat: add a models command that lists GPT models 2024-05-17 00:06:49 +08:00
Soulter
f143c5afc6 fix: resolve an error in the plugin v subcommand 2024-05-16 23:11:07 +08:00
Soulter
99f94b2611 fix: resolve an issue where some commands could not be invoked 2024-05-16 23:04:47 +08:00
Soulter
e39c1f9116 remove: drop automatic switching of multimodal models 2024-05-16 22:46:50 +08:00
Soulter
235e0b9b8f fix: gocq logging 2024-05-09 13:24:31 +08:00
Soulter
d5a9bed8a4 fix(updator): IterableList object has no attribute origin 2024-05-08 19:18:21 +08:00
Soulter
d7dc8a7612 chore: add some logging; bump the version 2024-05-08 19:12:23 +08:00
Soulter
08cd3ca40c perf: better log output;
fix: fix a 404 when refreshing the dashboard
2024-05-08 19:01:36 +08:00
Soulter
a13562dcea fix: resolve a "config file missing" warning when the launcher loads plugins that ship with configs 2024-05-08 16:28:30 +08:00
Soulter
d7a0c0d1d0 Update requirements.txt 2024-05-07 15:58:51 +08:00
Soulter
c0729b2d29 fix: resolve plugin-reload issues 2024-04-22 19:04:15 +08:00
Soulter
a80f474290 fix: resolve an error when updating plugins 2024-04-22 18:36:56 +08:00
Soulter
699207dd54 update: version 2024-04-21 22:41:48 +08:00
Soulter
e7708010c9 fix: resolve an issue where messages could not be replied to on the gocq platform 2024-04-21 22:39:09 +08:00
Soulter
f66091e08f 🎨: clean codes 2024-04-21 22:20:23 +08:00
Soulter
03bb932f8f fix: fix a dashboard error 2024-04-21 22:16:42 +08:00
Soulter
fbf8b349e0 update: helloworld 2024-04-21 22:13:27 +08:00
Soulter
e9278fce6a !! delete: remove all support for reverse-engineered ChatGPT. 2024-04-21 22:12:09 +08:00
Soulter
9a7db956d5 fix: resolve an error caused by the readability dependency on Python 3.10.x 2024-04-21 16:40:02 +08:00
Soulter
13196dd667 perf: change the package path 2024-03-15 14:49:44 +08:00
Soulter
52b80e24d2 Merge remote-tracking branch 'refs/remotes/origin/master' 2024-03-15 14:29:48 +08:00
Soulter
7dff87e65d fix: resolve an issue where updating to a specified version failed 2024-03-15 14:29:28 +08:00
Soulter
31ee64d1b2 Update docker-image.yml 2024-03-15 14:11:57 +08:00
Soulter
8e865b6918 fix: resolve an issue where update did not work without an LLM configured 2024-03-15 14:05:16 +08:00
Soulter
66f91e5832 update: bump the version number 2024-03-15 13:50:57 +08:00
Soulter
cd2d368f9c fix: resolve an issue where updating to a specified version from the dashboard failed 2024-03-15 13:48:14 +08:00
Soulter
7736c1c9bd feat: the official QQ bot API can now opt in or out of receiving QQ group messages 2024-03-15 13:44:18 +08:00
Soulter
6728c0b7b5 chore: rename the package 2024-03-15 13:37:51 +08:00
72 changed files with 2665 additions and 2402 deletions


@@ -14,6 +14,8 @@ jobs:
uses: actions/checkout@v2
- name: Build image
run: |
git clone https://github.com/Soulter/AstrBot
cd AstrBot
docker build -t ${{ secrets.DOCKER_HUB_USERNAME }}/astrbot:latest .
- name: Publish image
run: |

.gitignore vendored

@@ -10,3 +10,4 @@ cmd_config.json
addons/plugins/
data/*
cookies.json
logs/


@@ -7,62 +7,37 @@
# AstrBot
[![GitHub release (latest by date)](https://img.shields.io/github/v/release/Soulter/AstrBot)](https://github.com/Soulter/AstrBot/releases/latest)
<img src="https://wakatime.com/badge/user/915e5316-99c6-4563-a483-ef186cf000c9/project/34412545-2e37-400f-bedc-42348713ac1f.svg" alt="wakatime">
<img src="https://img.shields.io/badge/python-3.9+-blue.svg" alt="python">
<a href="https://hub.docker.com/r/soulter/astrbot"><img alt="Docker pull" src="https://img.shields.io/docker/pulls/soulter/astrbot.svg"/></a>
<a href="https://qm.qq.com/cgi-bin/qm/qr?k=EYGsuUTfe00_iOu9JTXS7_TEpMkXOvwv&jump_from=webapi&authKey=uUEMKCROfsseS+8IzqPjzV3y1tzy4AkykwTib2jNkOFdzezF9s9XknqnIaf3CDft">
<img alt="Static Badge" src="https://img.shields.io/badge/QQ群-322154837-purple">
</a>
<img alt="Static Badge" src="https://img.shields.io/badge/频道-x42d56aki2-purple">
<a href="https://astrbot.soulter.top/center">Deployment</a>
<a href="https://github.com/Soulter/QQChannelChatGPT/issues">Report issues</a>
<a href="https://astrbot.soulter.top/center/docs/%E5%BC%80%E5%8F%91/%E6%8F%92%E4%BB%B6%E5%BC%80%E5%8F%91">Plugin development (as few as 25 lines)</a>
<a href="https://github.com/Soulter/AstrBot/issues">Report issues</a>
<a href="https://astrbot.soulter.top/center/docs/%E5%BC%80%E5%8F%91/%E6%8F%92%E4%BB%B6%E5%BC%80%E5%8F%91">Plugin development</a>
</div>
## 🤔 You may want to know
- **How do I deploy it?** [Documentation](https://astrbot.soulter.top/center/docs/%E9%83%A8%E7%BD%B2/%E9%80%9A%E8%BF%87Docker%E9%83%A8%E7%BD%B2) (if deployment fails, you are welcome to ask for help in the QQ group <3)
- **go-cqhttp fails to start with a login error** [Search for solutions here](https://github.com/Mrs4s/go-cqhttp/issues)
- **The program crashes / the bot fails to start** [Open an issue or report it in the group](https://github.com/Soulter/QQChannelChatGPT/issues)
- **How do I enable language models such as ChatGPT, Claude, or HuggingChat?** [See the docs](https://astrbot.soulter.top/center/docs/%E4%BD%BF%E7%94%A8/%E5%A4%A7%E8%AF%AD%E8%A8%80%E6%A8%A1%E5%9E%8B)
## 🧩 Features
Recent features:
1. Visual dashboard
2. One-click Docker deployment ([link](https://astrbot.soulter.top/center/docs/%E9%83%A8%E7%BD%B2/%E9%80%9A%E8%BF%87Docker%E9%83%A8%E7%BD%B2))
🌍 Supported messaging platforms/interfaces
- go-cqhttp (QQ, QQ Channels)
- Official QQ bot API
🌍 Supported messaging platforms
- QQ groups and QQ Channels (OneBot, official QQ API)
- Telegram (supported by the [astrbot_plugin_telegram](https://github.com/Soulter/astrbot_plugin_telegram) plugin)
🌍 Supported AI language models
🌍 Supported large models:
**Text models / image understanding**
- OpenAI GPT-3 (native support)
- OpenAI GPT-3.5 (native support)
- OpenAI GPT-4 (native support)
- OpenAI GPT and DALL·E series
- Claude (free; supported by the [LLMs plugin](https://github.com/Soulter/llms))
- HuggingChat (free; supported by the [LLMs plugin](https://github.com/Soulter/llms))
- Gemini (free; supported by the [LLMs plugin](https://github.com/Soulter/llms))
**Image generation**
- OpenAI DALL·E API
- NovelAI/Naifu (free; supported by the [AIDraw plugin](https://github.com/Soulter/aidraw))
🌍 Bot capabilities:
- Visual dashboard (beta)
- Deploy the bot to QQ and QQ Channels at the same time
- LLM chat
- LLM web search **(currently OpenAI-family models only; on the latest version, enable it with the web on command)**
- Plugins (type `plugin` in a QQ or QQ Channel chat for details)
- Render text replies as images (replying in image/markdown form **greatly reduces the chance of being flagged**; enable qq_pic_mode manually in `cmd_config.json`)
- Persona settings
- Keyword replies
- Hot update (to update this project, **just** type `update latest r` in a QQ or QQ Channel chat)
- One-click Windows deployment: https://github.com/Soulter/QQChatGPTLauncher/releases/latest
- LLM chat, personas, web search
- Visual management dashboard
- Handle messages from multiple platforms at the same time
- Per-user session isolation
- Plugin support
- Text-to-image replies
<!--
### 基本功能
@@ -133,7 +108,7 @@
- `LLMS`: https://github.com/Soulter/llms | Claude and HuggingChat large language model integration.
- `GoodPlugins`: https://github.com/Soulter/goodplugins | random anime images, anime search, good-news image generator, etc.
- `GoodPlugins`: https://github.com/Soulter/goodplugins | random anime images, anime search, good-news image generator, etc.
- `sysstat`: https://github.com/Soulter/sysstatqcbot | view system status
@@ -141,6 +116,8 @@
- `liferestart`: https://github.com/Soulter/liferestart | Life Restart simulator
- `astrbot_plugin_aiocqhttp`: https://github.com/Soulter/astrbot_plugin_aiocqhttp | aiocqhttp adapter; connects to OneBot protocol implementations that support reverse WS, such as Lagrange.OneBot and Shamrock.
<img width="900" alt="image" src="https://github.com/Soulter/AstrBot/assets/37870767/824d1ff3-7b85-481c-b795-8e62dedb9fd7">
@@ -160,7 +137,6 @@
- `/key` add keys dynamically
- `/set` persona settings panel
- `/keyword nihao 你好` set a keyword reply: nihao -> 你好
- `/revgpt` switch to the reverse-engineered ChatGPT library
- `/画` draw a picture
#### Reverse-engineered ChatGPT language models


@@ -1,14 +1,17 @@
from aip import AipContentCensor
class BaiduJudge:
def __init__(self, baidu_configs) -> None:
if 'app_id' in baidu_configs and 'api_key' in baidu_configs and 'secret_key' in baidu_configs:
self.app_id = str(baidu_configs['app_id'])
self.api_key = baidu_configs['api_key']
self.secret_key = baidu_configs['secret_key']
self.client = AipContentCensor(self.app_id, self.api_key, self.secret_key)
self.client = AipContentCensor(
self.app_id, self.api_key, self.secret_key)
else:
raise ValueError("Baidu configs error! Please fill in the Baidu content censor service configuration!")
def judge(self, text):
res = self.client.textCensorUserDefined(text)
if 'conclusionType' not in res:
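The diff above is truncated, but the two checks it shows — credential presence in `BaiduJudge.__init__` and the `conclusionType` guard in `judge` — can be sketched offline. This is a minimal sketch, not the project's actual code: the stub stands in for `aip.AipContentCensor`, and the response shape and the meaning of `conclusionType` (1 = compliant) are assumptions based on Baidu's content-censor API, not part of this diff.

```python
class StubCensorClient:
    """Offline stand-in for aip.AipContentCensor (assumed method name)."""

    def textCensorUserDefined(self, text):
        # Assumed Baidu convention: conclusionType 1 = compliant, 2 = non-compliant.
        return {"conclusionType": 1, "conclusion": "ok"}


def validate_baidu_configs(baidu_configs):
    """Mirrors the membership check in BaiduJudge.__init__ above."""
    required = ("app_id", "api_key", "secret_key")
    if not all(key in baidu_configs for key in required):
        raise ValueError("Baidu configs error! Missing content-censor credentials.")
    # app_id is coerced to str, as in the diff above
    return (str(baidu_configs["app_id"]),
            baidu_configs["api_key"],
            baidu_configs["secret_key"])


def judge(client, text):
    """Mirrors the conclusionType guard shown at the end of the diff."""
    res = client.textCensorUserDefined(text)
    if "conclusionType" not in res:
        return False, "malformed censor response"
    return res["conclusionType"] == 1, res.get("conclusion", "")
```

With the stub, `judge(StubCensorClient(), "hello")` returns `(True, "ok")`, and `validate_baidu_configs({})` raises `ValueError`, matching the error path in the diff.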


@@ -1 +1 @@
import{x as i,o as l,c as _,w as s,a as e,f as a,J as m,V as c,b as t,t as u,ad as p,B as n,ae as o,j as f}from"./index-9075b0bb.js";const b={class:"text-h3"},h={class:"d-flex align-center"},g={class:"d-flex align-center"},V=i({__name:"BaseBreadcrumb",props:{title:String,breadcrumbs:Array,icon:String},setup(d){const r=d;return(x,B)=>(l(),_(c,{class:"page-breadcrumb mb-1 mt-1"},{default:s(()=>[e(a,{cols:"12",md:"12"},{default:s(()=>[e(m,{variant:"outlined",elevation:"0",class:"px-4 py-3 withbg"},{default:s(()=>[e(c,{"no-gutters":"",class:"align-center"},{default:s(()=>[e(a,{md:"5"},{default:s(()=>[t("h3",b,u(r.title),1)]),_:1}),e(a,{md:"7",sm:"12",cols:"12"},{default:s(()=>[e(p,{items:r.breadcrumbs,class:"text-h5 justify-md-end pa-1"},{divider:s(()=>[t("div",h,[e(n(o),{size:"17"})])]),prepend:s(()=>[e(f,{size:"small",icon:"mdi-home",class:"text-secondary mr-2"}),t("div",g,[e(n(o),{size:"17"})])]),_:1},8,["items"])]),_:1})]),_:1})]),_:1})]),_:1})]),_:1}))}});export{V as _};
import{x as i,o as l,c as _,w as s,a as e,f as a,J as m,V as c,b as t,t as u,ad as p,B as n,ae as o,j as f}from"./index-dc96e1be.js";const b={class:"text-h3"},h={class:"d-flex align-center"},g={class:"d-flex align-center"},V=i({__name:"BaseBreadcrumb",props:{title:String,breadcrumbs:Array,icon:String},setup(d){const r=d;return(x,B)=>(l(),_(c,{class:"page-breadcrumb mb-1 mt-1"},{default:s(()=>[e(a,{cols:"12",md:"12"},{default:s(()=>[e(m,{variant:"outlined",elevation:"0",class:"px-4 py-3 withbg"},{default:s(()=>[e(c,{"no-gutters":"",class:"align-center"},{default:s(()=>[e(a,{md:"5"},{default:s(()=>[t("h3",b,u(r.title),1)]),_:1}),e(a,{md:"7",sm:"12",cols:"12"},{default:s(()=>[e(p,{items:r.breadcrumbs,class:"text-h5 justify-md-end pa-1"},{divider:s(()=>[t("div",h,[e(n(o),{size:"17"})])]),prepend:s(()=>[e(f,{size:"small",icon:"mdi-home",class:"text-secondary mr-2"}),t("div",g,[e(n(o),{size:"17"})])]),_:1},8,["items"])]),_:1})]),_:1})]),_:1})]),_:1})]),_:1}))}});export{V as _};


@@ -1 +1 @@
import{x as e,o as a,c as t,w as o,a as s,B as n,Z as r,W as c}from"./index-9075b0bb.js";const f=e({__name:"BlankLayout",setup(p){return(u,_)=>(a(),t(c,null,{default:o(()=>[s(n(r))]),_:1}))}});export{f as default};
import{x as e,o as a,c as t,w as o,a as s,B as n,Z as r,W as c}from"./index-dc96e1be.js";const f=e({__name:"BlankLayout",setup(p){return(u,_)=>(a(),t(c,null,{default:o(()=>[s(n(r))]),_:1}))}});export{f as default};


@@ -1 +1 @@
import{_ as m}from"./BaseBreadcrumb.vue_vue_type_style_index_0_lang-3bf6ea80.js";import{_}from"./UiParentCard.vue_vue_type_script_setup_true_lang-73bcbbd5.js";import{x as p,D as a,o as r,s,a as e,w as t,f as o,V as i,F as n,u as g,c as h,a0 as b,e as x,t as y}from"./index-9075b0bb.js";const P=p({__name:"ColorPage",setup(C){const c=a({title:"Colors Page"}),d=a([{title:"Utilities",disabled:!1,href:"#"},{title:"Colors",disabled:!0,href:"#"}]),u=a(["primary","lightprimary","secondary","lightsecondary","info","success","accent","warning","error","darkText","lightText","borderLight","inputBorder","containerBg"]);return(V,k)=>(r(),s(n,null,[e(m,{title:c.value.title,breadcrumbs:d.value},null,8,["title","breadcrumbs"]),e(i,null,{default:t(()=>[e(o,{cols:"12",md:"12"},{default:t(()=>[e(_,{title:"Color Palette"},{default:t(()=>[e(i,null,{default:t(()=>[(r(!0),s(n,null,g(u.value,(l,f)=>(r(),h(o,{md:"3",cols:"12",key:f},{default:t(()=>[e(b,{rounded:"md",class:"align-center justify-center d-flex",height:"100",width:"100%",color:l},{default:t(()=>[x("class: "+y(l),1)]),_:2},1032,["color"])]),_:2},1024))),128))]),_:1})]),_:1})]),_:1})]),_:1})],64))}});export{P as default};
import{_ as m}from"./BaseBreadcrumb.vue_vue_type_style_index_0_lang-e31f96f8.js";import{_}from"./UiParentCard.vue_vue_type_script_setup_true_lang-f2b2db58.js";import{x as p,D as a,o as r,s,a as e,w as t,f as o,V as i,F as n,u as g,c as h,a0 as b,e as x,t as y}from"./index-dc96e1be.js";const P=p({__name:"ColorPage",setup(C){const c=a({title:"Colors Page"}),d=a([{title:"Utilities",disabled:!1,href:"#"},{title:"Colors",disabled:!0,href:"#"}]),u=a(["primary","lightprimary","secondary","lightsecondary","info","success","accent","warning","error","darkText","lightText","borderLight","inputBorder","containerBg"]);return(V,k)=>(r(),s(n,null,[e(m,{title:c.value.title,breadcrumbs:d.value},null,8,["title","breadcrumbs"]),e(i,null,{default:t(()=>[e(o,{cols:"12",md:"12"},{default:t(()=>[e(_,{title:"Color Palette"},{default:t(()=>[e(i,null,{default:t(()=>[(r(!0),s(n,null,g(u.value,(l,f)=>(r(),h(o,{md:"3",cols:"12",key:f},{default:t(()=>[e(b,{rounded:"md",class:"align-center justify-center d-flex",height:"100",width:"100%",color:l},{default:t(()=>[x("class: "+y(l),1)]),_:2},1032,["color"])]),_:2},1024))),128))]),_:1})]),_:1})]),_:1})]),_:1})],64))}});export{P as default};


@@ -1 +1 @@
import{o as l,s as o,u as c,c as n,w as u,Q as g,b as s,R as k,F as t,ab as h,O as p,t as m,a as V,ac as f,i as C,q as x,k as v,A as U}from"./index-9075b0bb.js";import{_ as w}from"./UiParentCard.vue_vue_type_script_setup_true_lang-73bcbbd5.js";const S={__name:"ConfigDetailCard",props:{config:Array},setup(d){return(y,B)=>(l(!0),o(t,null,c(d.config,r=>(l(),n(w,{key:r.name,title:r.name,style:{"margin-bottom":"16px"}},{default:u(()=>[g(s("a",null,"No data",512),[[k,d.config.length===0]]),(l(!0),o(t,null,c(r.body,e=>(l(),o(t,null,[e.config_type==="item"?(l(),o(t,{key:0},[e.val_type==="bool"?(l(),n(h,{key:0,modelValue:e.value,"onUpdate:modelValue":a=>e.value=a,label:e.name,hint:e.description,color:"primary",inset:""},null,8,["modelValue","onUpdate:modelValue","label","hint"])):e.val_type==="str"?(l(),n(p,{key:1,modelValue:e.value,"onUpdate:modelValue":a=>e.value=a,label:e.name,hint:e.description,style:{"margin-bottom":"8px"},variant:"outlined"},null,8,["modelValue","onUpdate:modelValue","label","hint"])):e.val_type==="int"?(l(),n(p,{key:2,modelValue:e.value,"onUpdate:modelValue":a=>e.value=a,label:e.name,hint:e.description,style:{"margin-bottom":"8px"},variant:"outlined"},null,8,["modelValue","onUpdate:modelValue","label","hint"])):e.val_type==="list"?(l(),o(t,{key:3},[s("span",null,m(e.name),1),V(f,{modelValue:e.value,"onUpdate:modelValue":a=>e.value=a,chips:"",clearable:"",label:"请添加",multiple:"","prepend-icon":"mdi-tag-multiple-outline"},{selection:u(({attrs:a,item:i,select:b,selected:_})=>[V(C,x(a,{"model-value":_,closable:"",onClick:b,"onClick:close":D=>y.remove(i)}),{default:u(()=>[s("strong",null,m(i),1)]),_:2},1040,["model-value","onClick","onClick:close"])]),_:2},1032,["modelValue","onUpdate:modelValue"])],64)):v("",!0)],64)):e.config_type==="divider"?(l(),n(U,{key:1,style:{"margin-top":"8px","margin-bottom":"8px"}})):v("",!0)],64))),256))]),_:2},1032,["title"]))),128))}};export{S as _};
import{o as l,s as o,u as c,c as n,w as u,Q as g,b as s,R as k,F as t,ab as h,O as p,t as m,a as V,ac as f,i as C,q as x,k as v,A as U}from"./index-dc96e1be.js";import{_ as w}from"./UiParentCard.vue_vue_type_script_setup_true_lang-f2b2db58.js";const S={__name:"ConfigDetailCard",props:{config:Array},setup(d){return(y,B)=>(l(!0),o(t,null,c(d.config,r=>(l(),n(w,{key:r.name,title:r.name,style:{"margin-bottom":"16px"}},{default:u(()=>[g(s("a",null,"No data",512),[[k,d.config.length===0]]),(l(!0),o(t,null,c(r.body,e=>(l(),o(t,null,[e.config_type==="item"?(l(),o(t,{key:0},[e.val_type==="bool"?(l(),n(h,{key:0,modelValue:e.value,"onUpdate:modelValue":a=>e.value=a,label:e.name,hint:e.description,color:"primary",inset:""},null,8,["modelValue","onUpdate:modelValue","label","hint"])):e.val_type==="str"?(l(),n(p,{key:1,modelValue:e.value,"onUpdate:modelValue":a=>e.value=a,label:e.name,hint:e.description,style:{"margin-bottom":"8px"},variant:"outlined"},null,8,["modelValue","onUpdate:modelValue","label","hint"])):e.val_type==="int"?(l(),n(p,{key:2,modelValue:e.value,"onUpdate:modelValue":a=>e.value=a,label:e.name,hint:e.description,style:{"margin-bottom":"8px"},variant:"outlined"},null,8,["modelValue","onUpdate:modelValue","label","hint"])):e.val_type==="list"?(l(),o(t,{key:3},[s("span",null,m(e.name),1),V(f,{modelValue:e.value,"onUpdate:modelValue":a=>e.value=a,chips:"",clearable:"",label:"请添加",multiple:"","prepend-icon":"mdi-tag-multiple-outline"},{selection:u(({attrs:a,item:i,select:b,selected:_})=>[V(C,x(a,{"model-value":_,closable:"",onClick:b,"onClick:close":D=>y.remove(i)}),{default:u(()=>[s("strong",null,m(i),1)]),_:2},1040,["model-value","onClick","onClick:close"])]),_:2},1032,["modelValue","onUpdate:modelValue"])],64)):v("",!0)],64)):e.config_type==="divider"?(l(),n(U,{key:1,style:{"margin-top":"8px","margin-bottom":"8px"}})):v("",!0)],64))),256))]),_:2},1032,["title"]))),128))}};export{S as _};


@@ -1 +1 @@
import{_ as y}from"./UiParentCard.vue_vue_type_script_setup_true_lang-73bcbbd5.js";import{x as h,o,c as u,w as t,a,a8 as b,b as c,K as x,e as f,t as g,G as V,A as w,L as S,a9 as $,J as B,s as _,d as v,F as d,u as p,f as G,V as T,aa as j,T as l}from"./index-9075b0bb.js";import{_ as m}from"./ConfigDetailCard-d45b9ca7.js";const D={class:"d-sm-flex align-center justify-space-between"},C=h({__name:"ConfigGroupCard",props:{title:String},setup(e){const s=e;return(i,n)=>(o(),u(B,{variant:"outlined",elevation:"0",class:"withbg",style:{width:"50%"}},{default:t(()=>[a(b,{style:{padding:"10px 20px"}},{default:t(()=>[c("div",D,[a(x,null,{default:t(()=>[f(g(s.title),1)]),_:1}),a(V)])]),_:1}),a(w),a(S,null,{default:t(()=>[$(i.$slots,"default")]),_:3})]),_:3}))}}),I={style:{display:"flex","flex-direction":"row","justify-content":"space-between","align-items":"center","margin-bottom":"12px"}},N={style:{display:"flex","flex-direction":"row"}},R={style:{"margin-right":"10px",color:"black"}},F={style:{color:"#222"}},k=h({__name:"ConfigGroupItem",props:{title:String,desc:String,btnRoute:String,namespace:String},setup(e){const 
s=e;return(i,n)=>(o(),_("div",I,[c("div",N,[c("h3",R,g(s.title),1),c("p",F,g(s.desc),1)]),a(v,{to:s.btnRoute,color:"primary",class:"ml-2",style:{"border-radius":"10px"}},{default:t(()=>[f("配置")]),_:1},8,["to"])]))}}),L={style:{display:"flex","flex-direction":"row",padding:"16px",gap:"16px",width:"100%"}},P={name:"ConfigPage",components:{UiParentCard:y,ConfigGroupCard:C,ConfigGroupItem:k,ConfigDetailCard:m},data(){return{config_data:[],config_base:[],save_message_snack:!1,save_message:"",save_message_success:"",config_outline:[],namespace:""}},mounted(){this.getConfig()},methods:{switchConfig(e){l.get("/api/configs?namespace="+e).then(s=>{this.namespace=e,this.config_data=s.data.data,console.log(this.config_data)}).catch(s=>{save_message=s,save_message_snack=!0,save_message_success="error"})},getConfig(){l.get("/api/config_outline").then(e=>{this.config_outline=e.data.data,console.log(this.config_outline)}).catch(e=>{save_message=e,save_message_snack=!0,save_message_success="error"}),l.get("/api/configs").then(e=>{this.config_base=e.data.data,console.log(this.config_data)}).catch(e=>{save_message=e,save_message_snack=!0,save_message_success="error"})},updateConfig(){l.post("/api/configs",{base_config:this.config_base,config:this.config_data,namespace:this.namespace}).then(e=>{e.data.status==="success"?(this.save_message=e.data.message,this.save_message_snack=!0,this.save_message_success="success"):(this.save_message=e.data.message,this.save_message_snack=!0,this.save_message_success="error")}).catch(e=>{this.save_message=e,this.save_message_snack=!0,this.save_message_success="error"})}}},J=Object.assign(P,{setup(e){return(s,i)=>(o(),_(d,null,[a(T,null,{default:t(()=>[c("div",L,[(o(!0),_(d,null,p(s.config_outline,n=>(o(),u(C,{key:n.name,title:n.name},{default:t(()=>[(o(!0),_(d,null,p(n.body,r=>(o(),u(k,{title:r.title,desc:r.desc,namespace:r.namespace,onClick:U=>s.switchConfig(r.namespace)},null,8,["title","desc","namespace","onClick"]))),256))]),_:2},1032,["title"])))
,128))]),a(G,{cols:"12",md:"12"},{default:t(()=>[a(m,{config:s.config_data},null,8,["config"]),a(m,{config:s.config_base},null,8,["config"])]),_:1})]),_:1}),a(v,{icon:"mdi-content-save",size:"x-large",style:{position:"fixed",right:"52px",bottom:"52px"},color:"darkprimary",onClick:s.updateConfig},null,8,["onClick"]),a(j,{timeout:2e3,elevation:"24",color:s.save_message_success,modelValue:s.save_message_snack,"onUpdate:modelValue":i[0]||(i[0]=n=>s.save_message_snack=n)},{default:t(()=>[f(g(s.save_message),1)]),_:1},8,["color","modelValue"])],64))}});export{J as default};
import{_ as y}from"./UiParentCard.vue_vue_type_script_setup_true_lang-f2b2db58.js";import{x as h,o,c as u,w as t,a,a8 as b,b as c,K as x,e as f,t as g,G as V,A as w,L as S,a9 as $,J as B,s as _,d as v,F as d,u as p,f as G,V as T,aa as j,T as l}from"./index-dc96e1be.js";import{_ as m}from"./ConfigDetailCard-8467c848.js";const D={class:"d-sm-flex align-center justify-space-between"},C=h({__name:"ConfigGroupCard",props:{title:String},setup(e){const s=e;return(i,n)=>(o(),u(B,{variant:"outlined",elevation:"0",class:"withbg",style:{width:"50%"}},{default:t(()=>[a(b,{style:{padding:"10px 20px"}},{default:t(()=>[c("div",D,[a(x,null,{default:t(()=>[f(g(s.title),1)]),_:1}),a(V)])]),_:1}),a(w),a(S,null,{default:t(()=>[$(i.$slots,"default")]),_:3})]),_:3}))}}),I={style:{display:"flex","flex-direction":"row","justify-content":"space-between","align-items":"center","margin-bottom":"12px"}},N={style:{display:"flex","flex-direction":"row"}},R={style:{"margin-right":"10px",color:"black"}},F={style:{color:"#222"}},k=h({__name:"ConfigGroupItem",props:{title:String,desc:String,btnRoute:String,namespace:String},setup(e){const 
s=e;return(i,n)=>(o(),_("div",I,[c("div",N,[c("h3",R,g(s.title),1),c("p",F,g(s.desc),1)]),a(v,{to:s.btnRoute,color:"primary",class:"ml-2",style:{"border-radius":"10px"}},{default:t(()=>[f("配置")]),_:1},8,["to"])]))}}),L={style:{display:"flex","flex-direction":"row",padding:"16px",gap:"16px",width:"100%"}},P={name:"ConfigPage",components:{UiParentCard:y,ConfigGroupCard:C,ConfigGroupItem:k,ConfigDetailCard:m},data(){return{config_data:[],config_base:[],save_message_snack:!1,save_message:"",save_message_success:"",config_outline:[],namespace:""}},mounted(){this.getConfig()},methods:{switchConfig(e){l.get("/api/configs?namespace="+e).then(s=>{this.namespace=e,this.config_data=s.data.data,console.log(this.config_data)}).catch(s=>{save_message=s,save_message_snack=!0,save_message_success="error"})},getConfig(){l.get("/api/config_outline").then(e=>{this.config_outline=e.data.data,console.log(this.config_outline)}).catch(e=>{save_message=e,save_message_snack=!0,save_message_success="error"}),l.get("/api/configs").then(e=>{this.config_base=e.data.data,console.log(this.config_data)}).catch(e=>{save_message=e,save_message_snack=!0,save_message_success="error"})},updateConfig(){l.post("/api/configs",{base_config:this.config_base,config:this.config_data,namespace:this.namespace}).then(e=>{e.data.status==="success"?(this.save_message=e.data.message,this.save_message_snack=!0,this.save_message_success="success"):(this.save_message=e.data.message,this.save_message_snack=!0,this.save_message_success="error")}).catch(e=>{this.save_message=e,this.save_message_snack=!0,this.save_message_success="error"})}}},J=Object.assign(P,{setup(e){return(s,i)=>(o(),_(d,null,[a(T,null,{default:t(()=>[c("div",L,[(o(!0),_(d,null,p(s.config_outline,n=>(o(),u(C,{key:n.name,title:n.name},{default:t(()=>[(o(!0),_(d,null,p(n.body,r=>(o(),u(k,{title:r.title,desc:r.desc,namespace:r.namespace,onClick:U=>s.switchConfig(r.namespace)},null,8,["title","desc","namespace","onClick"]))),256))]),_:2},1032,["title"])))
,128))]),a(G,{cols:"12",md:"12"},{default:t(()=>[a(m,{config:s.config_data},null,8,["config"]),a(m,{config:s.config_base},null,8,["config"])]),_:1})]),_:1}),a(v,{icon:"mdi-content-save",size:"x-large",style:{position:"fixed",right:"52px",bottom:"52px"},color:"darkprimary",onClick:s.updateConfig},null,8,["onClick"]),a(j,{timeout:2e3,elevation:"24",color:s.save_message_success,modelValue:s.save_message_snack,"onUpdate:modelValue":i[0]||(i[0]=n=>s.save_message_snack=n)},{default:t(()=>[f(g(s.save_message),1)]),_:1},8,["color","modelValue"])],64))}});export{J as default};


@@ -1 +1 @@
import{_ as t}from"./_plugin-vue_export-helper-c27b6911.js";import{o,c,w as s,V as i,a as r,b as e,d as l,e as a,f as d}from"./index-9075b0bb.js";const n="/assets/img-error-bg-ab6474a0.svg",_="/assets/img-error-blue-2675a7a9.svg",m="/assets/img-error-text-a6aebfa0.svg",g="/assets/img-error-purple-edee3fbc.svg";const p={},u={class:"text-center"},f=e("div",{class:"CardMediaWrapper"},[e("img",{src:n,alt:"grid",class:"w-100"}),e("img",{src:_,alt:"grid",class:"CardMediaParts"}),e("img",{src:m,alt:"build",class:"CardMediaBuild"}),e("img",{src:g,alt:"build",class:"CardMediaBuild"})],-1),h=e("h1",{class:"text-h1"},"Something is wrong",-1),v=e("p",null,[e("small",null,[a("The page you are looking was moved, removed, "),e("br"),a("renamed, or might never exist! ")])],-1);function x(b,V){return o(),c(i,{"no-gutters":"",class:"h-100vh"},{default:s(()=>[r(d,{class:"d-flex align-center justify-center"},{default:s(()=>[e("div",u,[f,h,v,r(l,{variant:"flat",color:"primary",class:"mt-4",to:"/","prepend-icon":"mdi-home"},{default:s(()=>[a(" Home")]),_:1})])]),_:1})]),_:1})}const C=t(p,[["render",x]]);export{C as default};
import{_ as t}from"./_plugin-vue_export-helper-c27b6911.js";import{o,c,w as s,V as i,a as r,b as e,d as l,e as a,f as d}from"./index-dc96e1be.js";const n="/assets/img-error-bg-ab6474a0.svg",_="/assets/img-error-blue-2675a7a9.svg",m="/assets/img-error-text-a6aebfa0.svg",g="/assets/img-error-purple-edee3fbc.svg";const p={},u={class:"text-center"},f=e("div",{class:"CardMediaWrapper"},[e("img",{src:n,alt:"grid",class:"w-100"}),e("img",{src:_,alt:"grid",class:"CardMediaParts"}),e("img",{src:m,alt:"build",class:"CardMediaBuild"}),e("img",{src:g,alt:"build",class:"CardMediaBuild"})],-1),h=e("h1",{class:"text-h1"},"Something is wrong",-1),v=e("p",null,[e("small",null,[a("The page you are looking was moved, removed, "),e("br"),a("renamed, or might never exist! ")])],-1);function x(b,V){return o(),c(i,{"no-gutters":"",class:"h-100vh"},{default:s(()=>[r(d,{class:"d-flex align-center justify-center"},{default:s(()=>[e("div",u,[f,h,v,r(l,{variant:"flat",color:"primary",class:"mt-4",to:"/","prepend-icon":"mdi-home"},{default:s(()=>[a(" Home")]),_:1})])]),_:1})]),_:1})}const C=t(p,[["render",x]]);export{C as default};

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@@ -1,4 +1,4 @@
import{_ as _t}from"./LogoDark.vue_vue_type_script_setup_true_lang-b3521e6c.js";import{x as ke,af as we,r as Ot,ag as Vt,D as A,ah as Ne,a2 as P,B as I,ai as Q,aj as St,I as Be,ak as Ie,al as Et,am as jt,an as At,ao as G,y as wt,o as Re,c as tt,w as C,a as j,O as He,b as ge,ap as Ft,d as Pt,e as Ge,s as Ct,aq as Tt,t as Nt,k as Bt,U as It,f as Fe,N as Rt,V as Pe,J as Ye,L as kt}from"./index-9075b0bb.js";import{a as Mt}from"./md5-48eb25f3.js";/**
import{_ as _t}from"./LogoDark.vue_vue_type_script_setup_true_lang-7df35c25.js";import{x as ke,af as we,r as Ot,ag as Vt,D as A,ah as Ne,a2 as P,B as I,ai as Q,aj as St,I as Be,ak as Ie,al as Et,am as jt,an as At,ao as G,y as wt,o as Re,c as tt,w as C,a as j,O as He,b as ge,ap as Ft,d as Pt,e as Ge,s as Ct,aq as Tt,t as Nt,k as Bt,U as It,f as Fe,N as Rt,V as Pe,J as Ye,L as kt}from"./index-dc96e1be.js";import{a as Mt}from"./md5-45627dcb.js";/**
* vee-validate v4.11.3
* (c) 2023 Abdelrahman Awad
* @license MIT


@@ -1 +1 @@
import{av as _,x as d,D as n,o as c,s as m,a as f,w as p,Q as r,b as a,R as o,B as t,aw as h}from"./index-9075b0bb.js";const s={Sidebar_drawer:!0,Customizer_drawer:!1,mini_sidebar:!1,fontTheme:"Roboto",inputBg:!1},l=_({id:"customizer",state:()=>({Sidebar_drawer:s.Sidebar_drawer,Customizer_drawer:s.Customizer_drawer,mini_sidebar:s.mini_sidebar,fontTheme:"Poppins",inputBg:s.inputBg}),getters:{},actions:{SET_SIDEBAR_DRAWER(){this.Sidebar_drawer=!this.Sidebar_drawer},SET_MINI_SIDEBAR(e){this.mini_sidebar=e},SET_FONT(e){this.fontTheme=e}}}),u={class:"logo",style:{display:"flex","align-items":"center"}},b={style:{"font-size":"24px","font-weight":"1000"}},w={style:{"font-size":"20px","font-weight":"1000"}},S={style:{"font-size":"20px"}},z=d({__name:"LogoDark",setup(e){n("rgb(var(--v-theme-primary))"),n("rgb(var(--v-theme-secondary))");const i=l();return(g,B)=>(c(),m("div",u,[f(t(h),{to:"/",style:{"text-decoration":"none",color:"black"}},{default:p(()=>[r(a("span",b,"AstrBot 仪表盘",512),[[o,!t(i).mini_sidebar]]),r(a("span",w,"Astr",512),[[o,t(i).mini_sidebar]]),r(a("span",S,"Bot",512),[[o,t(i).mini_sidebar]])]),_:1})]))}});export{z as _,l as u};
import{av as _,x as d,D as n,o as c,s as m,a as f,w as p,Q as r,b as a,R as o,B as t,aw as h}from"./index-dc96e1be.js";const s={Sidebar_drawer:!0,Customizer_drawer:!1,mini_sidebar:!1,fontTheme:"Roboto",inputBg:!1},l=_({id:"customizer",state:()=>({Sidebar_drawer:s.Sidebar_drawer,Customizer_drawer:s.Customizer_drawer,mini_sidebar:s.mini_sidebar,fontTheme:"Poppins",inputBg:s.inputBg}),getters:{},actions:{SET_SIDEBAR_DRAWER(){this.Sidebar_drawer=!this.Sidebar_drawer},SET_MINI_SIDEBAR(e){this.mini_sidebar=e},SET_FONT(e){this.fontTheme=e}}}),u={class:"logo",style:{display:"flex","align-items":"center"}},b={style:{"font-size":"24px","font-weight":"1000"}},w={style:{"font-size":"20px","font-weight":"1000"}},S={style:{"font-size":"20px"}},z=d({__name:"LogoDark",setup(e){n("rgb(var(--v-theme-primary))"),n("rgb(var(--v-theme-secondary))");const i=l();return(g,B)=>(c(),m("div",u,[f(t(h),{to:"/",style:{"text-decoration":"none",color:"black"}},{default:p(()=>[r(a("span",b,"AstrBot 仪表盘",512),[[o,!t(i).mini_sidebar]]),r(a("span",w,"Astr",512),[[o,t(i).mini_sidebar]]),r(a("span",S,"Bot",512),[[o,t(i).mini_sidebar]])]),_:1})]))}});export{z as _,l as u};


@@ -1 +1 @@
import{_ as o}from"./BaseBreadcrumb.vue_vue_type_style_index_0_lang-3bf6ea80.js";import{_ as i}from"./UiParentCard.vue_vue_type_script_setup_true_lang-73bcbbd5.js";import{x as n,D as a,o as c,s as m,a as e,w as t,f as d,b as f,V as _,F as u}from"./index-9075b0bb.js";const p=["innerHTML"],v=n({__name:"MaterialIcons",setup(b){const s=a({title:"Material Icons"}),r=a('<iframe src="https://materialdesignicons.com/" frameborder="0" width="100%" height="1000"></iframe>'),l=a([{title:"Icons",disabled:!1,href:"#"},{title:"Material Icons",disabled:!0,href:"#"}]);return(h,M)=>(c(),m(u,null,[e(o,{title:s.value.title,breadcrumbs:l.value},null,8,["title","breadcrumbs"]),e(_,null,{default:t(()=>[e(d,{cols:"12",md:"12"},{default:t(()=>[e(i,{title:"Material Icons"},{default:t(()=>[f("div",{innerHTML:r.value},null,8,p)]),_:1})]),_:1})]),_:1})],64))}});export{v as default};
import{_ as o}from"./BaseBreadcrumb.vue_vue_type_style_index_0_lang-e31f96f8.js";import{_ as i}from"./UiParentCard.vue_vue_type_script_setup_true_lang-f2b2db58.js";import{x as n,D as a,o as c,s as m,a as e,w as t,f as d,b as f,V as _,F as u}from"./index-dc96e1be.js";const p=["innerHTML"],v=n({__name:"MaterialIcons",setup(b){const s=a({title:"Material Icons"}),r=a('<iframe src="https://materialdesignicons.com/" frameborder="0" width="100%" height="1000"></iframe>'),l=a([{title:"Icons",disabled:!1,href:"#"},{title:"Material Icons",disabled:!0,href:"#"}]);return(h,M)=>(c(),m(u,null,[e(o,{title:s.value.title,breadcrumbs:l.value},null,8,["title","breadcrumbs"]),e(_,null,{default:t(()=>[e(d,{cols:"12",md:"12"},{default:t(()=>[e(i,{title:"Material Icons"},{default:t(()=>[f("div",{innerHTML:r.value},null,8,p)]),_:1})]),_:1})]),_:1})],64))}});export{v as default};

@@ -1 +1 @@
import{_ as B}from"./LogoDark.vue_vue_type_script_setup_true_lang-b3521e6c.js";import{x as y,D as o,o as b,s as U,a as e,w as a,b as n,B as $,d as u,f as d,A as _,e as f,V as r,O as m,ap as A,au as E,F,c as T,N as q,J as V,L as P}from"./index-9075b0bb.js";const z="/assets/social-google-a359a253.svg",N=["src"],S=n("span",{class:"ml-2"},"Sign up with Google",-1),D=n("h5",{class:"text-h5 text-center my-4 mb-8"},"Sign up with Email address",-1),G={class:"d-sm-inline-flex align-center mt-2 mb-7 mb-sm-0 font-weight-bold"},L=n("a",{href:"#",class:"ml-1 text-lightText"},"Terms and Condition",-1),O={class:"mt-5 text-right"},j=y({__name:"AuthRegister",setup(w){const c=o(!1),i=o(!1),p=o(""),v=o(""),g=o(),h=o(""),x=o(""),k=o([s=>!!s||"Password is required",s=>s&&s.length<=10||"Password must be less than 10 characters"]),C=o([s=>!!s||"E-mail is required",s=>/.+@.+\..+/.test(s)||"E-mail must be valid"]);function R(){g.value.validate()}return(s,l)=>(b(),U(F,null,[e(u,{block:"",color:"primary",variant:"outlined",class:"text-lightText googleBtn"},{default:a(()=>[n("img",{src:$(z),alt:"google"},null,8,N),S]),_:1}),e(r,null,{default:a(()=>[e(d,{class:"d-flex align-center"},{default:a(()=>[e(_,{class:"custom-devider"}),e(u,{variant:"outlined",class:"orbtn",rounded:"md",size:"small"},{default:a(()=>[f("OR")]),_:1}),e(_,{class:"custom-devider"})]),_:1})]),_:1}),D,e(E,{ref_key:"Regform",ref:g,"lazy-validation":"",action:"/dashboards/analytical",class:"mt-7 
loginForm"},{default:a(()=>[e(r,null,{default:a(()=>[e(d,{cols:"12",sm:"6"},{default:a(()=>[e(m,{modelValue:h.value,"onUpdate:modelValue":l[0]||(l[0]=t=>h.value=t),density:"comfortable","hide-details":"auto",variant:"outlined",color:"primary",label:"Firstname"},null,8,["modelValue"])]),_:1}),e(d,{cols:"12",sm:"6"},{default:a(()=>[e(m,{modelValue:x.value,"onUpdate:modelValue":l[1]||(l[1]=t=>x.value=t),density:"comfortable","hide-details":"auto",variant:"outlined",color:"primary",label:"Lastname"},null,8,["modelValue"])]),_:1})]),_:1}),e(m,{modelValue:v.value,"onUpdate:modelValue":l[2]||(l[2]=t=>v.value=t),rules:C.value,label:"Email Address / Username",class:"mt-4 mb-4",required:"",density:"comfortable","hide-details":"auto",variant:"outlined",color:"primary"},null,8,["modelValue","rules"]),e(m,{modelValue:p.value,"onUpdate:modelValue":l[3]||(l[3]=t=>p.value=t),rules:k.value,label:"Password",required:"",density:"comfortable",variant:"outlined",color:"primary","hide-details":"auto","append-icon":i.value?"mdi-eye":"mdi-eye-off",type:i.value?"text":"password","onClick:append":l[4]||(l[4]=t=>i.value=!i.value),class:"pwdInput"},null,8,["modelValue","rules","append-icon","type"]),n("div",G,[e(A,{modelValue:c.value,"onUpdate:modelValue":l[5]||(l[5]=t=>c.value=t),rules:[t=>!!t||"You must agree to continue!"],label:"Agree with?",required:"",color:"primary",class:"ms-n2","hide-details":""},null,8,["modelValue","rules"]),L]),e(u,{color:"secondary",block:"",class:"mt-2",variant:"flat",size:"large",onClick:l[6]||(l[6]=t=>R())},{default:a(()=>[f("Sign Up")]),_:1})]),_:1},512),n("div",O,[e(_),e(u,{variant:"plain",to:"/auth/login",class:"mt-2 text-capitalize mr-n2"},{default:a(()=>[f("Already have an account?")]),_:1})])],64))}});const I={class:"pa-7 pa-sm-12"},J=n("h2",{class:"text-secondary text-h2 mt-8"},"Sign up",-1),Y=n("h4",{class:"text-disabled text-h4 mt-3"},"Enter credentials to 
continue",-1),M=y({__name:"RegisterPage",setup(w){return(c,i)=>(b(),T(r,{class:"h-100vh","no-gutters":""},{default:a(()=>[e(d,{cols:"12",class:"d-flex align-center bg-lightprimary"},{default:a(()=>[e(q,null,{default:a(()=>[n("div",I,[e(r,{justify:"center"},{default:a(()=>[e(d,{cols:"12",lg:"10",xl:"6",md:"7"},{default:a(()=>[e(V,{elevation:"0",class:"loginBox"},{default:a(()=>[e(V,{variant:"outlined"},{default:a(()=>[e(P,{class:"pa-9"},{default:a(()=>[e(r,null,{default:a(()=>[e(d,{cols:"12",class:"text-center"},{default:a(()=>[e(B),J,Y]),_:1})]),_:1}),e(j)]),_:1})]),_:1})]),_:1})]),_:1})]),_:1})])]),_:1})]),_:1})]),_:1}))}});export{M as default};
import{_ as B}from"./LogoDark.vue_vue_type_script_setup_true_lang-7df35c25.js";import{x as y,D as o,o as b,s as U,a as e,w as a,b as n,B as $,d as u,f as d,A as _,e as f,V as r,O as m,ap as A,au as E,F,c as T,N as q,J as V,L as P}from"./index-dc96e1be.js";const z="/assets/social-google-a359a253.svg",N=["src"],S=n("span",{class:"ml-2"},"Sign up with Google",-1),D=n("h5",{class:"text-h5 text-center my-4 mb-8"},"Sign up with Email address",-1),G={class:"d-sm-inline-flex align-center mt-2 mb-7 mb-sm-0 font-weight-bold"},L=n("a",{href:"#",class:"ml-1 text-lightText"},"Terms and Condition",-1),O={class:"mt-5 text-right"},j=y({__name:"AuthRegister",setup(w){const c=o(!1),i=o(!1),p=o(""),v=o(""),g=o(),h=o(""),x=o(""),k=o([s=>!!s||"Password is required",s=>s&&s.length<=10||"Password must be less than 10 characters"]),C=o([s=>!!s||"E-mail is required",s=>/.+@.+\..+/.test(s)||"E-mail must be valid"]);function R(){g.value.validate()}return(s,l)=>(b(),U(F,null,[e(u,{block:"",color:"primary",variant:"outlined",class:"text-lightText googleBtn"},{default:a(()=>[n("img",{src:$(z),alt:"google"},null,8,N),S]),_:1}),e(r,null,{default:a(()=>[e(d,{class:"d-flex align-center"},{default:a(()=>[e(_,{class:"custom-devider"}),e(u,{variant:"outlined",class:"orbtn",rounded:"md",size:"small"},{default:a(()=>[f("OR")]),_:1}),e(_,{class:"custom-devider"})]),_:1})]),_:1}),D,e(E,{ref_key:"Regform",ref:g,"lazy-validation":"",action:"/dashboards/analytical",class:"mt-7 
loginForm"},{default:a(()=>[e(r,null,{default:a(()=>[e(d,{cols:"12",sm:"6"},{default:a(()=>[e(m,{modelValue:h.value,"onUpdate:modelValue":l[0]||(l[0]=t=>h.value=t),density:"comfortable","hide-details":"auto",variant:"outlined",color:"primary",label:"Firstname"},null,8,["modelValue"])]),_:1}),e(d,{cols:"12",sm:"6"},{default:a(()=>[e(m,{modelValue:x.value,"onUpdate:modelValue":l[1]||(l[1]=t=>x.value=t),density:"comfortable","hide-details":"auto",variant:"outlined",color:"primary",label:"Lastname"},null,8,["modelValue"])]),_:1})]),_:1}),e(m,{modelValue:v.value,"onUpdate:modelValue":l[2]||(l[2]=t=>v.value=t),rules:C.value,label:"Email Address / Username",class:"mt-4 mb-4",required:"",density:"comfortable","hide-details":"auto",variant:"outlined",color:"primary"},null,8,["modelValue","rules"]),e(m,{modelValue:p.value,"onUpdate:modelValue":l[3]||(l[3]=t=>p.value=t),rules:k.value,label:"Password",required:"",density:"comfortable",variant:"outlined",color:"primary","hide-details":"auto","append-icon":i.value?"mdi-eye":"mdi-eye-off",type:i.value?"text":"password","onClick:append":l[4]||(l[4]=t=>i.value=!i.value),class:"pwdInput"},null,8,["modelValue","rules","append-icon","type"]),n("div",G,[e(A,{modelValue:c.value,"onUpdate:modelValue":l[5]||(l[5]=t=>c.value=t),rules:[t=>!!t||"You must agree to continue!"],label:"Agree with?",required:"",color:"primary",class:"ms-n2","hide-details":""},null,8,["modelValue","rules"]),L]),e(u,{color:"secondary",block:"",class:"mt-2",variant:"flat",size:"large",onClick:l[6]||(l[6]=t=>R())},{default:a(()=>[f("Sign Up")]),_:1})]),_:1},512),n("div",O,[e(_),e(u,{variant:"plain",to:"/auth/login",class:"mt-2 text-capitalize mr-n2"},{default:a(()=>[f("Already have an account?")]),_:1})])],64))}});const I={class:"pa-7 pa-sm-12"},J=n("h2",{class:"text-secondary text-h2 mt-8"},"Sign up",-1),Y=n("h4",{class:"text-disabled text-h4 mt-3"},"Enter credentials to 
continue",-1),M=y({__name:"RegisterPage",setup(w){return(c,i)=>(b(),T(r,{class:"h-100vh","no-gutters":""},{default:a(()=>[e(d,{cols:"12",class:"d-flex align-center bg-lightprimary"},{default:a(()=>[e(q,null,{default:a(()=>[n("div",I,[e(r,{justify:"center"},{default:a(()=>[e(d,{cols:"12",lg:"10",xl:"6",md:"7"},{default:a(()=>[e(V,{elevation:"0",class:"loginBox"},{default:a(()=>[e(V,{variant:"outlined"},{default:a(()=>[e(P,{class:"pa-9"},{default:a(()=>[e(r,null,{default:a(()=>[e(d,{cols:"12",class:"text-center"},{default:a(()=>[e(B),J,Y]),_:1})]),_:1}),e(j)]),_:1})]),_:1})]),_:1})]),_:1})]),_:1})])]),_:1})]),_:1})]),_:1}))}});export{M as default};

@@ -1 +1 @@
import{_ as c}from"./BaseBreadcrumb.vue_vue_type_style_index_0_lang-3bf6ea80.js";import{_ as f}from"./UiParentCard.vue_vue_type_script_setup_true_lang-73bcbbd5.js";import{x as m,D as s,o as l,s as r,a as e,w as a,f as i,V as o,F as d,u as _,J as p,X as b,b as h,t as g}from"./index-9075b0bb.js";const v=m({__name:"ShadowPage",setup(w){const n=s({title:"Shadow Page"}),u=s([{title:"Utilities",disabled:!1,href:"#"},{title:"Shadow",disabled:!0,href:"#"}]);return(V,x)=>(l(),r(d,null,[e(c,{title:n.value.title,breadcrumbs:u.value},null,8,["title","breadcrumbs"]),e(o,null,{default:a(()=>[e(i,{cols:"12",md:"12"},{default:a(()=>[e(f,{title:"Basic Shadow"},{default:a(()=>[e(o,{justify:"center"},{default:a(()=>[(l(),r(d,null,_(25,t=>e(i,{key:t,cols:"auto"},{default:a(()=>[e(p,{height:"100",width:"100",class:b(["mb-5",["d-flex justify-center align-center bg-primary",`elevation-${t}`]])},{default:a(()=>[h("div",null,g(t-1),1)]),_:2},1032,["class"])]),_:2},1024)),64))]),_:1})]),_:1})]),_:1})]),_:1})],64))}});export{v as default};
import{_ as c}from"./BaseBreadcrumb.vue_vue_type_style_index_0_lang-e31f96f8.js";import{_ as f}from"./UiParentCard.vue_vue_type_script_setup_true_lang-f2b2db58.js";import{x as m,D as s,o as l,s as r,a as e,w as a,f as i,V as o,F as d,u as _,J as p,X as b,b as h,t as g}from"./index-dc96e1be.js";const v=m({__name:"ShadowPage",setup(w){const n=s({title:"Shadow Page"}),u=s([{title:"Utilities",disabled:!1,href:"#"},{title:"Shadow",disabled:!0,href:"#"}]);return(V,x)=>(l(),r(d,null,[e(c,{title:n.value.title,breadcrumbs:u.value},null,8,["title","breadcrumbs"]),e(o,null,{default:a(()=>[e(i,{cols:"12",md:"12"},{default:a(()=>[e(f,{title:"Basic Shadow"},{default:a(()=>[e(o,{justify:"center"},{default:a(()=>[(l(),r(d,null,_(25,t=>e(i,{key:t,cols:"auto"},{default:a(()=>[e(p,{height:"100",width:"100",class:b(["mb-5",["d-flex justify-center align-center bg-primary",`elevation-${t}`]])},{default:a(()=>[h("div",null,g(t-1),1)]),_:2},1032,["class"])]),_:2},1024)),64))]),_:1})]),_:1})]),_:1})]),_:1})],64))}});export{v as default};

@@ -1 +1 @@
import{_ as o}from"./BaseBreadcrumb.vue_vue_type_style_index_0_lang-3bf6ea80.js";import{_ as n}from"./UiParentCard.vue_vue_type_script_setup_true_lang-73bcbbd5.js";import{x as c,D as a,o as i,s as m,a as e,w as t,f as d,b as f,V as _,F as u}from"./index-9075b0bb.js";const b=["innerHTML"],w=c({__name:"TablerIcons",setup(p){const s=a({title:"Tabler Icons"}),r=a('<iframe src="https://tablericons.com/" frameborder="0" width="100%" height="600"></iframe>'),l=a([{title:"Icons",disabled:!1,href:"#"},{title:"Tabler Icons",disabled:!0,href:"#"}]);return(h,T)=>(i(),m(u,null,[e(o,{title:s.value.title,breadcrumbs:l.value},null,8,["title","breadcrumbs"]),e(_,null,{default:t(()=>[e(d,{cols:"12",md:"12"},{default:t(()=>[e(n,{title:"Tabler Icons"},{default:t(()=>[f("div",{innerHTML:r.value},null,8,b)]),_:1})]),_:1})]),_:1})],64))}});export{w as default};
import{_ as o}from"./BaseBreadcrumb.vue_vue_type_style_index_0_lang-e31f96f8.js";import{_ as n}from"./UiParentCard.vue_vue_type_script_setup_true_lang-f2b2db58.js";import{x as c,D as a,o as i,s as m,a as e,w as t,f as d,b as f,V as _,F as u}from"./index-dc96e1be.js";const b=["innerHTML"],w=c({__name:"TablerIcons",setup(p){const s=a({title:"Tabler Icons"}),r=a('<iframe src="https://tablericons.com/" frameborder="0" width="100%" height="600"></iframe>'),l=a([{title:"Icons",disabled:!1,href:"#"},{title:"Tabler Icons",disabled:!0,href:"#"}]);return(h,T)=>(i(),m(u,null,[e(o,{title:s.value.title,breadcrumbs:l.value},null,8,["title","breadcrumbs"]),e(_,null,{default:t(()=>[e(d,{cols:"12",md:"12"},{default:t(()=>[e(n,{title:"Tabler Icons"},{default:t(()=>[f("div",{innerHTML:r.value},null,8,b)]),_:1})]),_:1})]),_:1})],64))}});export{w as default};

@@ -1 +1 @@
import{_ as m}from"./BaseBreadcrumb.vue_vue_type_style_index_0_lang-3bf6ea80.js";import{_ as v}from"./UiParentCard.vue_vue_type_script_setup_true_lang-73bcbbd5.js";import{x as f,o as i,c as g,w as e,a,a8 as y,K as b,e as w,t as d,A as C,L as V,a9 as L,J as _,D as o,s as h,f as k,b as t,F as x,u as B,X as H,V as T}from"./index-9075b0bb.js";const s=f({__name:"UiChildCard",props:{title:String},setup(r){const l=r;return(n,c)=>(i(),g(_,{variant:"outlined"},{default:e(()=>[a(y,{class:"py-3"},{default:e(()=>[a(b,{class:"text-h5"},{default:e(()=>[w(d(l.title),1)]),_:1})]),_:1}),a(C),a(V,null,{default:e(()=>[L(n.$slots,"default")]),_:3})]),_:3}))}}),D={class:"d-flex flex-column gap-1"},S={class:"text-caption pa-2 bg-lightprimary"},z=t("div",{class:"text-grey"},"Class",-1),N={class:"font-weight-medium"},$=t("div",null,[t("p",{class:"text-left"},"Left aligned on all viewport sizes."),t("p",{class:"text-center"},"Center aligned on all viewport sizes."),t("p",{class:"text-right"},"Right aligned on all viewport sizes."),t("p",{class:"text-sm-left"},"Left aligned on viewports SM (small) or wider."),t("p",{class:"text-right text-md-left"},"Left aligned on viewports MD (medium) or wider."),t("p",{class:"text-right text-lg-left"},"Left aligned on viewports LG (large) or wider."),t("p",{class:"text-right text-xl-left"},"Left aligned on viewports XL (extra-large) or wider.")],-1),M=t("div",{class:"d-flex justify-space-between flex-row"},[t("a",{href:"#",class:"text-decoration-none"},"Non-underlined link"),t("div",{class:"text-decoration-line-through"},"Line-through text"),t("div",{class:"text-decoration-overline"},"Overline text"),t("div",{class:"text-decoration-underline"},"Underline text")],-1),O=t("div",null,[t("p",{class:"text-high-emphasis"},"High-emphasis has an opacity of 87% in light theme and 100% in dark."),t("p",{class:"text-medium-emphasis"},"Medium-emphasis text and hint text have opacities of 60% in light theme and 70% in dark."),t("p",{class:"text-disabled"},"Disabled 
text has an opacity of 38% in light theme and 50% in dark.")],-1),j=f({__name:"TypographyPage",setup(r){const l=o({title:"Typography Page"}),n=o([["Heading 1","text-h1"],["Heading 2","text-h2"],["Heading 3","text-h3"],["Heading 4","text-h4"],["Heading 5","text-h5"],["Heading 6","text-h6"],["Subtitle 1","text-subtitle-1"],["Subtitle 2","text-subtitle-2"],["Body 1","text-body-1"],["Body 2","text-body-2"],["Button","text-button"],["Caption","text-caption"],["Overline","text-overline"]]),c=o([{title:"Utilities",disabled:!1,href:"#"},{title:"Typography",disabled:!0,href:"#"}]);return(U,F)=>(i(),h(x,null,[a(m,{title:l.value.title,breadcrumbs:c.value},null,8,["title","breadcrumbs"]),a(T,null,{default:e(()=>[a(k,{cols:"12",md:"12"},{default:e(()=>[a(v,{title:"Basic Typography"},{default:e(()=>[a(s,{title:"Heading"},{default:e(()=>[t("div",D,[(i(!0),h(x,null,B(n.value,([p,u])=>(i(),g(_,{variant:"outlined",key:p,class:"my-4"},{default:e(()=>[t("div",{class:H([u,"pa-2"])},d(p),3),t("div",S,[z,t("div",N,d(u),1)])]),_:2},1024))),128))])]),_:1}),a(s,{title:"Text-alignment",class:"mt-8"},{default:e(()=>[$]),_:1}),a(s,{title:"Decoration",class:"mt-8"},{default:e(()=>[M]),_:1}),a(s,{title:"Opacity",class:"mt-8"},{default:e(()=>[O]),_:1})]),_:1})]),_:1})]),_:1})],64))}});export{j as default};
import{_ as m}from"./BaseBreadcrumb.vue_vue_type_style_index_0_lang-e31f96f8.js";import{_ as v}from"./UiParentCard.vue_vue_type_script_setup_true_lang-f2b2db58.js";import{x as f,o as i,c as g,w as e,a,a8 as y,K as b,e as w,t as d,A as C,L as V,a9 as L,J as _,D as o,s as h,f as k,b as t,F as x,u as B,X as H,V as T}from"./index-dc96e1be.js";const s=f({__name:"UiChildCard",props:{title:String},setup(r){const l=r;return(n,c)=>(i(),g(_,{variant:"outlined"},{default:e(()=>[a(y,{class:"py-3"},{default:e(()=>[a(b,{class:"text-h5"},{default:e(()=>[w(d(l.title),1)]),_:1})]),_:1}),a(C),a(V,null,{default:e(()=>[L(n.$slots,"default")]),_:3})]),_:3}))}}),D={class:"d-flex flex-column gap-1"},S={class:"text-caption pa-2 bg-lightprimary"},z=t("div",{class:"text-grey"},"Class",-1),N={class:"font-weight-medium"},$=t("div",null,[t("p",{class:"text-left"},"Left aligned on all viewport sizes."),t("p",{class:"text-center"},"Center aligned on all viewport sizes."),t("p",{class:"text-right"},"Right aligned on all viewport sizes."),t("p",{class:"text-sm-left"},"Left aligned on viewports SM (small) or wider."),t("p",{class:"text-right text-md-left"},"Left aligned on viewports MD (medium) or wider."),t("p",{class:"text-right text-lg-left"},"Left aligned on viewports LG (large) or wider."),t("p",{class:"text-right text-xl-left"},"Left aligned on viewports XL (extra-large) or wider.")],-1),M=t("div",{class:"d-flex justify-space-between flex-row"},[t("a",{href:"#",class:"text-decoration-none"},"Non-underlined link"),t("div",{class:"text-decoration-line-through"},"Line-through text"),t("div",{class:"text-decoration-overline"},"Overline text"),t("div",{class:"text-decoration-underline"},"Underline text")],-1),O=t("div",null,[t("p",{class:"text-high-emphasis"},"High-emphasis has an opacity of 87% in light theme and 100% in dark."),t("p",{class:"text-medium-emphasis"},"Medium-emphasis text and hint text have opacities of 60% in light theme and 70% in dark."),t("p",{class:"text-disabled"},"Disabled 
text has an opacity of 38% in light theme and 50% in dark.")],-1),j=f({__name:"TypographyPage",setup(r){const l=o({title:"Typography Page"}),n=o([["Heading 1","text-h1"],["Heading 2","text-h2"],["Heading 3","text-h3"],["Heading 4","text-h4"],["Heading 5","text-h5"],["Heading 6","text-h6"],["Subtitle 1","text-subtitle-1"],["Subtitle 2","text-subtitle-2"],["Body 1","text-body-1"],["Body 2","text-body-2"],["Button","text-button"],["Caption","text-caption"],["Overline","text-overline"]]),c=o([{title:"Utilities",disabled:!1,href:"#"},{title:"Typography",disabled:!0,href:"#"}]);return(U,F)=>(i(),h(x,null,[a(m,{title:l.value.title,breadcrumbs:c.value},null,8,["title","breadcrumbs"]),a(T,null,{default:e(()=>[a(k,{cols:"12",md:"12"},{default:e(()=>[a(v,{title:"Basic Typography"},{default:e(()=>[a(s,{title:"Heading"},{default:e(()=>[t("div",D,[(i(!0),h(x,null,B(n.value,([p,u])=>(i(),g(_,{variant:"outlined",key:p,class:"my-4"},{default:e(()=>[t("div",{class:H([u,"pa-2"])},d(p),3),t("div",S,[z,t("div",N,d(u),1)])]),_:2},1024))),128))])]),_:1}),a(s,{title:"Text-alignment",class:"mt-8"},{default:e(()=>[$]),_:1}),a(s,{title:"Decoration",class:"mt-8"},{default:e(()=>[M]),_:1}),a(s,{title:"Opacity",class:"mt-8"},{default:e(()=>[O]),_:1})]),_:1})]),_:1})]),_:1})],64))}});export{j as default};

@@ -1 +1 @@
import{x as n,o,c as i,w as e,a,a8 as d,b as c,K as u,e as p,t as _,a9 as s,A as f,L as V,J as m}from"./index-9075b0bb.js";const C={class:"d-sm-flex align-center justify-space-between"},h=n({__name:"UiParentCard",props:{title:String},setup(l){const r=l;return(t,x)=>(o(),i(m,{variant:"outlined",elevation:"0",class:"withbg"},{default:e(()=>[a(d,null,{default:e(()=>[c("div",C,[a(u,null,{default:e(()=>[p(_(r.title),1)]),_:1}),s(t.$slots,"action")])]),_:3}),a(f),a(V,null,{default:e(()=>[s(t.$slots,"default")]),_:3})]),_:3}))}});export{h as _};
import{x as n,o,c as i,w as e,a,a8 as d,b as c,K as u,e as p,t as _,a9 as s,A as f,L as V,J as m}from"./index-dc96e1be.js";const C={class:"d-sm-flex align-center justify-space-between"},h=n({__name:"UiParentCard",props:{title:String},setup(l){const r=l;return(t,x)=>(o(),i(m,{variant:"outlined",elevation:"0",class:"withbg"},{default:e(()=>[a(d,null,{default:e(()=>[c("div",C,[a(u,null,{default:e(()=>[p(_(r.title),1)]),_:1}),s(t.$slots,"action")])]),_:3}),a(f),a(V,null,{default:e(()=>[s(t.$slots,"default")]),_:3})]),_:3}))}});export{h as _};

File diff suppressed because one or more lines are too long

@@ -1,4 +1,4 @@
import{ar as K,as as Y,at as V}from"./index-9075b0bb.js";var C={exports:{}};const $={},k=Object.freeze(Object.defineProperty({__proto__:null,default:$},Symbol.toStringTag,{value:"Module"})),z=K(k);/**
import{ar as K,as as Y,at as V}from"./index-dc96e1be.js";var C={exports:{}};const $={},k=Object.freeze(Object.defineProperty({__proto__:null,default:$},Symbol.toStringTag,{value:"Module"})),z=K(k);/**
* [js-md5]{@link https://github.com/emn178/js-md5}
*
* @namespace md5

@@ -11,7 +11,7 @@
href="https://fonts.googleapis.com/css2?family=Inter:wght@400;500;600;700&family=Poppins:wght@400;500;600;700&family=Roboto:wght@400;500;700&display=swap"
/>
<title>AstrBot - 仪表盘</title>
<script type="module" crossorigin src="/assets/index-9075b0bb.js"></script>
<script type="module" crossorigin src="/assets/index-dc96e1be.js"></script>
<link rel="stylesheet" href="/assets/index-0f1523f3.css">
</head>
<body>

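The only change in the `index.html` hunk above is the hashed bundle filename (`index-9075b0bb.js` → `index-dc96e1be.js`): the bundler derives the suffix from file contents so browsers re-fetch an asset whenever it changes. A minimal sketch of content-hash naming (the helper name is illustrative, not the bundler's actual code):

```python
import hashlib

def hashed_asset_name(stem: str, ext: str, content: bytes) -> str:
    # Derive an 8-hex-digit suffix from the file contents, mimicking
    # names like "index-9075b0bb.js". Hypothetical helper for illustration.
    digest = hashlib.sha256(content).hexdigest()[:8]
    return f"{stem}-{digest}{ext}"

old = hashed_asset_name("index", ".js", b"console.log('v1')")
new = hashed_asset_name("index", ".js", b"console.log('v2')")
assert old != new  # any content change yields a new URL, busting stale caches
```

The same input always produces the same name, so unchanged assets keep their cached URL across deploys.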
@@ -11,6 +11,11 @@ import threading
import time
import asyncio
from util.plugin_dev.api.v1.config import update_config
from SparkleLogging.utils.core import LogManager
from logging import Logger
logger: Logger = LogManager.GetLogger(log_name='astrbot-core')
@dataclass
class DashBoardConfig():
@@ -22,11 +27,11 @@ class DashBoardConfig():
value: Optional[Union[list, dict, str, int, bool]] = None # 仅 item 才需要
val_type: Optional[str] = None # 仅 item 才需要
class DashBoardHelper():
def __init__(self, global_object, config: dict):
self.loop = asyncio.new_event_loop()
asyncio.set_event_loop(self.loop)
self.logger = global_object.logger
dashboard_data = global_object.dashboard_data
dashboard_data.configs = {
"data": []
@@ -40,15 +45,17 @@ class DashBoardHelper():
@self.dashboard.register("post_configs")
def on_post_configs(post_configs: dict):
try:
# self.logger.log(f"收到配置更新请求", gu.LEVEL_INFO, tag="可视化面板")
if 'base_config' in post_configs:
self.save_config(post_configs['base_config'], namespace='') # 基础配置
self.save_config(post_configs['config'], namespace=post_configs['namespace']) # 选定配置
self.parse_default_config(self.dashboard_data, self.cc.get_all())
self.save_config(
post_configs['base_config'], namespace='') # 基础配置
self.save_config(
post_configs['config'], namespace=post_configs['namespace']) # 选定配置
self.parse_default_config(
self.dashboard_data, self.cc.get_all())
# 重启
threading.Thread(target=self.dashboard.shutdown_bot, args=(2,), daemon=True).start()
threading.Thread(target=self.dashboard.shutdown_bot,
args=(2,), daemon=True).start()
except Exception as e:
# self.logger.log(f"在保存配置时发生错误:{e}", gu.LEVEL_ERROR, tag="可视化面板")
raise e
# 将 config.yaml、 中的配置解析到 dashboard_data.configs 中
@@ -100,18 +107,26 @@ class DashBoardHelper():
value=config['direct_message_mode'],
path="direct_message_mode",
),
DashBoardConfig(
config_type="item",
val_type="bool",
name="是否接收QQ群消息",
description="需要机器人有相应的群消息接收权限。在 q.qq.com 上查看。",
value=config['qqofficial_enable_group_message'],
path="qqofficial_enable_group_message",
),
]
)
qq_gocq_platform_group = DashBoardConfig(
config_type="group",
name="OneBot协议平台配置",
name="go-cqhttp",
description="",
body=[
DashBoardConfig(
config_type="item",
val_type="bool",
name="启用",
description="支持cq-http、shamrock等目前仅支持QQ平台",
description="",
value=config['gocqbot']['enable'],
path="gocqbot.enable",
),
@@ -374,47 +389,6 @@ class DashBoardHelper():
]
)
rev_chatgpt_accounts = config['rev_ChatGPT']['account']
new_accs = []
for i in rev_chatgpt_accounts:
if isinstance(i, dict) and 'access_token' in i:
new_accs.append(i['access_token'])
elif isinstance(i, str):
new_accs.append(i)
config['rev_ChatGPT']['account'] = new_accs
rev_chatgpt_llm_group = DashBoardConfig(
config_type="group",
name="逆向语言模型服务设置",
description="",
body=[
DashBoardConfig(
config_type="item",
val_type="bool",
name="启用逆向语言模型服务",
description="",
value=config['rev_ChatGPT']['enable'],
path="rev_ChatGPT.enable",
),
DashBoardConfig(
config_type="item",
val_type="str",
name="终结点Endpoint地址",
description="逆向服务的终结点服务器的地址。",
value=config['CHATGPT_BASE_URL'],
path="CHATGPT_BASE_URL",
),
DashBoardConfig(
config_type="item",
val_type="list",
name="assess_token",
description="assess_token",
value=config['rev_ChatGPT']['account'],
path="rev_ChatGPT.account",
),
]
)
baidu_aip_group = DashBoardConfig(
config_type="group",
name="百度内容审核",
@@ -428,9 +402,6 @@ class DashBoardHelper():
value=config['baidu_aip']['enable'],
path="baidu_aip.enable"
),
# "app_id": null,
# "api_key": null,
# "secret_key": null
DashBoardConfig(
config_type="item",
val_type="str",
@@ -495,16 +466,14 @@ class DashBoardHelper():
qq_gocq_platform_group,
general_platform_detail_group,
openai_official_llm_group,
rev_chatgpt_llm_group,
other_group,
baidu_aip_group
]
except Exception as e:
self.logger.log(f"配置文件解析错误:{e}", gu.LEVEL_ERROR)
logger.error(f"配置文件解析错误:{e}")
raise e
def save_config(self, post_config: list, namespace: str):
'''
根据 path 解析并保存配置
@@ -525,17 +494,21 @@ class DashBoardHelper():
continue
if config['val_type'] == "bool":
self._write_config(namespace, config['path'], config['value'])
self._write_config(
namespace, config['path'], config['value'])
elif config['val_type'] == "str":
self._write_config(namespace, config['path'], config['value'])
self._write_config(
namespace, config['path'], config['value'])
elif config['val_type'] == "int":
try:
self._write_config(namespace, config['path'], int(config['value']))
self._write_config(
namespace, config['path'], int(config['value']))
except:
raise ValueError(f"配置项 {config['name']} 的值必须是整数")
elif config['val_type'] == "float":
try:
self._write_config(namespace, config['path'], float(config['value']))
self._write_config(
namespace, config['path'], float(config['value']))
except:
raise ValueError(f"配置项 {config['name']} 的值必须是浮点数")
elif config['val_type'] == "list":
@@ -543,9 +516,11 @@ class DashBoardHelper():
self._write_config(namespace, config['path'], [])
elif not isinstance(config['value'], list):
raise ValueError(f"配置项 {config['name']} 的值必须是列表")
self._write_config(namespace, config['path'], config['value'])
self._write_config(
namespace, config['path'], config['value'])
else:
raise NotImplementedError(f"未知或者未实现的配置项类型:{config['val_type']}")
raise NotImplementedError(
f"未知或者未实现的配置项类型:{config['val_type']}")
def _write_config(self, namespace: str, key: str, value):
if namespace == "" or namespace.startswith("internal_"):

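The `save_config` hunks above mostly re-wrap long `_write_config` calls, but the underlying pattern they preserve — dispatch on a declared `val_type` and validate or coerce the posted value before writing — can be sketched in isolation (a simplified stand-in, not the project's actual `_write_config`):

```python
def coerce_value(val_type: str, value):
    # Mirror the val_type dispatch in save_config: validate/convert the
    # posted value before it would be written to the config store.
    if val_type in ("bool", "str"):
        return value
    if val_type == "int":
        try:
            return int(value)
        except (TypeError, ValueError):
            raise ValueError("配置项的值必须是整数")
    if val_type == "float":
        try:
            return float(value)
        except (TypeError, ValueError):
            raise ValueError("配置项的值必须是浮点数")
    if val_type == "list":
        if value is None:
            return []
        if not isinstance(value, list):
            raise ValueError("配置项的值必须是列表")
        return value
    raise NotImplementedError(f"未知或者未实现的配置项类型:{val_type}")

assert coerce_value("int", "42") == 42
assert coerce_value("list", None) == []
```

Centralizing the coercion like this keeps each branch's error message tied to the declared type, which is what lets the dashboard surface a precise validation error per config item.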
@@ -1,21 +1,26 @@
from flask import Flask, request
from flask.logging import default_handler
from werkzeug.serving import make_server
from util import general_utils as gu
from dataclasses import dataclass
import logging
from cores.database.conn import dbConn
from util.cmd_config import CmdConfig
from util.updator import check_update, update_project, request_release_info
from cores.qqbot.types import *
import util.plugin_util as putil
import websockets
import json
import threading
import asyncio
import os, sys
import os
import sys
import time
from flask import Flask, request
from flask.logging import default_handler
from werkzeug.serving import make_server
from util import general_utils as gu
from dataclasses import dataclass
from persist.session import dbConn
from type.register import RegisteredPlugin
from typing import List
from util.cmd_config import CmdConfig
from util.updator import check_update, update_project, request_release_info, _reboot
from SparkleLogging.utils.core import LogManager
from logging import Logger
logger: Logger = LogManager.GetLogger(log_name='astrbot-core')
@dataclass
class DashBoardData():
stats: dict
@@ -23,24 +28,24 @@ class DashBoardData():
logs: dict
plugins: List[RegisteredPlugin]
@dataclass
class Response():
status: str
message: str
data: dict
class AstrBotDashBoard():
def __init__(self, global_object: 'gu.GlobalObject'):
self.global_object = global_object
self.loop = asyncio.get_event_loop()
asyncio.set_event_loop(self.loop)
self.dashboard_data: DashBoardData = global_object.dashboard_data
self.dashboard_be = Flask(__name__, static_folder="dist", static_url_path="/")
log = logging.getLogger('werkzeug')
log.setLevel(logging.ERROR)
self.dashboard_be = Flask(
__name__, static_folder="dist", static_url_path="/")
self.funcs = {}
self.cc = CmdConfig()
self.logger = global_object.logger
self.ws_clients = {} # remote_ip: ws
# 启动 websocket 服务器
self.ws_server = websockets.serve(self.__handle_msg, "0.0.0.0", 6186)
@@ -50,6 +55,22 @@ class AstrBotDashBoard():
# 返回页面
return self.dashboard_be.send_static_file("index.html")
@self.dashboard_be.get("/config")
def rt_config():
return self.dashboard_be.send_static_file("index.html")
@self.dashboard_be.get("/logs")
def rt_logs():
return self.dashboard_be.send_static_file("index.html")
@self.dashboard_be.get("/extension")
def rt_extension():
return self.dashboard_be.send_static_file("index.html")
@self.dashboard_be.get("/dashboard/default")
def rt_dashboard():
return self.dashboard_be.send_static_file("index.html")
@self.dashboard_be.post("/api/authenticate")
def authenticate():
username = self.cc.get("dashboard_username", "")
@@ -99,9 +120,11 @@ class AstrBotDashBoard():
# last_24_platform = db_inst.get_last_24h_stat_platform()
platforms = db_inst.get_platform_cnt_total()
self.dashboard_data.stats["session"] = []
self.dashboard_data.stats["session_total"] = db_inst.get_session_cnt_total()
self.dashboard_data.stats["session_total"] = db_inst.get_session_cnt_total(
)
self.dashboard_data.stats["message"] = last_24_message
self.dashboard_data.stats["message_total"] = db_inst.get_message_cnt_total()
self.dashboard_data.stats["message_total"] = db_inst.get_message_cnt_total(
)
self.dashboard_data.stats["platform"] = platforms
return Response(
@@ -172,9 +195,9 @@ class AstrBotDashBoard():
post_data = request.json
repo_url = post_data["url"]
try:
self.logger.log(f"正在安装插件 {repo_url}", tag="可视化面板")
putil.install_plugin(repo_url, self.dashboard_data.plugins)
self.logger.log(f"安装插件 {repo_url} 成功", tag="可视化面板")
logger.info(f"正在安装插件 {repo_url}")
putil.install_plugin(repo_url, global_object)
logger.info(f"安装插件 {repo_url} 成功")
return Response(
status="success",
message="安装成功~",
@@ -192,9 +215,10 @@ class AstrBotDashBoard():
post_data = request.json
plugin_name = post_data["name"]
try:
self.logger.log(f"正在卸载插件 {plugin_name}", tag="可视化面板")
putil.uninstall_plugin(plugin_name, self.dashboard_data.plugins)
self.logger.log(f"卸载插件 {plugin_name} 成功", tag="可视化面板")
logger.info(f"正在卸载插件 {plugin_name}")
putil.uninstall_plugin(
plugin_name, self.dashboard_data.plugins)
logger.info(f"卸载插件 {plugin_name} 成功")
return Response(
status="success",
message="卸载成功~",
@@ -212,9 +236,9 @@ class AstrBotDashBoard():
post_data = request.json
plugin_name = post_data["name"]
try:
self.logger.log(f"正在更新插件 {plugin_name}", tag="可视化面板")
putil.update_plugin(plugin_name, self.dashboard_data.plugins)
self.logger.log(f"更新插件 {plugin_name} 成功", tag="可视化面板")
logger.info(f"正在更新插件 {plugin_name}")
putil.update_plugin(plugin_name, global_object)
logger.info(f"更新插件 {plugin_name} 成功")
return Response(
status="success",
message="更新成功~",
@@ -231,7 +255,8 @@ class AstrBotDashBoard():
def log():
for item in self.ws_clients:
try:
asyncio.run_coroutine_threadsafe(self.ws_clients[item].send(request.data.decode()), self.loop)
asyncio.run_coroutine_threadsafe(
self.ws_clients[item].send(request.data.decode()), self.loop)
except Exception as e:
pass
return 'ok'
@@ -262,9 +287,9 @@ class AstrBotDashBoard():
version = ''
else:
latest = False
version = request.json["version"]
try:
update_project(request_release_info(), latest=latest, version=version)
update_project(request_release_info(latest),
latest=latest, version=version)
threading.Thread(target=self.shutdown_bot, args=(3,)).start()
return Response(
status="success",
@@ -278,15 +303,53 @@ class AstrBotDashBoard():
data=None
).__dict__
@self.dashboard_be.get("/api/llm/list")
def llm_list():
ret = []
for llm in self.global_object.llms:
ret.append(llm.llm_name)
return Response(
status="success",
message="",
data=ret
).__dict__
@self.dashboard_be.get("/api/llm")
def llm():
text = request.args["text"]
llm = request.args["llm"]
for llm_ in self.global_object.llms:
if llm_.llm_name == llm:
try:
# ret = await llm_.llm_instance.text_chat(text)
ret = asyncio.run_coroutine_threadsafe(
llm_.llm_instance.text_chat(text), self.loop).result()
return Response(
status="success",
message="",
data=ret
).__dict__
except Exception as e:
return Response(
status="error",
message=e.__str__(),
data=None
).__dict__
return Response(
status="error",
message="LLM not found.",
data=None
).__dict__
def shutdown_bot(self, delay_s: int):
time.sleep(delay_s)
py = sys.executable
os.execl(py, py, *sys.argv)
_reboot()
def _get_configs(self, namespace: str):
if namespace == "":
ret = [self.dashboard_data.configs['data'][5],
self.dashboard_data.configs['data'][6],]
ret = [self.dashboard_data.configs['data'][4],
self.dashboard_data.configs['data'][5],]
elif namespace == "internal_platform_qq_official":
ret = [self.dashboard_data.configs['data'][0],]
elif namespace == "internal_platform_qq_gocq":
@@ -295,8 +358,6 @@ class AstrBotDashBoard():
ret = [self.dashboard_data.configs['data'][2],]
elif namespace == "internal_llm_openai_official":
ret = [self.dashboard_data.configs['data'][3],]
elif namespace == "internal_llm_rev_chatgpt":
ret = [self.dashboard_data.configs['data'][4],]
else:
path = f"data/config/{namespace}.json"
if not os.path.exists(path):
@@ -328,13 +389,13 @@ class AstrBotDashBoard():
},
{
"title": "QQ_OFFICIAL",
"desc": "QQ官方API,仅支持频道",
"desc": "QQ官方API支持频道、群(需获得群权限)",
"namespace": "internal_platform_qq_official",
"tag": ""
},
{
"title": "OneBot协议",
"desc": "支持cq-http、shamrock等目前仅支持QQ平台",
"title": "go-cqhttp",
"desc": "第三方 QQ 协议实现。支持频道、群",
"namespace": "internal_platform_qq_gocq",
"tag": ""
}
@@ -349,12 +410,6 @@ class AstrBotDashBoard():
"desc": "也支持使用官方接口的中转服务",
"namespace": "internal_llm_openai_official",
"tag": ""
},
{
"title": "Rev ChatGPT",
"desc": "早期的逆向ChatGPT不推荐",
"namespace": "internal_llm_rev_chatgpt",
"tag": ""
}
]
}
@@ -376,21 +431,29 @@ class AstrBotDashBoard():
return func
return decorator
async def get_log_history(self):
try:
with open("logs/astrbot-core/astrbot-core.log", "r", encoding="utf-8") as f:
return f.readlines()[-100:]
except Exception as e:
logger.warning(f"读取日志历史失败: {e.__str__()}")
return []
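`get_log_history` is a plain tail read: open the log file, keep the last 100 lines, and fall back to an empty list when the file is missing. The same behaviour in isolation (the file path is illustrative):

```python
import os
import tempfile

# Tail a log file: return the last n lines, or [] if it cannot be read.
def tail_lines(path: str, n: int = 100) -> list:
    try:
        with open(path, "r", encoding="utf-8") as f:
            return f.readlines()[-n:]
    except OSError:
        return []

path = os.path.join(tempfile.gettempdir(), "demo-astrbot.log")
with open(path, "w", encoding="utf-8") as f:
    f.writelines(f"line {i}\n" for i in range(250))
print(len(tail_lines(path, 100)))  # 100
```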
async def __handle_msg(self, websocket, path):
address = websocket.remote_address
# self.logger.log(f"和 {address} 建立了 websocket 连接", tag="可视化面板")
self.ws_clients[address] = websocket
data = ''.join(self.logger.history).replace('\n', '\r\n')
data = await self.get_log_history()
data = ''.join(data).replace('\n', '\r\n')
await websocket.send(data)
while True:
try:
msg = await websocket.recv()
except websockets.exceptions.ConnectionClosedError:
# self.logger.log(f"和 {address} 的 websocket 连接已断开", tag="可视化面板")
# logger.info(f"和 {address} 的 websocket 连接已断开")
del self.ws_clients[address]
break
except Exception as e:
# self.logger.log(f"和 {path} 的 websocket 连接发生了错误: {e.__str__()}", tag="可视化面板")
# logger.info(f"和 {path} 的 websocket 连接发生了错误: {e.__str__()}")
del self.ws_clients[address]
break
@@ -401,10 +464,12 @@ class AstrBotDashBoard():
def run(self):
threading.Thread(target=self.run_ws_server, args=(self.loop,)).start()
self.logger.log("已启动 websocket 服务器", tag="可视化面板")
logger.info("已启动 websocket 服务器")
ip_address = gu.get_local_ip_addresses()
ip_str = f"http://{ip_address}:6185\n\thttp://localhost:6185"
self.logger.log(f"\n==================\n您可访问:\n\n\t{ip_str}\n\n来登录可视化面板,默认账号密码为空。\n注意: 所有配置项现已全量迁移至 cmd_config.json 文件下,可登录可视化面板在线修改配置。\n==================\n", tag="可视化面板")
http_server = make_server('0.0.0.0', 6185, self.dashboard_be, threaded=True)
http_server.serve_forever()
logger.info(
f"\n==================\n您可访问:\n\n\t{ip_str}\n\n来登录可视化面板,默认账号密码为空。\n注意: 所有配置项现已全量迁移至 cmd_config.json 文件下,可登录可视化面板在线修改配置。\n==================\n")
http_server = make_server(
'0.0.0.0', 6185, self.dashboard_be, threaded=True)
http_server.serve_forever()


@@ -1,34 +1,29 @@
import os
import shutil
from nakuru.entities.components import *
from nakuru import (
GroupMessage,
FriendMessage
)
from botpy.message import Message, DirectMessage
flag_not_support = False
try:
from util.plugin_dev.api.v1.config import *
from util.plugin_dev.api.v1.bot import (
PluginMetadata,
PluginType,
AstrMessageEvent,
CommandResult,
)
from util.plugin_dev.api.v1.register import register_llm, unregister_llm
except ImportError:
flag_not_support = True
print("llms: 导入接口失败。请升级到 AstrBot 最新版本。")
print("导入接口失败。请升级到 AstrBot 最新版本。")
'''
Note: remember to rename the plugin class! The expected format is XXXPlugin or Main.
Tip: after forking this template repository, clone it into the bot's addons/plugins/ directory, then open it with PyCharm/VS Code or similar for a better editing experience (autocompletion, etc.).
'''
class HelloWorldPlugin:
"""
Initializer; it is fine to simply pass here.
"""
def __init__(self) -> None:
# Copy the legacy config file into the data directory.
if os.path.exists("keyword.json"):
@@ -45,6 +40,7 @@ class HelloWorldPlugin:
Tuple: None, or a 3-tuple. Return None to ignore the message. When responding, element 1 is whether the command succeeded, element 2 is the message chain list to reply with, and element 3 is the command name.
Example: for a plugin named "yuanshen" that receives the message "原神 可莉": return False, None to skip the message; return True, tuple([False, "请求失败。", "yuanshen"]) if handling failed; return True, tuple([True, "结果文本", "yuanshen"]) on success.
"""
def run(self, ame: AstrMessageEvent):
if ame.message_str == "helloworld":
return CommandResult(
@@ -57,7 +53,8 @@ class HelloWorldPlugin:
return self.handle_keyword_command(ame)
ret = self.check_keyword(ame.message_str)
if ret: return ret
if ret:
return ret
return CommandResult(
hit=False,
@@ -118,8 +115,8 @@ keyword d hi
return command_result
def save_keyword(self):
json.dump(self.keywords, open("data/keyword.json", "w"), ensure_ascii=False)
json.dump(self.keywords, open(
"data/keyword.json", "w"), ensure_ascii=False)
def check_keyword(self, message_str: str):
for k in self.keywords:
@@ -160,6 +157,7 @@ keyword d hi
"homepage": str, # 插件主页 [ 可选 ]
}
"""
def info(self):
return {
"name": "helloworld",


@@ -2,38 +2,35 @@ import re
import threading
import asyncio
import time
import aiohttp
import util.unfit_words as uw
import os
import sys
import io
import traceback
import util.function_calling.gplugin as gplugin
import util.agent.web_searcher as web_searcher
import util.plugin_util as putil
from PIL import Image as PILImage
from typing import Union
from nakuru import (
GroupMessage,
FriendMessage,
GuildMessage,
)
from nakuru.entities.components import Plain, At, Image
from addons.baidu_aip_judge import BaiduJudge
from model.provider.provider import Provider
from model.command.command import Command
from util import general_utils as gu
from util.general_utils import Logger, upload, run_monitor
from util.general_utils import upload, run_monitor
from util.cmd_config import CmdConfig as cc
from util.cmd_config import init_astrbot_config_items
from .types import *
from type.types import GlobalObject
from type.register import *
from type.message import AstrBotMessage
from type.config import *
from addons.dashboard.helper import DashBoardHelper
from addons.dashboard.server import DashBoardData
from cores.database.conn import dbConn
from persist.session import dbConn
from model.platform._message_result import MessageResult
from SparkleLogging.utils.core import LogManager
from logging import Logger
logger: Logger = LogManager.GetLogger(log_name='astrbot-core')
# Per-user message frequency tracking
user_frequency = {}
@@ -42,11 +39,7 @@ frequency_time = 60
# Default count limit
frequency_count = 10
# 版本
version = '3.1.10'
# LLM provider identifiers
REV_CHATGPT = 'rev_chatgpt'
OPENAI_OFFICIAL = 'openai_official'
NONE_LLM = 'none_llm'
chosen_provider = None
@@ -61,34 +54,26 @@ baidu_judge = None
# CLI
PLATFORM_CLI = 'cli'
init_astrbot_config_items()
# Global object
_global_object: GlobalObject = None
logger: Logger = Logger()
# LLM provider selection
def privider_chooser(cfg):
l = []
if 'rev_ChatGPT' in cfg and cfg['rev_ChatGPT']['enable']:
l.append('rev_chatgpt')
if 'openai' in cfg and len(cfg['openai']['key']) > 0 and cfg['openai']['key'][0] is not None:
l.append('openai_official')
return l
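`privider_chooser` (the typo is in the source) inspects the config dict and returns identifiers for the providers that are enabled and configured. An equivalent standalone sketch using `dict.get` instead of nested key checks:

```python
# Standalone re-implementation of the provider-selection logic, for illustration.
def provider_chooser(cfg: dict) -> list:
    enabled = []
    if cfg.get('rev_ChatGPT', {}).get('enable'):
        enabled.append('rev_chatgpt')
    keys = cfg.get('openai', {}).get('key') or []
    if len(keys) > 0 and keys[0] is not None:
        enabled.append('openai_official')
    return enabled

cfg = {'openai': {'key': ['sk-xxx']}, 'rev_ChatGPT': {'enable': False}}
print(provider_chooser(cfg))  # ['openai_official']
```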
def init():
'''
Initialize the bot.
'''
def init(cfg):
global llm_instance, llm_command_instance
global baidu_judge, chosen_provider
global frequency_count, frequency_time
global _global_object
global logger
# Migrate the legacy config
gu.try_migrate_config(cfg)
# Load the new config
init_astrbot_config_items()
cfg = cc.get_all()
_event_loop = asyncio.new_event_loop()
@@ -96,10 +81,10 @@ def init(cfg):
# Initialize global_object
_global_object = GlobalObject()
_global_object.version = version
_global_object.version = VERSION
_global_object.base_config = cfg
_global_object.logger = logger
logger.log("AstrBot v"+version, gu.LEVEL_INFO)
logger.info("AstrBot v" + VERSION)
if 'reply_prefix' in cfg:
# Compatibility with the legacy config
@@ -110,31 +95,36 @@ def init(cfg):
else:
_global_object.reply_prefix = cfg['reply_prefix']
# LLM providers
logger.log("正在载入语言模型...", gu.LEVEL_INFO)
prov = privider_chooser(cfg)
if REV_CHATGPT in prov:
logger.log("初始化:逆向 ChatGPT", gu.LEVEL_INFO)
if cfg['rev_ChatGPT']['enable']:
if 'account' in cfg['rev_ChatGPT']:
from model.provider.rev_chatgpt import ProviderRevChatGPT
from model.command.rev_chatgpt import CommandRevChatGPT
llm_instance[REV_CHATGPT] = ProviderRevChatGPT(cfg['rev_ChatGPT'], base_url=cc.get("CHATGPT_BASE_URL", None))
llm_command_instance[REV_CHATGPT] = CommandRevChatGPT(llm_instance[REV_CHATGPT], _global_object)
chosen_provider = REV_CHATGPT
_global_object.llms.append(RegisteredLLM(llm_name=REV_CHATGPT, llm_instance=llm_instance[REV_CHATGPT], origin="internal"))
default_personality_str = cc.get("default_personality_str", "")
if default_personality_str == "":
_global_object.default_personality = None
else:
input("请退出本程序, 然后在配置文件中填写rev_ChatGPT相关配置")
_global_object.default_personality = {
"name": "default",
"prompt": default_personality_str,
}
# LLM providers
logger.info("正在载入语言模型...")
prov = privider_chooser(cfg)
if OPENAI_OFFICIAL in prov:
logger.log("初始化OpenAI官方", gu.LEVEL_INFO)
logger.info("初始化OpenAI官方")
if cfg['openai']['key'] is not None and cfg['openai']['key'] != [None]:
from model.provider.openai_official import ProviderOpenAIOfficial
from model.command.openai_official import CommandOpenAIOfficial
llm_instance[OPENAI_OFFICIAL] = ProviderOpenAIOfficial(cfg['openai'])
llm_command_instance[OPENAI_OFFICIAL] = CommandOpenAIOfficial(llm_instance[OPENAI_OFFICIAL], _global_object)
_global_object.llms.append(RegisteredLLM(llm_name=OPENAI_OFFICIAL, llm_instance=llm_instance[OPENAI_OFFICIAL], origin="internal"))
llm_instance[OPENAI_OFFICIAL] = ProviderOpenAIOfficial(
cfg['openai'])
llm_command_instance[OPENAI_OFFICIAL] = CommandOpenAIOfficial(
llm_instance[OPENAI_OFFICIAL], _global_object)
_global_object.llms.append(RegisteredLLM(
llm_name=OPENAI_OFFICIAL, llm_instance=llm_instance[OPENAI_OFFICIAL], origin="internal"))
chosen_provider = OPENAI_OFFICIAL
instance = llm_instance[OPENAI_OFFICIAL]
assert isinstance(instance, ProviderOpenAIOfficial)
instance.DEFAULT_PERSONALITY = _global_object.default_personality
instance.curr_personality = instance.DEFAULT_PERSONALITY
# Check the preferred-provider setting
p = cc.get("chosen_provider", None)
if p is not None and p in llm_instance:
@@ -144,11 +134,12 @@ def init(cfg):
if 'baidu_aip' in cfg and 'enable' in cfg['baidu_aip'] and cfg['baidu_aip']['enable']:
try:
baidu_judge = BaiduJudge(cfg['baidu_aip'])
logger.log("百度内容审核初始化成功", gu.LEVEL_INFO)
logger.info("百度内容审核初始化成功")
except BaseException as e:
logger.log("百度内容审核初始化失败", gu.LEVEL_ERROR)
logger.info("百度内容审核初始化失败")
threading.Thread(target=upload, args=(_global_object, ), daemon=True).start()
threading.Thread(target=upload, args=(
_global_object, ), daemon=True).start()
# Read the message-frequency config
if 'limit' in cfg:
@@ -163,7 +154,7 @@ def init(cfg):
else:
_global_object.unique_session = False
except BaseException as e:
logger.log("独立会话配置错误: "+str(e), gu.LEVEL_ERROR)
logger.info("独立会话配置错误: "+str(e))
nick_qq = cc.get("nick_qq", None)
if nick_qq == None:
@@ -178,42 +169,37 @@ def init(cfg):
global llm_wake_prefix
llm_wake_prefix = cc.get("llm_wake_prefix", "")
logger.log("正在载入插件...", gu.LEVEL_INFO)
logger.info("正在载入插件...")
# Load plugins
_command = Command(None, _global_object)
ok, err = putil.plugin_reload(_global_object.cached_plugins)
ok, err = putil.plugin_reload(_global_object)
if ok:
logger.log(f"成功载入 {len(_global_object.cached_plugins)} 个插件", gu.LEVEL_INFO)
logger.info(
f"成功载入 {len(_global_object.cached_plugins)} 个插件")
else:
logger.log(err, gu.LEVEL_ERROR)
logger.info(err)
if chosen_provider is None:
llm_command_instance[NONE_LLM] = _command
chosen_provider = NONE_LLM
logger.log("正在载入机器人消息平台", gu.LEVEL_INFO)
# logger.log("提示:需要添加管理员 ID 才能使用 update/plugin 等指令),可在可视化面板添加。(如已添加可忽略)", gu.LEVEL_WARNING)
logger.info("正在载入机器人消息平台")
# logger.info("提示:需要添加管理员 ID 才能使用 update/plugin 等指令),可在可视化面板添加。(如已添加可忽略)")
platform_str = ""
# GOCQ
if 'gocqbot' in cfg and cfg['gocqbot']['enable']:
logger.log("启用 QQ_GOCQ 机器人消息平台", gu.LEVEL_INFO)
threading.Thread(target=run_gocq_bot, args=(cfg, _global_object), daemon=True).start()
logger.info("启用 QQ_GOCQ 机器人消息平台")
threading.Thread(target=run_gocq_bot, args=(
cfg, _global_object), daemon=True).start()
platform_str += "QQ_GOCQ,"
# QQ guild (channels)
if 'qqbot' in cfg and cfg['qqbot']['enable'] and cfg['qqbot']['appid'] != None:
logger.log("启用 QQ_OFFICIAL 机器人消息平台", gu.LEVEL_INFO)
threading.Thread(target=run_qqchan_bot, args=(cfg, _global_object), daemon=True).start()
logger.info("启用 QQ_OFFICIAL 机器人消息平台")
threading.Thread(target=run_qqchan_bot, args=(
cfg, _global_object), daemon=True).start()
platform_str += "QQ_OFFICIAL,"
default_personality_str = cc.get("default_personality_str", "")
if default_personality_str == "":
_global_object.default_personality = None
else:
_global_object.default_personality = {
"name": "default",
"prompt": default_personality_str,
}
# Initialize the dashboard
_global_object.dashboard_data = DashBoardData(
stats={},
@@ -222,36 +208,47 @@ def init(cfg):
plugins=_global_object.cached_plugins,
)
dashboard_helper = DashBoardHelper(_global_object, config=cc.get_all())
dashboard_thread = threading.Thread(target=dashboard_helper.run, daemon=True)
dashboard_thread = threading.Thread(
target=dashboard_helper.run, daemon=True)
dashboard_thread.start()
# Run the monitor
threading.Thread(target=run_monitor, args=(_global_object,), daemon=False).start()
threading.Thread(target=run_monitor, args=(
_global_object,), daemon=True).start()
logger.log("如果有任何问题, 请在 https://github.com/Soulter/AstrBot 上提交 issue 或加群 322154837。", gu.LEVEL_INFO)
logger.log("请给 https://github.com/Soulter/AstrBot 点个 star。", gu.LEVEL_INFO)
logger.info(
"如果有任何问题, 请在 https://github.com/Soulter/AstrBot 上提交 issue 或加群 322154837。")
logger.info("请给 https://github.com/Soulter/AstrBot 点个 star。")
if platform_str == '':
platform_str = "(未启动任何平台,请前往面板添加)"
logger.log(f"🎉 项目启动完成")
logger.info(f"🎉 项目启动完成")
dashboard_thread.join()
'''
Run the QQ_OFFICIAL bot platform.
'''
def run_qqchan_bot(cfg: dict, global_object: GlobalObject):
try:
from model.platform.qq_official import QQOfficial
qqchannel_bot = QQOfficial(cfg=cfg, message_handler=oper_msg, global_object=global_object)
global_object.platforms.append(RegisteredPlatform(platform_name="qqchan", platform_instance=qqchannel_bot, origin="internal"))
qqchannel_bot = QQOfficial(
cfg=cfg, message_handler=oper_msg, global_object=global_object)
global_object.platforms.append(RegisteredPlatform(
platform_name="qqchan", platform_instance=qqchannel_bot, origin="internal"))
qqchannel_bot.run()
except BaseException as e:
logger.log("启动QQ频道机器人时出现错误, 原因如下: " + str(e), gu.LEVEL_CRITICAL, tag="QQ频道")
logger.log(r"如果您是初次启动请前往可视化面板填写配置。详情请看https://astrbot.soulter.top/center/。" + str(e), gu.LEVEL_CRITICAL)
logger.error("启动 QQ 频道机器人时出现错误, 原因如下: " + str(e))
logger.error(r"如果您是初次启动请前往可视化面板填写配置。详情请看https://astrbot.soulter.top/center/。")
'''
Run the QQ_GOCQ bot platform.
'''
def run_gocq_bot(cfg: dict, _global_object: GlobalObject):
from model.platform.qq_gocq import QQGOCQ
@@ -259,26 +256,33 @@ def run_gocq_bot(cfg: dict, _global_object: GlobalObject):
host = cc.get("gocq_host", "127.0.0.1")
port = cc.get("gocq_websocket_port", 6700)
http_port = cc.get("gocq_http_port", 5700)
logger.log(f"正在检查连接...host: {host}, ws port: {port}, http port: {http_port}", tag="QQ")
logger.info(
f"正在检查连接...host: {host}, ws port: {port}, http port: {http_port}")
while True:
if not gu.port_checker(port=port, host=host) or not gu.port_checker(port=http_port, host=host):
if not noticed:
noticed = True
logger.log(f"连接到{host}:{port}(或{http_port})失败。程序会每隔 5s 自动重试。", gu.LEVEL_CRITICAL, tag="QQ")
logger.warning(
f"连接到{host}:{port}(或{http_port})失败。程序会每隔 5s 自动重试。")
time.sleep(5)
else:
logger.log("检查完毕,未发现问题。", tag="QQ")
logger.info("已连接到 gocq。")
break
try:
qq_gocq = QQGOCQ(cfg=cfg, message_handler=oper_msg, global_object=_global_object)
_global_object.platforms.append(RegisteredPlatform(platform_name="gocq", platform_instance=qq_gocq, origin="internal"))
qq_gocq = QQGOCQ(cfg=cfg, message_handler=oper_msg,
global_object=_global_object)
_global_object.platforms.append(RegisteredPlatform(
platform_name="gocq", platform_instance=qq_gocq, origin="internal"))
qq_gocq.run()
except BaseException as e:
input("启动QQ机器人出现错误"+str(e))
'''
Check a user's message frequency.
'''
def check_frequency(id) -> bool:
ts = int(time.time())
if id in user_frequency:
@@ -297,6 +301,7 @@ def check_frequency(id) -> bool:
user_frequency[id] = t
return True
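The hunk above only shows the tail of `check_frequency`; the idea is a per-user sliding window that allows at most `frequency_count` messages per `frequency_time` seconds. A hedged re-implementation of that idea (not the exact upstream code):

```python
import time

# Allow at most `limit` messages per `window` seconds, per user id.
_seen = {}

def check_frequency(uid, limit=10, window=60):
    now = int(time.time())
    bucket = _seen.setdefault(uid, [])
    bucket[:] = [t for t in bucket if now - t < window]  # drop expired timestamps
    if len(bucket) >= limit:
        return False  # over the limit inside the window
    bucket.append(now)
    return True

print(all(check_frequency("u1", limit=3) for _ in range(3)))  # True
print(check_frequency("u1", limit=3))  # False
```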
async def record_message(platform: str, session_id: str):
# TODO: this is resource-hungry, but sqlite3 does not support multi-threading, so it stays this way for now.
curr_ts = int(time.time())
@@ -304,7 +309,7 @@ async def record_message(platform: str, session_id: str):
db_inst.increment_stat_session(platform, session_id, 1)
db_inst.increment_stat_message(curr_ts, 1)
db_inst.increment_stat_platform(curr_ts, platform, 1)
_global_object.cnt_total += 1
async def oper_msg(message: AstrBotMessage,
session_id: str,
@@ -332,7 +337,6 @@ async def oper_msg(message: AstrBotMessage,
reg_platform = p
break
if not reg_platform:
_global_object.logger.log(f"未找到平台 {platform} 的实例。", gu.LEVEL_ERROR)
raise Exception(f"未找到平台 {platform} 的实例。")
# Statistics, e.g. per-channel message volume
@@ -350,12 +354,10 @@ async def oper_msg(message: AstrBotMessage,
# Check whether this is a request to switch LLMs
temp_switch = ""
if message_str.startswith('/gpt') or message_str.startswith('/revgpt'):
if message_str.startswith('/gpt'):
target = chosen_provider
if message_str.startswith('/gpt'):
target = OPENAI_OFFICIAL
elif message_str.startswith('/revgpt'):
target = REV_CHATGPT
l = message_str.split(' ')
if len(l) > 1 and l[1] != "":
# Temporary-switch mode: remember the current LLM and switch back after replying
@@ -370,8 +372,13 @@ async def oper_msg(message: AstrBotMessage,
llm_result_str = ""
# check commands and plugins
message_str_no_wake_prefix = message_str
for wake_prefix in _global_object.nick: # nick: tuple
if message_str.startswith(wake_prefix):
message_str_no_wake_prefix = message_str.removeprefix(wake_prefix)
break
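The loop above strips at most one nickname prefix before command matching. `str.removeprefix` (Python 3.9+, which matches the project's version gate) does the trimming without slicing arithmetic; a hedged standalone sketch, where the nicknames are illustrative and the `lstrip()` for the separating space is my addition:

```python
nicks = ("astrbot", "小助手")  # illustrative nicknames
message_str = "astrbot 你好"
message_str_no_wake_prefix = message_str
for wake_prefix in nicks:
    if message_str.startswith(wake_prefix):
        # Strip the matched nickname and any separating whitespace.
        message_str_no_wake_prefix = message_str.removeprefix(wake_prefix).lstrip()
        break
print(message_str_no_wake_prefix)  # 你好
```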
hit, command_result = await llm_command_instance[chosen_provider].check_command(
message_str,
message_str_no_wake_prefix,
session_id,
role,
reg_platform,
@@ -390,7 +397,7 @@ async def oper_msg(message: AstrBotMessage,
if not check:
return MessageResult(f"你的提问得到的回复未通过【百度AI内容审核】服务, 不予回复。\n\n{msg}")
if chosen_provider == NONE_LLM:
logger.log("一条消息由于 Bot 未启动任何语言模型并且未触发指令而将被忽略。", gu.LEVEL_WARNING)
logger.info("一条消息由于 Bot 未启动任何语言模型并且未触发指令而将被忽略。")
return
try:
if llm_wake_prefix != "" and not message_str.startswith(llm_wake_prefix):
@@ -412,17 +419,17 @@ async def oper_msg(message: AstrBotMessage,
web_sch_flag = True
else:
message_str += " " + cc.get("llm_env_prompt", "")
if chosen_provider == REV_CHATGPT or chosen_provider == OPENAI_OFFICIAL:
if chosen_provider == OPENAI_OFFICIAL:
if _global_object.web_search or web_sch_flag:
official_fc = chosen_provider == OPENAI_OFFICIAL
llm_result_str = await gplugin.web_search(message_str, llm_instance[chosen_provider], session_id, official_fc)
llm_result_str = await web_searcher.web_search(message_str, llm_instance[chosen_provider], session_id, official_fc)
else:
llm_result_str = await llm_instance[chosen_provider].text_chat(message_str, session_id, image_url, default_personality = _global_object.default_personality)
llm_result_str = await llm_instance[chosen_provider].text_chat(message_str, session_id, image_url)
llm_result_str = _global_object.reply_prefix + llm_result_str
except BaseException as e:
logger.log(f"调用异常:{traceback.format_exc()}", gu.LEVEL_ERROR)
return MessageResult(f"调用语言模型例程时出现异常。原因: {str(e)}")
logger.error(f"调用异常:{traceback.format_exc()}")
return MessageResult(f"调用异常。详细原因:{str(e)}")
# Switch back to the original LLM
if temp_switch != "":
@@ -435,24 +442,14 @@ async def oper_msg(message: AstrBotMessage,
return
command = command_result[2]
if command == "update latest r":
def update_restart():
py = sys.executable
os.execl(py, py, *sys.argv)
return MessageResult(command_result[1] + "\n\n即将自动重启。", callback=update_restart)
if not command_result[0]:
return MessageResult(f"指令调用错误: \n{str(command_result[1])}")
# Draw command
if isinstance(command_result[1], list) and len(command_result) == 3 and command == 'draw':
for i in command_result[1]:
if command == 'draw':
# Save locally
async with aiohttp.ClientSession() as session:
async with session.get(i) as resp:
if resp.status == 200:
image = PILImage.open(io.BytesIO(await resp.read()))
return MessageResult([Image.fromFileSystem(gu.save_temp_img(image))])
path = await gu.download_image_by_url(command_result[1])
return MessageResult([Image.fromFileSystem(path)])
# Other commands
else:
try:
@@ -474,4 +471,4 @@ async def oper_msg(message: AstrBotMessage,
try:
return MessageResult(llm_result_str)
except BaseException as e:
logger.log("回复消息错误: \n"+str(e), gu.LEVEL_ERROR)
logger.info("回复消息错误: \n"+str(e))


@@ -1,168 +0,0 @@
from model.provider.provider import Provider as LLMProvider
from model.platform._platfrom import Platform
from nakuru import (
GroupMessage,
FriendMessage,
GuildMessage,
)
from nakuru.entities.components import BaseMessageComponent
from typing import Union, List, ClassVar
from types import ModuleType
from enum import Enum
from dataclasses import dataclass
class MessageType(Enum):
GROUP_MESSAGE = 'GroupMessage' # group chat messages
FRIEND_MESSAGE = 'FriendMessage' # direct/friend (one-on-one) messages
GUILD_MESSAGE = 'GuildMessage' # guild (channel) messages
@dataclass
class MessageMember():
user_id: str # sender id
nickname: str = None
class AstrBotMessage():
'''
AstrBot's message object.
'''
tag: str # message source tag
type: MessageType # message type
self_id: str # the bot's own id
session_id: str # session id
message_id: str # message id
sender: MessageMember # sender
message: List[BaseMessageComponent] # message chain, in Nakuru's format
message_str: str # the plain-text form of the message
raw_message: object
timestamp: int # message timestamp
def __str__(self) -> str:
return str(self.__dict__)
class PluginType(Enum):
PLATFORM = 'platfrom' # platform plugins (the typo is in the source)
LLM = 'llm' # large language model plugins
COMMON = 'common' # other plugins
@dataclass
class PluginMetadata:
'''
Plugin metadata.
'''
# required
plugin_name: str
plugin_type: PluginType
author: str # plugin author
desc: str # short description
version: str # plugin version
# optional
repo: str = None # plugin repository URL
def __str__(self) -> str:
return f"PluginMetadata({self.plugin_name}, {self.plugin_type}, {self.desc}, {self.version}, {self.repo})"
@dataclass
class RegisteredPlugin:
'''
A plugin registered with AstrBot.
'''
metadata: PluginMetadata
plugin_instance: object
module_path: str
module: ModuleType
root_dir_name: str
def __str__(self) -> str:
return f"RegisteredPlugin({self.metadata}, {self.module_path}, {self.root_dir_name})"
RegisteredPlugins = List[RegisteredPlugin]
@dataclass
class RegisteredPlatform:
'''
A platform registered with AstrBot. A platform should implement the Platform interface.
'''
platform_name: str
platform_instance: Platform
origin: str = None # where it was registered from
@dataclass
class RegisteredLLM:
'''
An LLM integration registered with AstrBot. An LLM should implement the LLMProvider interface.
'''
llm_name: str
llm_instance: LLMProvider
origin: str = None # where it was registered from
class GlobalObject:
'''
Shared state passed between modules (e.g. core and command).
'''
version: str # bot version
nick: str # user-defined bot nicknames
base_config: dict # configuration loaded from config.json
cached_plugins: List[RegisteredPlugin] # loaded plugins
platforms: List[RegisteredPlatform]
llms: List[RegisteredLLM]
web_search: bool # whether web search is enabled
reply_prefix: str # prefix prepended to replies
unique_session: bool # whether per-user sessions are enabled
cnt_total: int # total message count
default_personality: dict
dashboard_data = None
logger: None
def __init__(self):
self.nick = None # gocq nickname(s)
self.base_config = None # config.yaml contents
self.cached_plugins = [] # cached plugins
self.web_search = False # whether web search is enabled
self.reply_prefix = None
self.unique_session = False
self.cnt_total = 0
self.platforms = []
self.llms = []
self.default_personality = None
self.dashboard_data = None
self.stat = {}
class AstrMessageEvent():
'''
A message event.
'''
context: GlobalObject # shared state
message_str: str # plain message text
message_obj: AstrBotMessage # the message object
platform: RegisteredPlatform # source platform
role: str # basic role: `admin` or `member`
session_id: int # session id
def __init__(self,
message_str: str,
message_obj: AstrBotMessage,
platform: RegisteredPlatform,
role: str,
context: GlobalObject,
session_id: str = None):
self.context = context
self.message_str = message_str
self.message_obj = message_obj
self.platform = platform
self.role = role
self.session_id = session_id
class CommandResult():
'''
Wraps the multiple values a Command returns.
'''
def __init__(self, hit: bool, success: bool, message_chain: list, command_name: str = "unknown_command") -> None:
self.hit = hit
self.success = success
self.message_chain = message_chain
self.command_name = command_name
def _result_tuple(self):
return (self.success, self.message_chain, self.command_name)
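`CommandResult` exists so plugins can return one object instead of the legacy tuple; `_result_tuple()` converts back for the older code paths. A compact usage sketch, with the class reproduced from above:

```python
# CommandResult as defined in the types module, plus a usage example.
class CommandResult:
    def __init__(self, hit, success, message_chain, command_name="unknown_command"):
        self.hit = hit                      # whether a command matched
        self.success = success              # whether it executed successfully
        self.message_chain = message_chain  # components to send back
        self.command_name = command_name
    def _result_tuple(self):
        return (self.success, self.message_chain, self.command_name)

res = CommandResult(hit=True, success=True, message_chain=["ok"], command_name="helloworld")
print(res._result_tuple())  # (True, ['ok'], 'helloworld')
```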


@@ -1,26 +0,0 @@
{
"llms_claude_cookie": {
"config_type": "item",
"name": "llms_claude_cookie",
"description": "Claude 的 Cookie",
"path": "llms_claude_cookie",
"value": "hihi",
"val_type": "str"
},
"llms_huggingchat_email": {
"config_type": "item",
"name": "llms_huggingchat_email",
"description": "HuggingChat 的邮箱",
"path": "llms_huggingchat_email",
"value": "",
"val_type": "str"
},
"llms_huggingchat_psw": {
"config_type": "item",
"name": "llms_huggingchat_psw",
"description": "HuggingChat 的密码",
"path": "llms_huggingchat_psw",
"value": "",
"val_type": "str"
}
}

main.py

@@ -1,96 +1,107 @@
import os, sys
from pip._internal import main as pipmain
import os
import sys
import warnings
import traceback
import threading
from logging import Formatter, Logger
from util.cmd_config import CmdConfig, try_migrate_config
warnings.filterwarnings("ignore")
abs_path = os.path.dirname(os.path.realpath(sys.argv[0])) + '/'
logger: Logger = None
logo_tmpl = """
___ _______.___________..______ .______ ______ .___________.
/ \ / | || _ \ | _ \ / __ \ | |
/ ^ \ | (----`---| |----`| |_) | | |_) | | | | | `---| |----`
/ /_\ \ \ \ | | | / | _ < | | | | | |
/ _____ \ .----) | | | | |\ \----.| |_) | | `--' | | |
/__/ \__\ |_______/ |__| | _| `._____||______/ \______/ |__|
"""
def make_necessary_dirs():
'''
Create the necessary directories.
'''
os.makedirs("data/config", exist_ok=True)
os.makedirs("temp", exist_ok=True)
def update_dept():
'''
Update dependency libraries.
'''
# Path to the Python executable
py = sys.executable
requirements_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), "requirements.txt")
print(requirements_path)
# Install/update dependencies via pip
mirror = "https://mirrors.aliyun.com/pypi/simple/"
os.system(f"{py} -m pip install -r {requirements_path} -i {mirror}")
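`update_dept` builds a `pip install -r … -i <mirror>` command string and runs it with `os.system`. The same command can be assembled as an argument list for `subprocess`, which avoids shell quoting issues; a sketch (the mirror URL is copied from the source, the helper name is mine):

```python
import sys
import subprocess  # would run the command; only construction is shown here

# Build the pip invocation update_dept() shells out to.
def pip_install_cmd(requirements_path, mirror=None):
    cmd = [sys.executable, "-m", "pip", "install", "-r", requirements_path]
    if mirror:
        cmd += ["-i", mirror]
    return cmd

cmd = pip_install_cmd("requirements.txt", "https://mirrors.aliyun.com/pypi/simple/")
print(cmd[3:])  # ['install', '-r', 'requirements.txt', '-i', 'https://mirrors.aliyun.com/pypi/simple/']
# To actually run it: subprocess.call(cmd)
```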
def main():
# Load config.yaml and verify the environment
try:
import cores.qqbot.core as qqBot
import yaml
import util.general_utils as gu
ymlfile = open(abs_path+"configs/config.yaml", 'r', encoding='utf-8')
cfg = yaml.safe_load(ymlfile)
import botpy, logging
import astrbot.core as bot_core
# delete qqbotpy's logger
for handler in logging.root.handlers[:]:
logging.root.removeHandler(handler)
except ImportError as import_error:
traceback.print_exc()
print(import_error)
input("第三方库未完全安装完毕,请退出程序重试。")
logger.error(import_error)
logger.error("检测到一些依赖库没有安装。由于兼容性问题,AstrBot 此版本将不会自动为您安装依赖库。请您先自行安装,然后重试。")
logger.info("如何安装?如果:")
logger.info("- Windows 启动器部署,且使用启动器下载了 Python 的:在 launcher.exe 所在目录下的地址框输入 powershell,然后执行 .\python\python.exe -m pip install -r .\AstrBot\requirements.txt")
logger.info("- Windows 启动器部署,且使用自己之前下载的 Python 的:在 launcher.exe 所在目录下的地址框输入 powershell,然后执行 python -m pip install -r .\AstrBot\requirements.txt")
logger.info("- 自行 clone 源码部署的:python -m pip install -r requirements.txt")
logger.info("- 如果还不会,加群 322154837。")
input("按任意键退出。")
exit()
except FileNotFoundError as file_not_found:
print(file_not_found)
logger.error(file_not_found)
input("配置文件不存在,请检查是否已经下载配置文件。")
exit()
except BaseException as e:
raise e
# Set proxies
if 'http_proxy' in cfg and cfg['http_proxy'] != '':
os.environ['HTTP_PROXY'] = cfg['http_proxy']
if 'https_proxy' in cfg and cfg['https_proxy'] != '':
os.environ['HTTPS_PROXY'] = cfg['https_proxy']
os.environ['NO_PROXY'] = 'https://api.sgroup.qq.com'
# Check for and create the temp folder
if not os.path.exists(abs_path + "temp"):
os.mkdir(abs_path+"temp")
if not os.path.exists(abs_path + "data"):
os.mkdir(abs_path+"data")
if not os.path.exists(abs_path + "data/config"):
os.mkdir(abs_path+"data/config")
# Start the main program: cores/qqbot/core.py
qqBot.init(cfg)
def check_env(ch_mirror=False):
if not (sys.version_info.major == 3 and sys.version_info.minor >= 9):
print("请使用Python3.9+运行本项目")
input("按任意键退出...")
logger.error(traceback.format_exc())
input("未知错误。")
exit()
if os.path.exists('requirements.txt'):
pth = 'requirements.txt'
else:
pth = 'QQChannelChatGPT'+ os.sep +'requirements.txt'
print("正在检查或下载第三方库,请耐心等待...")
try:
if ch_mirror:
print("使用阿里云镜像")
pipmain(['install', '-r', pth, '-i', 'https://mirrors.aliyun.com/pypi/simple/'])
else:
pipmain(['install', '-r', pth])
except BaseException as e:
print(e)
while True:
res = input("安装失败。\n如报错 ValueError: check_hostname requires server_hostname,请尝试先关闭代理后重试。\n1. 输入 y 回车重试\n2. 输入 c 回车使用国内镜像源下载\n3. 输入其他按键回车继续往下执行。")
if res == "y":
try:
pipmain(['install', '-r', pth])
break
except BaseException as e:
print(e)
continue
elif res == "c":
try:
pipmain(['install', '-r', pth, '-i', 'https://mirrors.aliyun.com/pypi/simple/'])
break
except BaseException as e:
print(e)
continue
else:
break
print("第三方库检查完毕。")
# Start the main program: astrbot/core.py
bot_core.init()
def check_env():
if not (sys.version_info.major == 3 and sys.version_info.minor >= 9):
logger.error("请使用 Python3.9+ 运行本项目。按任意键退出。")
input("")
exit()
if __name__ == "__main__":
args = sys.argv
update_dept()
make_necessary_dirs()
try_migrate_config()
cc = CmdConfig()
http_proxy = cc.get("http_proxy")
https_proxy = cc.get("https_proxy")
if http_proxy:
os.environ['HTTP_PROXY'] = http_proxy
if https_proxy:
os.environ['HTTPS_PROXY'] = https_proxy
os.environ['NO_PROXY'] = 'https://api.sgroup.qq.com'
from SparkleLogging.utils.core import LogManager
logger = LogManager.GetLogger(
log_name='astrbot-core',
out_to_console=True,
custom_formatter=Formatter('[%(asctime)s| %(name)s - %(levelname)s|%(filename)s:%(lineno)d]: %(message)s', datefmt="%H:%M:%S")
)
logger.info(logo_tmpl)
logger.info(f"使用代理: {http_proxy}, {https_proxy}")
if '-cn' in args:
check_env(True)
else:
check_env()
t = threading.Thread(target=main, daemon=False)
t = threading.Thread(target=main, daemon=True)
t.start()
try:
t.join()
except KeyboardInterrupt as e:
logger.info("退出 AstrBot。")
exit()


@@ -11,29 +11,31 @@ from nakuru.entities.components import (
Image
)
from util import general_utils as gu
from util.image_render.helper import text_to_image_base
from model.provider.provider import Provider
from util.cmd_config import CmdConfig as cc
from util.general_utils import Logger
from cores.qqbot.types import (
GlobalObject,
AstrMessageEvent,
PluginType,
CommandResult,
RegisteredPlugin,
RegisteredPlatform
)
from type.message import *
from type.types import GlobalObject
from type.command import *
from type.plugin import *
from type.register import *
from typing import List, Tuple
from typing import List
from SparkleLogging.utils.core import LogManager
from logging import Logger
logger: Logger = LogManager.GetLogger(log_name='astrbot-core')
PLATFORM_QQCHAN = 'qqchan'
PLATFORM_GOCQ = 'gocq'
# Base class for command handling; generic (LLM-agnostic) commands are implemented here
class Command:
def __init__(self, provider: Provider, global_object: GlobalObject = None):
self.provider = provider
self.global_object = global_object
self.logger: Logger = global_object.logger
async def check_command(self,
message,
@@ -63,6 +65,8 @@ class Command:
result = await plugin.plugin_instance.run(ame)
else:
result = await asyncio.to_thread(plugin.plugin_instance.run, ame)
if not result:
continue
if isinstance(result, CommandResult):
hit = result.hit
res = result._result_tuple()
@@ -72,6 +76,8 @@ class Command:
else:
raise TypeError("插件返回值格式错误。")
if hit:
plugin.trig()
logger.debug("hit plugin: " + plugin.metadata.plugin_name)
return True, res
except TypeError as e:
# 参数不匹配,尝试使用旧的参数方案
@@ -83,19 +89,25 @@ class Command:
if hit:
return True, res
except BaseException as e:
self.logger.log(f"{plugin.metadata.plugin_name} 插件异常,原因: {str(e)}\n如果你没有相关装插件的想法, 请直接忽略此报错, 不影响其他功能的运行。", level=gu.LEVEL_WARNING)
logger.error(
f"{plugin.metadata.plugin_name} 插件异常,原因: {str(e)}\n如果你没有相关装插件的想法, 请直接忽略此报错, 不影响其他功能的运行。")
except BaseException as e:
self.logger.log(f"{plugin.metadata.plugin_name} 插件异常,原因: {str(e)}\n如果你没有相关装插件的想法, 请直接忽略此报错, 不影响其他功能的运行。", level=gu.LEVEL_WARNING)
logger.error(
f"{plugin.metadata.plugin_name} 插件异常,原因: {str(e)}\n如果你没有相关装插件的想法, 请直接忽略此报错, 不影响其他功能的运行。")
if self.command_start_with(message, "nick"):
return True, self.set_nick(message, platform, role)
if self.command_start_with(message, "plugin"):
return True, self.plugin_oper(message, role, cached_plugins, platform)
return True, await self.plugin_oper(message, role, self.global_object, platform)
if self.command_start_with(message, "myid") or self.command_start_with(message, "!myid"):
return True, self.get_my_id(message_obj, platform)
if self.command_start_with(message, "web"): # 网页搜索
return True, self.web_search(message)
if not self.provider and self.command_start_with(message, "help"):
if self.command_start_with(message, "update"):
return True, self.update(message, role)
if message == "t2i":
return True, self.t2i_toggle(message, role)
if not self.provider and message == "help":
return True, await self.help()
return False, None
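The dispatch above routes a message by testing command prefixes in order. A standalone sketch of such a prefix helper (a hypothetical simplification; the project's actual `command_start_with` may differ) could look like:

```python
def command_start_with(message: str, *prefixes: str) -> bool:
    """Return True if the message begins with any given prefix,
    with or without a leading slash (e.g. "help" and "/help")."""
    message = message.strip()
    return any(
        p and (message.startswith(p) or message.startswith("/" + p))
        for p in prefixes
    )
```

Each branch in `check_command` then maps a matched prefix to its handler and returns `(True, result)`; the final `return False, None` signals that no built-in command was hit.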
@@ -111,36 +123,33 @@ class Command:
self.global_object.web_search = False
return True, "已关闭网页搜索", "web"
def t2i_toggle(self, message, role):
p = cc.get("qq_pic_mode", True)
if p:
cc.put("qq_pic_mode", False)
return True, "已关闭文本转图片模式。", "t2i"
cc.put("qq_pic_mode", True)
return True, "已开启文本转图片模式。", "t2i"
def get_my_id(self, message_obj, platform):
try:
user_id = str(message_obj.user_id)
user_id = str(message_obj.sender.user_id)
return True, f"你在此平台上的ID: {user_id}", "plugin"
except BaseException as e:
return False, f"{platform}上获取你的ID失败,原因: {str(e)}", "plugin"
def get_new_conf(self, message, role):
if role != "admin":
return False, f"你的身份组{role}没有权限使用此指令。", "newconf"
l = message.split(" ")
if len(l) <= 1:
obj = cc.get_all()
p = gu.create_text_image("【cmd_config.json】", json.dumps(obj, indent=4, ensure_ascii=False))
return True, [Image.fromFileSystem(p)], "newconf"
'''
插件指令
'''
def plugin_oper(self, message: str, role: str, cached_plugins: List[RegisteredPlugin], platform: str):
async def plugin_oper(self, message: str, role: str, ctx: GlobalObject, platform: str):
l = message.split(" ")
if len(l) < 2:
p = gu.create_text_image("插件指令面板", "安装插件: \nplugin i 插件Github地址\n卸载插件: \nplugin d 插件名 \n重载插件: \nplugin reload\n查看插件列表:\nplugin l\n更新插件: plugin u 插件名\n")
return True, [Image.fromFileSystem(p)], "plugin"
p = await text_to_image_base("# 插件指令面板 \n- 安装插件: `plugin i 插件Github地址`\n- 卸载插件: `plugin d 插件名`\n- 重载插件: `plugin reload`\n- 查看插件列表:`plugin l`\n - 更新插件: `plugin u 插件名`\n")
with open(p, 'rb') as f:
return True, [Image.fromBytes(f.read())], "plugin"
else:
if l[1] == "i":
if role != "admin":
return False, f"你的身份组{role}没有权限安装插件", "plugin"
try:
putil.install_plugin(l[2], cached_plugins)
putil.install_plugin(l[2], ctx)
return True, "插件拉取并载入成功~", "plugin"
except BaseException as e:
return False, f"拉取插件失败,原因: {str(e)}", "plugin"
@@ -148,35 +157,37 @@ class Command:
if role != "admin":
return False, f"你的身份组{role}没有权限删除插件", "plugin"
try:
putil.uninstall_plugin(l[2], cached_plugins)
putil.uninstall_plugin(l[2], ctx)
return True, "插件卸载成功~", "plugin"
except BaseException as e:
return False, f"卸载插件失败,原因: {str(e)}", "plugin"
elif l[1] == "u":
try:
putil.update_plugin(l[2], cached_plugins)
putil.update_plugin(l[2], ctx)
return True, "\n更新插件成功!!", "plugin"
except BaseException as e:
return False, f"更新插件失败,原因: {str(e)}\n建议: 使用 plugin i 指令进行覆盖安装(插件数据可能会丢失)", "plugin"
elif l[1] == "l":
try:
plugin_list_info = ""
for plugin in cached_plugins:
plugin_list_info += f"{plugin.metadata.plugin_name}: \n名称: {plugin.metadata.plugin_name}\n简介: {plugin.metadata.plugin_desc}\n版本: {plugin.metadata.version}\n作者: {plugin.metadata.author}\n"
p = gu.create_text_image("【已激活插件列表】", plugin_list_info + "\n使用plugin v 插件名 查看插件帮助\n")
return True, [Image.fromFileSystem(p)], "plugin"
for plugin in ctx.cached_plugins:
plugin_list_info += f"### {plugin.metadata.plugin_name} \n- 名称: {plugin.metadata.plugin_name}\n- 简介: {plugin.metadata.desc}\n- 版本: {plugin.metadata.version}\n- 作者: {plugin.metadata.author}\n"
p = await text_to_image_base(f"# 已激活插件\n{plugin_list_info}\n> 使用plugin v 插件名 查看插件帮助\n")
with open(p, 'rb') as f:
return True, [Image.fromBytes(f.read())], "plugin"
except BaseException as e:
return False, f"获取插件列表失败,原因: {str(e)}", "plugin"
elif l[1] == "v":
try:
info = None
for i in cached_plugins:
for i in ctx.cached_plugins:
if i.metadata.plugin_name == l[2]:
info = i.metadata
break
if info:
p = gu.create_text_image(f"【插件信息】", f"名称: {info['name']}\n{info['desc']}\n版本: {info['version']}\n作者: {info['author']}\n\n帮助:\n{info['help']}")
return True, [Image.fromFileSystem(p)], "plugin"
p = await text_to_image_base(f"# `{info.plugin_name}` 插件信息\n- 类型: {info.plugin_type}\n- 简介{info.desc}\n- 版本: {info.version}\n- 作者: {info.author}")
with open(p, 'rb') as f:
return True, [Image.fromBytes(f.read())], "plugin"
else:
return False, "未找到该插件", "plugin"
except BaseException as e:
@@ -185,10 +196,11 @@ class Command:
'''
nick: 存储机器人的昵称
'''
def set_nick(self, message: str, platform: str, role: str = "member"):
def set_nick(self, message: str, platform: RegisteredPlatform, role: str = "member"):
if role != "admin":
return True, "你无权使用该指令 :P", "nick"
if platform == PLATFORM_GOCQ:
if str(platform) == PLATFORM_GOCQ:
l = message.split(" ")
if len(l) == 1:
return True, "【设置机器人昵称】示例:\n支持多昵称\nnick 昵称1 昵称2 昵称3", "nick"
@@ -196,7 +208,7 @@ class Command:
cc.put("nick_qq", nick)
self.global_object.nick = tuple(nick)
return True, f"设置成功!现在你可以叫我这些昵称来提问我啦~", "nick"
elif platform == PLATFORM_QQCHAN:
elif str(platform) == PLATFORM_QQCHAN:
nick = message.split(" ")[2]
return False, "QQ频道平台不支持为机器人设置昵称。", "nick"
@@ -205,12 +217,9 @@ class Command:
"help": "帮助",
"keyword": "设置关键词/关键指令回复",
"update": "更新项目",
"nick": "设置机器人昵称",
"nick": "设置机器人唤醒词",
"plugin": "插件安装、卸载和重载",
"web on/off": "LLM 网页搜索能力",
"reset": "重置 LLM 对话",
"/gpt": "切换到 OpenAI 官方接口",
"/revgpt": "切换到网页版ChatGPT",
}
async def help_messager(self, commands: dict, platform: str, cached_plugins: List[RegisteredPlugin] = None):
@@ -220,24 +229,25 @@ class Command:
notice = (await resp.json())["notice"]
except BaseException as e:
notice = ""
msg = "# Help Center\n## 指令列表\n"
msg = "## 指令列表\n"
for key, value in commands.items():
msg += f"`{key}` - {value}\n"
msg += f"- `{key}`: {value}\n"
# plugins
if cached_plugins != None:
if cached_plugins:
plugin_list_info = ""
for plugin in cached_plugins:
plugin_list_info += f"`{plugin.metadata.plugin_name}` {plugin.metadata.desc}\n"
if plugin_list_info.strip() != "":
plugin_list_info += f"- `{plugin.metadata.plugin_name}`: {plugin.metadata.desc}\n"
if plugin_list_info.strip():
msg += "\n## 插件列表\n> 使用 plugin v 插件名 查看插件帮助\n"
msg += plugin_list_info
msg += notice
try:
p = gu.create_markdown_image(msg)
return [Image.fromFileSystem(p),]
p = await text_to_image_base(msg)
with open(p, 'rb') as f:
return [Image.fromBytes(f.read()),]
except BaseException as e:
self.logger.log(str(e))
logger.error(str(e))
return msg
def command_start_with(self, message: str, *args):
@@ -256,7 +266,7 @@ class Command:
if len(l) == 1:
try:
update_info = util.updator.check_update()
update_info += "\nTips:\n输入「update latest」更新到最新版本\n输入「update <版本号如v3.1.3>」切换到指定版本\n输入「update r」重启机器人\n"
update_info += "\n> Tips: 输入「update latest」更新到最新版本;输入「update <版本号如v3.1.3>」切换到指定版本;输入「update r」重启机器人\n"
return True, update_info, "update"
except BaseException as e:
return False, "检查更新失败: "+str(e), "update"
@@ -273,8 +283,10 @@ class Command:
else:
if l[1].lower().startswith('v'):
try:
release_data = util.updator.request_release_info(latest=False)
util.updator.update_project(release_data, latest=False, version=l[1])
release_data = util.updator.request_release_info(
latest=False)
util.updator.update_project(
release_data, latest=False, version=l[1])
return True, "更新成功!重启生效。可输入「update r」重启", "update"
except BaseException as e:
return False, "更新失败: "+str(e), "update"


@@ -1,13 +1,24 @@
from model.command.command import Command
from model.provider.openai_official import ProviderOpenAIOfficial
from cores.qqbot.personality import personalities
from cores.qqbot.types import GlobalObject
from model.provider.openai_official import ProviderOpenAIOfficial, MODELS
from util.personality import personalities
from type.types import GlobalObject
from type.command import CommandItem
from SparkleLogging.utils.core import LogManager
from logging import Logger
from openai._exceptions import NotFoundError
logger: Logger = LogManager.GetLogger(log_name='astrbot-core')
class CommandOpenAIOfficial(Command):
def __init__(self, provider: ProviderOpenAIOfficial, global_object: GlobalObject):
self.provider = provider
self.global_object = global_object
self.personality_str = ""
self.commands = [
CommandItem("reset", self.reset, "重置 LLM 会话。", "内置"),
CommandItem("his", self.his, "查看与 LLM 的历史记录。", "内置"),
CommandItem("status", self.status, "查看 GPT 配置信息和用量状态。", "内置"),
]
super().__init__(provider, global_object)
async def check_command(self,
@@ -27,6 +38,8 @@ class CommandOpenAIOfficial(Command):
message_obj
)
logger.debug(f"基础指令hit: {hit}, res: {res}")
# 这里是这个 LLM 的专属指令
if hit:
return True, res
@@ -34,12 +47,8 @@ class CommandOpenAIOfficial(Command):
return True, await self.reset(session_id, message)
elif self.command_start_with(message, "his", "历史"):
return True, self.his(message, session_id)
elif self.command_start_with(message, "token"):
return True, self.token(session_id)
elif self.command_start_with(message, "gpt"):
return True, self.gpt()
elif self.command_start_with(message, "status"):
return True, self.status()
return True, self.status(session_id)
elif self.command_start_with(message, "help", "帮助"):
return True, await self.help()
elif self.command_start_with(message, "unset"):
@@ -50,100 +59,109 @@ class CommandOpenAIOfficial(Command):
return True, self.update(message, role)
elif self.command_start_with(message, "", "draw"):
return True, await self.draw(message)
elif self.command_start_with(message, "key"):
return True, self.key(message)
elif self.command_start_with(message, "switch"):
return True, await self.switch(message)
elif self.command_start_with(message, "models"):
return True, await self.print_models()
elif self.command_start_with(message, "model"):
return True, await self.set_model(message)
return False, None
async def get_models(self):
try:
models = await self.provider.client.models.list()
except NotFoundError as e:
bu = str(self.provider.client.base_url)
self.provider.client.base_url = bu + "/v1"
models = await self.provider.client.models.list()
finally:
return filter(lambda x: x.id.startswith("gpt"), models.data)
async def print_models(self):
models = await self.get_models()
i = 1
ret = "OpenAI GPT 类可用模型:"
for model in models:
ret += f"\n{i}. {model.id}"
i += 1
ret += "\nTips: 使用 /model 模型名/编号,即可实时更换模型。如目标模型不存在于上表,请输入模型名。"
logger.debug(ret)
return True, ret, "models"
async def set_model(self, message: str):
l = message.split(" ")
if len(l) == 1:
return True, "请输入 /model 模型名/编号", "model"
model = str(l[1])
if model.isdigit():
models = await self.get_models()
models = list(models)
if int(model) <= len(models) and int(model) >= 1:
model = models[int(model)-1]
self.provider.set_model(model.id)
return True, f"模型已设置为 {model.id}", "model"
else:
self.provider.set_model(model)
return True, f"模型已设置为 {model} (自定义)", "model"
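The selection rule in `set_model` above accepts either a 1-based index into the listed models or a free-form model name. The rule in isolation (a hypothetical helper, not part of the diff; here an out-of-range index simply falls back to the raw input):

```python
def resolve_model(user_input: str, available: list) -> str:
    # A digit is treated as a 1-based index into the available models;
    # anything else passes through as a custom model name.
    if user_input.isdigit():
        idx = int(user_input)
        if 1 <= idx <= len(available):
            return available[idx - 1]
    return user_input
```

This mirrors the "放宽限制,支持输入自定义模型" (relaxed restriction, custom model names allowed) behavior described in the commit message.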
async def help(self):
commands = super().general_commands()
commands[''] = '画画'
commands['key'] = '添加OpenAI key'
commands['set'] = '人格设置面板'
commands['gpt'] = '查看gpt配置信息'
commands['status'] = '查看key使用状态'
commands['token'] = '查看本轮会话token'
return True, await super().help_messager(commands, self.platform, self.global_object.cached_plugins), "help"
commands[''] = '调用 OpenAI DallE 模型生成图片'
commands['/set'] = '人格设置面板'
commands['/status'] = '查看 Api Key 状态和配置信息'
commands['/token'] = '查看本轮会话 token'
commands['/reset'] = '重置当前与 LLM 的会话但保留人格system prompt'
commands['/reset p'] = '重置当前与 LLM 的会话,并清除人格。'
commands['/models'] = '获取当前可用的模型'
commands['/model'] = '更换模型'
return True, await super().help_messager(commands, self.platform, self.global_object.cached_plugins), "help"
async def reset(self, session_id: str, message: str = "reset"):
if self.provider is None:
return False, "未启用 OpenAI 官方 API", "reset"
l = message.split(" ")
if len(l) == 1:
await self.provider.forget(session_id)
await self.provider.forget(session_id, keep_system_prompt=True)
return True, "重置成功", "reset"
if len(l) == 2 and l[1] == "p":
self.provider.forget(session_id)
if self.personality_str != "":
self.set(self.personality_str, session_id) # 重新设置人格
return True, "重置成功", "reset"
await self.provider.forget(session_id)
def his(self, message: str, session_id: str):
if self.provider is None:
return False, "未启用 OpenAI 官方 API", "his"
#分页每页5条
msg = ''
size_per_page = 3
page = 1
if message[4:]:
page = int(message[4:])
# 检查是否有过历史记录
if session_id not in self.provider.session_dict:
msg = f"历史记录为空"
return True, msg, "his"
l = self.provider.session_dict[session_id]
max_page = len(l)//size_per_page + 1 if len(l)%size_per_page != 0 else len(l)//size_per_page
p = self.provider.get_prompts_by_cache_list(self.provider.session_dict[session_id], divide=True, paging=True, size=size_per_page, page=page)
return True, f"历史记录如下:\n{p}\n第{page}页 | 共{max_page}页\n*输入/his 2跳转到第2页", "his"
l = message.split(" ")
if len(l) == 2:
try:
page = int(l[1])
except BaseException as e:
return True, "页码不合法", "his"
contexts, total_num = self.provider.dump_contexts_page(session_id, size_per_page, page=page)
t_pages = total_num // size_per_page + 1
return True, f"历史记录如下:\n{contexts}\n第 {page} 页 | 共 {t_pages} 页\n*输入 /his 2 跳转到第 2 页", "his"
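Note that `total_num // size_per_page + 1` over-counts by one page whenever `total_num` is an exact multiple of the page size (the removed code above handled that case with an explicit modulo check). An integer ceiling division covers both cases in one expression; this is a suggested fix, not what the diff contains:

```python
def total_pages(total_num: int, size_per_page: int) -> int:
    # Ceiling division without floats: equivalent to
    # math.ceil(total_num / size_per_page), with a minimum of one page.
    return max(1, -(-total_num // size_per_page))
```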
def token(self, session_id: str):
if self.provider is None:
return False, "未启用 OpenAI 官方 API", "token"
return True, f"会话的token数: {self.provider.get_user_usage_tokens(self.provider.session_dict[session_id])}\n系统最大缓存token数: {self.provider.max_tokens}", "token"
def gpt(self):
if self.provider is None:
return False, "未启用 OpenAI 官方 API", "gpt"
return True, f"OpenAI GPT配置:\n {self.provider.chatGPT_configs}", "gpt"
def status(self):
def status(self, session_id: str):
if self.provider is None:
return False, "未启用 OpenAI 官方 API", "status"
chatgpt_cfg_str = ""
key_stat = self.provider.get_key_stat()
index = 1
max = 9000000
gg_count = 0
total = 0
tag = ''
for key in key_stat.keys():
sponsor = ''
total += key_stat[key]['used']
if key_stat[key]['exceed']:
gg_count += 1
continue
if 'sponsor' in key_stat[key]:
sponsor = key_stat[key]['sponsor']
chatgpt_cfg_str += f" |-{index}: {key[-8:]} {key_stat[key]['used']}/{max} {sponsor}{tag}\n"
index += 1
return True, f"⭐使用情况({str(gg_count)}个已用):\n{chatgpt_cfg_str}", "status"
keys_data = self.provider.get_keys_data()
ret = "OpenAI Key"
for k in keys_data:
status = "🟢" if keys_data[k] else "🔴"
ret += "\n|- " + k[:8] + " " + status
def key(self, message: str):
if self.provider is None:
return False, "未启用 OpenAI 官方 API", "reset"
l = message.split(" ")
if len(l) == 1:
msg = "感谢您赞助key!key为官方API使用,请以以下格式赞助:\n/key xxxxx"
return True, msg, "key"
key = l[1]
if self.provider.check_key(key):
self.provider.append_key(key)
return True, f"*★,°*:.☆( ̄▽ ̄)/$:*.°★* 。\n该Key被验证为有效。感谢你的赞助~", "key"
else:
return True, "该Key被验证为无效。也许是输入错误了或者重试。", "key"
conf = self.provider.get_configs()
ret += "\n当前模型:" + conf['model']
if conf['model'] in MODELS:
ret += "\n最大上下文窗口:" + str(MODELS[conf['model']]) + " tokens"
if session_id in self.provider.session_memory and len(self.provider.session_memory[session_id]):
ret += "\n你的会话上下文:" + str(self.provider.session_memory[session_id][-1]['usage_tokens']) + " tokens"
return True, ret, "status"
async def switch(self, message: str):
'''
@@ -160,14 +178,13 @@ class CommandOpenAIOfficial(Command):
return True, ret, "switch"
elif len(l) == 2:
try:
key_stat = self.provider.get_key_stat()
key_stat = self.provider.get_keys_data()
index = int(l[1])
if index > len(key_stat) or index < 1:
return True, "账号序号不合法。", "switch"
else:
try:
new_key = list(key_stat.keys())[index-1]
ret = await self.provider.check_key(new_key)
self.provider.set_key(new_key)
except BaseException as e:
return True, "账号切换失败,原因: " + str(e), "switch"
@@ -216,58 +233,19 @@ class CommandOpenAIOfficial(Command):
'name': ps,
'prompt': personalities[ps]
}
self.provider.session_dict[session_id] = []
new_record = {
"user": {
"role": "user",
"content": personalities[ps],
},
"AI": {
"role": "assistant",
"content": "好的,接下来我会扮演这个角色。"
},
'type': "personality",
'usage_tokens': 0,
'single-tokens': 0
}
self.provider.session_dict[session_id].append(new_record)
self.personality_str = message
self.provider.personality_set(ps, session_id)
return True, f"人格{ps}已设置。", "set"
else:
self.provider.curr_personality = {
'name': '自定义人格',
'prompt': ps
}
new_record = {
"user": {
"role": "user",
"content": ps,
},
"AI": {
"role": "assistant",
"content": "好的,接下来我会扮演这个角色。"
},
'type': "personality",
'usage_tokens': 0,
'single-tokens': 0
}
self.provider.session_dict[session_id] = []
self.provider.session_dict[session_id].append(new_record)
self.personality_str = message
self.provider.personality_set(ps, session_id)
return True, f"自定义人格已设置。 \n人格信息: {ps}", "set"
async def draw(self, message):
async def draw(self, message: str):
if self.provider is None:
return False, "未启用 OpenAI 官方 API", "draw"
if message.startswith("/"):
message = message[2:]
elif message.startswith(""):
message = message[1:]
try:
# 画图模式传回3个参数
img_url = await self.provider.image_chat(message)
message = message.removeprefix("/").removeprefix("")
img_url = await self.provider.image_generate(message)
return True, img_url, "draw"
except Exception as e:
if 'exceeded' in str(e):
return f"OpenAI API错误。原因:\n{str(e)} \n超额了。可自己搭建一个机器人(Github仓库:QQChannelChatGPT)"
return False, f"图片生成失败: {e}", "draw"


@@ -1,132 +0,0 @@
from model.command.command import Command
from model.provider.rev_chatgpt import ProviderRevChatGPT
from cores.qqbot.personality import personalities
from cores.qqbot.types import GlobalObject
class CommandRevChatGPT(Command):
def __init__(self, provider: ProviderRevChatGPT, global_object: GlobalObject):
self.provider = provider
self.global_object = global_object
self.personality_str = ""
super().__init__(provider, global_object)
async def check_command(self,
message: str,
session_id: str,
role: str,
platform: str,
message_obj):
self.platform = platform
hit, res = await super().check_command(
message,
session_id,
role,
platform,
message_obj
)
if hit:
return True, res
if self.command_start_with(message, "help", "帮助"):
return True, await self.help()
elif self.command_start_with(message, "reset"):
return True, self.reset(session_id, message)
elif self.command_start_with(message, "update"):
return True, self.update(message, role)
elif self.command_start_with(message, "set"):
return True, self.set(message, session_id)
elif self.command_start_with(message, "switch"):
return True, self.switch(message, session_id)
return False, None
def reset(self, session_id, message: str):
l = message.split(" ")
if len(l) == 1:
self.provider.forget(session_id)
return True, "重置完毕。", "reset"
if len(l) == 2 and l[1] == "p":
self.provider.forget(session_id)
ret = self.provider.text_chat(self.personality_str)
return True, f"重置完毕(保留人格)。\n\n{ret}", "reset"
def set(self, message: str, session_id: str):
l = message.split(" ")
if len(l) == 1:
return True, f"设置人格: \n/set 人格名或人格文本。例如/set 编剧\n人格列表: /set list\n人格详细信息: \
/set view 人格名\n重置会话(清除人格): /reset\n重置会话(保留人格): /reset p", "set"
elif l[1] == "list":
msg = "人格列表:\n"
for key in personalities.keys():
msg += f" |-{key}\n"
msg += '\n\n*输入/set view 人格名查看人格详细信息'
msg += '\n*不定时更新人格库,请及时更新本项目。'
return True, msg, "set"
elif l[1] == "view":
if len(l) == 2:
return True, "请输入/set view 人格名", "set"
ps = l[2].strip()
if ps in personalities:
msg = f"人格【{ps}】详细信息:\n"
msg += f"{personalities[ps]}\n"
else:
msg = f"人格【{ps}】不存在。"
return True, msg, "set"
else:
ps = l[1].strip()
if ps in personalities:
self.reset(session_id, "reset")
self.personality_str = personalities[ps]
ret = self.provider.text_chat(self.personality_str, session_id)
return True, f"人格【{ps}】已设置。\n\n{ret}", "set"
else:
self.reset(session_id, "reset")
self.personality_str = ps
ret = self.provider.text_chat(ps, session_id)
return True, f"人格信息已设置。\n\n{ret}", "set"
def switch(self, message: str, session_id: str):
'''
切换账号
'''
l = message.split(" ")
rev_chatgpt = self.provider.get_revchatgpt()
if len(l) == 1:
ret = "当前账号:\n"
index = 0
curr_ = None
for revstat in rev_chatgpt:
index += 1
ret += f"[{index}]. {revstat['id']}\n"
# if session_id in revstat['user']:
# curr_ = revstat['id']
for user in revstat['user']:
if session_id == user['id']:
curr_ = revstat['id']
break
if curr_ is None:
ret += "当前您未选择账号。输入/switch <账号序号>切换账号。"
else:
ret += f"当前您选择的账号为:{curr_}。输入/switch <账号序号>切换账号。"
return True, ret, "switch"
elif len(l) == 2:
try:
index = int(l[1])
if index > len(self.provider.rev_chatgpt) or index < 1:
return True, "账号序号不合法。", "switch"
else:
# pop
for revstat in self.provider.rev_chatgpt:
if session_id in revstat['user']:
revstat['user'].remove(session_id)
# append
self.provider.rev_chatgpt[index - 1]['user'].append(session_id)
return True, f"切换账号成功。当前账号为:{self.provider.rev_chatgpt[index - 1]['id']}", "switch"
except BaseException:
return True, "账号序号不合法。", "switch"
else:
return True, "参数过多。", "switch"
async def help(self):
commands = super().general_commands()
commands['set'] = '设置人格'
return True, await super().help_messager(commands, self.platform, self.global_object.cached_plugins), "help"


@@ -5,25 +5,33 @@ from nakuru import (
FriendMessage
)
import botpy.message
from cores.qqbot.types import MessageType, AstrBotMessage, MessageMember
from type.message import *
from typing import List, Union
import time
from util.general_utils import save_temp_img
import time, base64
# QQ官方消息类型转换
def qq_official_message_parse(message: List[BaseMessageComponent]):
plain_text = ""
image_path = None # only one img supported
for i in message:
if isinstance(i, Plain):
plain_text += i.text
elif isinstance(i, Image) and image_path == None:
if i.path is not None:
elif isinstance(i, Image) and not image_path:
if i.path:
image_path = i.path
elif i.file and i.file.startswith("base64://"):
img_data = base64.b64decode(i.file[9:])
image_path = save_temp_img(img_data)
else:
image_path = i.file
image_path = save_temp_img(i.file)
return plain_text, image_path
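The component walk above concatenates `Plain` text segments and keeps only the first `Image` (as the comment "only one img supported" notes). It can be illustrated with stand-in component classes; the real `Plain`/`Image` types come from nakuru and carry more fields than shown here:

```python
from dataclasses import dataclass

@dataclass
class Plain:
    text: str

@dataclass
class Image:
    path: str = None
    file: str = None

def parse_components(message):
    # Concatenate all plain-text segments; keep only the first image.
    plain_text, image_path = "", None
    for comp in message:
        if isinstance(comp, Plain):
            plain_text += comp.text
        elif isinstance(comp, Image) and image_path is None:
            image_path = comp.path or comp.file
    return plain_text, image_path
```

The real function additionally decodes `base64://`-prefixed image payloads to a temp file via `save_temp_img`.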
# QQ官方消息类型 2 AstrBotMessage
def qq_official_message_parse_rev(message: Union[botpy.message.Message, botpy.message.GroupMessage],
message_type: MessageType) -> AstrBotMessage:
abm = AstrBotMessage()
@@ -60,7 +68,8 @@ def qq_official_message_parse_rev(message: Union[botpy.message.Message, botpy.me
except:
abm.self_id = ""
plain_content = message.content.replace("<@!"+str(abm.self_id)+">", "").strip()
plain_content = message.content.replace(
"<@!"+str(abm.self_id)+">", "").strip()
msg.append(Plain(plain_content))
if message.attachments:
for i in message.attachments:
@@ -80,6 +89,7 @@ def qq_official_message_parse_rev(message: Union[botpy.message.Message, botpy.me
raise ValueError(f"Unknown message type: {message_type}")
return abm
def nakuru_message_parse_rev(message: Union[GuildMessage, GroupMessage, FriendMessage]) -> AstrBotMessage:
abm = AstrBotMessage()
abm.type = MessageType(message.type)


@@ -1,6 +1,7 @@
from dataclasses import dataclass
from typing import Union, Optional
@dataclass
class MessageResult():
result_message: Union[str, list]


@@ -14,34 +14,40 @@ class Platform():
初始化平台的各种接口
'''
self.message_handler = message_handler
self.cnt_receive = 0
self.cnt_reply = 0
pass
@abc.abstractmethod
async def handle_msg():
async def handle_msg(self):
'''
处理到来的消息
'''
self.cnt_receive += 1
pass
@abc.abstractmethod
async def reply_msg():
async def reply_msg(self):
'''
回复消息(被动发送)
'''
self.cnt_reply += 1
pass
@abc.abstractmethod
async def send_msg(target: Union[GuildMessage, GroupMessage, FriendMessage, str], message: Union[str, list]):
async def send_msg(self, target: Union[GuildMessage, GroupMessage, FriendMessage, str], message: Union[str, list]):
'''
发送消息(主动发送)
'''
self.cnt_reply += 1
pass
@abc.abstractmethod
async def send(target: Union[GuildMessage, GroupMessage, FriendMessage, str], message: Union[str, list]):
async def send(self, target: Union[GuildMessage, GroupMessage, FriendMessage, str], message: Union[str, list]):
'''
发送消息(主动发送)同 send_msg()
'''
self.cnt_reply += 1
pass
def parse_message_outline(self, message: Union[GuildMessage, GroupMessage, FriendMessage, str, list]) -> str:


@@ -1,5 +1,6 @@
from nakuru.entities.components import Plain, At, Image, Node
from util import general_utils as gu
from util.image_render.helper import text_to_image_base
from util.cmd_config import CmdConfig
import asyncio
from nakuru import (
@@ -11,11 +12,16 @@ from nakuru import (
Notify
)
from typing import Union
from type.types import GlobalObject
import time
from ._platfrom import Platform
from ._message_parse import nakuru_message_parse_rev
from cores.qqbot.types import MessageType, AstrBotMessage, MessageMember
from type.message import *
from SparkleLogging.utils.core import LogManager
from logging import Logger
logger: Logger = LogManager.GetLogger(log_name='astrbot-core')
class FakeSource:
@@ -25,7 +31,7 @@ class FakeSource:
class QQGOCQ(Platform):
def __init__(self, cfg: dict, message_handler: callable, global_object) -> None:
def __init__(self, cfg: dict, message_handler: callable, global_object: GlobalObject) -> None:
super().__init__(message_handler)
self.loop = asyncio.new_event_loop()
@@ -34,15 +40,8 @@ class QQGOCQ(Platform):
self.waiting = {}
self.cc = CmdConfig()
self.cfg = cfg
self.logger: gu.Logger = global_object.logger
try:
self.nick_qq = cfg['nick_qq']
except:
self.nick_qq = ["ai","!",""]
nick_qq = self.nick_qq
if isinstance(nick_qq, str):
nick_qq = [nick_qq]
self.context = global_object
self.unique_session = cfg['uniqueSessionMode']
self.pic_mode = cfg['qq_pic_mode']
@@ -106,9 +105,12 @@ class QQGOCQ(Platform):
self.client.run()
async def handle_msg(self, message: AstrBotMessage):
self.logger.log(f"{message.sender.nickname}/{message.sender.user_id} -> {self.parse_message_outline(message)}", tag="QQ_GOCQ")
await super().handle_msg()
logger.info(
f"{message.sender.nickname}/{message.sender.user_id} -> {self.parse_message_outline(message)}")
assert isinstance(message.raw_message, (GroupMessage, FriendMessage, GuildMessage))
assert isinstance(message.raw_message,
(GroupMessage, FriendMessage, GuildMessage))
is_group = message.type != MessageType.FRIEND_MESSAGE
# 判断是否响应消息
@@ -118,22 +120,23 @@ class QQGOCQ(Platform):
else:
for i in message.message:
if isinstance(i, At):
if message.type == "GuildMessage":
if i.qq == message.raw_message.user_id or i.qq == message.raw_message.self_tiny_id:
if message.type.value == "GuildMessage":
if str(i.qq) == str(message.raw_message.user_id) or str(i.qq) == str(message.raw_message.self_tiny_id):
resp = True
if message.type == "FriendMessage":
if i.qq == message.self_id:
if message.type.value == "FriendMessage":
if str(i.qq) == str(message.self_id):
resp = True
if message.type == "GroupMessage":
if i.qq == message.self_id:
if message.type.value == "GroupMessage":
if str(i.qq) == str(message.self_id):
resp = True
elif isinstance(i, Plain):
for nick in self.nick_qq:
elif isinstance(i, Plain) and self.context.nick:
for nick in self.context.nick:
if nick != '' and i.text.strip().startswith(nick):
resp = True
break
if not resp: return
if not resp:
return
# 解析 session_id
if self.unique_session or not is_group:
@@ -175,6 +178,7 @@ class QQGOCQ(Platform):
async def reply_msg(self,
message: Union[AstrBotMessage, GuildMessage, GroupMessage, FriendMessage],
result_message: list):
await super().reply_msg()
"""
插件开发者请使用send方法, 可以不用直接调用这个方法。
"""
@@ -185,7 +189,8 @@ class QQGOCQ(Platform):
res = result_message
self.logger.log(f"{source.user_id} <- {self.parse_message_outline(res)}", tag="QQ_GOCQ")
logger.info(
f"{source.user_id} <- {self.parse_message_outline(res)}")
if isinstance(source, int):
source = FakeSource("GroupMessage", source)
@@ -209,7 +214,8 @@ class QQGOCQ(Platform):
news.append(i)
plains_str = "".join(plains).strip()
if plains_str != "" and len(plains_str) > 50:
p = gu.create_markdown_image("".join(plains))
# p = gu.create_markdown_image("".join(plains))
p = await text_to_image_base(plains_str)
news.append(Image.fromFileSystem(p))
res = news
@@ -252,6 +258,7 @@ class QQGOCQ(Platform):
提供给插件的发送QQ消息接口。
参数说明:第一个参数可以是消息对象,也可以是QQ群号。第二个参数是消息内容,可以是消息链列表,也可以是纯文字信息
'''
await super().reply_msg()
try:
await self.reply_msg(message, result_message)
except BaseException as e:
@@ -263,22 +270,17 @@ class QQGOCQ(Platform):
'''
同 send_msg()
'''
await super().reply_msg()
await self.reply_msg(to, res)
def create_text_image(title: str, text: str, max_width=30, font_size=20):
async def create_text_image(text: str):
'''
文本转图片。
title: 标题
text: 文本内容
max_width: 文本宽度最大值默认30
font_size: 字体大小默认20
返回:文件路径
'''
try:
img = gu.word2img(title, text, max_width, font_size)
p = gu.save_temp_img(img)
return p
return await text_to_image_base(text)
except Exception as e:
raise e
@@ -311,4 +313,3 @@ class QQGOCQ(Platform):
return ret
except BaseException as e:
raise e


@@ -5,6 +5,8 @@ import botpy.message
import re
import asyncio
import aiohttp
import botpy.types
import botpy.types.message
from util import general_utils as gu
from botpy.types.message import Reference
@@ -15,9 +17,14 @@ from ._message_parse import(
qq_official_message_parse_rev,
qq_official_message_parse
)
from cores.qqbot.types import MessageType, AstrBotMessage, MessageMember
from type.message import *
from typing import Union, List
from nakuru.entities.components import BaseMessageComponent
from nakuru.entities.components import *
from util.image_render.helper import text_to_image_base
from SparkleLogging.utils.core import LogManager
from logging import Logger
logger: Logger = LogManager.GetLogger(log_name='astrbot-core')
# QQ 机器人官方框架
class botClient(Client):
@@ -37,9 +44,11 @@ class botClient(Client):
# 收到私聊消息
async def on_direct_message_create(self, message: botpy.message.DirectMessage):
# 转换层
abm = qq_official_message_parse_rev(message, MessageType.FRIEND_MESSAGE)
abm = qq_official_message_parse_rev(
message, MessageType.FRIEND_MESSAGE)
await self.platform.handle_msg(abm)
class QQOfficial(Platform):
def __init__(self, cfg: dict, message_handler: callable, global_object) -> None:
@@ -55,19 +64,16 @@ class QQOfficial(Platform):
self.token = cfg['qqbot']['token']
self.secret = cfg['qqbot_secret']
self.unique_session = cfg['uniqueSessionMode']
self.logger: gu.Logger = global_object.logger
qq_group = cfg['qqofficial_enable_group_message']
self.pic_mode = cfg['qq_pic_mode']
try:
if qq_group:
self.intents = botpy.Intents(
public_messages=True,
public_guild_messages=True,
direct_message=cfg['direct_message_mode']
)
self.client = botClient(
intents=self.intents,
bot_log=False
)
except BaseException:
else:
self.intents = botpy.Intents(
public_guild_messages=True,
direct_message=cfg['direct_message_mode']
@@ -76,6 +82,7 @@ class QQOfficial(Platform):
intents=self.intents,
bot_log=False
)
self.client.set_platform(self)
def run(self):
@@ -97,11 +104,14 @@ class QQOfficial(Platform):
)
async def handle_msg(self, message: AstrBotMessage):
assert isinstance(message.raw_message, (botpy.message.Message, botpy.message.GroupMessage, botpy.message.DirectMessage))
await super().handle_msg()
assert isinstance(message.raw_message, (botpy.message.Message,
botpy.message.GroupMessage, botpy.message.DirectMessage))
is_group = message.type != MessageType.FRIEND_MESSAGE
_t = "/私聊" if not is_group else ""
self.logger.log(f"{message.sender.nickname}({message.sender.user_id}{_t}) -> {self.parse_message_outline(message)}", tag="QQ_OFFICIAL")
logger.info(
f"{message.sender.nickname}({message.sender.user_id}{_t}) -> {self.parse_message_outline(message)}")
# 解析出 session_id
if self.unique_session or not is_group:
@@ -147,45 +157,72 @@ class QQOfficial(Platform):
'''
回复频道消息
'''
await super().reply_msg()
if isinstance(message, AstrBotMessage):
source = message.raw_message
else:
source = message
assert isinstance(source, (botpy.message.Message, botpy.message.GroupMessage, botpy.message.DirectMessage))
self.logger.log(f"{message.sender.nickname}({message.sender.user_id}) <- {self.parse_message_outline(res)}", tag="QQ_OFFICIAL")
assert isinstance(source, (botpy.message.Message,
botpy.message.GroupMessage, botpy.message.DirectMessage))
logger.info(
f"{message.sender.nickname}({message.sender.user_id}) <- {self.parse_message_outline(res)}")
plain_text = ''
image_path = ''
msg_ref = None
# if isinstance(res, list):
# plain_text, image_path = qq_official_message_parse(res)
# elif isinstance(res, str):
# plain_text = res
# if self.cfg['qq_pic_mode']:
# # 文本转图片,并且加上原来的图片
# if plain_text != '' or image_path != '':
# if image_path is not None and image_path != '':
# if image_path.startswith("http"):
# plain_text += "\n\n" + "![](" + image_path + ")"
# else:
# plain_text += "\n\n" + \
# "![](file:///" + image_path + ")"
# # image_path = gu.create_markdown_image("".join(plain_text))
# image_path = await text_to_image_base("".join(plain_text))
# plain_text = ""
# else:
# if image_path is not None and image_path != '':
# msg_ref = None
# if image_path.startswith("http"):
# async with aiohttp.ClientSession() as session:
# async with session.get(image_path) as response:
# if response.status == 200:
# image = PILImage.open(io.BytesIO(await response.read()))
# image_path = gu.save_temp_img(image)
if self.pic_mode:
plains = []
news = []
if isinstance(res, str):
res = [Plain(text=res, convert=False),]
for i in res:
if isinstance(i, Plain):
plains.append(i.text)
else:
news.append(i)
plains_str = "".join(plains).strip()
if plains_str and len(plains_str) > 50:
p = await text_to_image_base(plains_str, return_url=False)
with open(p, "rb") as f:
news.append(Image.fromBytes(f.read()))
res = news
if isinstance(res, list):
plain_text, image_path = qq_official_message_parse(res)
elif isinstance(res, str):
else:
plain_text = res
if self.cfg['qq_pic_mode']:
# 文本转图片,并且加上原来的图片
if plain_text != '' or image_path != '':
if image_path is not None and image_path != '':
if image_path.startswith("http"):
plain_text += "\n\n" + "![](" + image_path + ")"
else:
plain_text += "\n\n" + "![](file:///" + image_path + ")"
image_path = gu.create_markdown_image("".join(plain_text))
plain_text = ""
else:
if image_path is not None and image_path != '':
msg_ref = None
if image_path.startswith("http"):
async with aiohttp.ClientSession() as session:
async with session.get(image_path) as response:
if response.status == 200:
image = PILImage.open(io.BytesIO(await response.read()))
image_path = gu.save_temp_img(image)
if source is not None and image_path == '': # file_image与message_reference不能同时传入
msg_ref = Reference(message_id=source.id, ignore_get_message_error=False)
msg_ref = Reference(message_id=source.id,
ignore_get_message_error=False)
# 到这里,我们得到了 plain_text、image_path、msg_ref
data = {
@@ -202,7 +239,7 @@ class QQOfficial(Platform):
data['guild_id'] = source.guild_id
else:
raise ValueError(f"未知的消息类型: {message.type}")
if image_path != '':
if image_path:
data['file_image'] = image_path
try:
@@ -229,7 +266,8 @@ class QQOfficial(Platform):
data['content'] = str.join(" ", plain_text)
await self._send_wrapper(**data)
except BaseException as e:
plain_text = re.sub(r'(https|http)?:\/\/(\w|\.|\/|\?|\=|\&|\%)*\b', '[被隐藏的链接]', str(e), flags=re.MULTILINE)
plain_text = re.sub(
r'(https|http)?:\/\/(\w|\.|\/|\?|\=|\&|\%)*\b', '[被隐藏的链接]', str(e), flags=re.MULTILINE)
plain_text = plain_text.replace(".", "·")
data['content'] = plain_text
await self._send_wrapper(**data)
@@ -250,12 +288,14 @@ class QQOfficial(Platform):
elif 'channel_id' in kwargs:
# 频道消息
if 'file_image' in kwargs:
kwargs['file_image'] = kwargs['file_image'].replace("file://", "")
kwargs['file_image'] = kwargs['file_image'].replace(
"file://", "")
await self.client.api.post_message(**kwargs)
else:
# 频道私聊消息
if 'file_image' in kwargs:
kwargs['file_image'] = kwargs['file_image'].replace("file://", "")
kwargs['file_image'] = kwargs['file_image'].replace(
"file://", "")
await self.client.api.post_dms(**kwargs)
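`_send_wrapper` above strips the `file://` scheme before handing a local path to `post_message` / `post_dms` as `file_image`. A minimal standalone sketch of that normalization (the helper name is hypothetical):

```python
def strip_file_scheme(path: str) -> str:
    # Remove a leading file:// scheme so the API receives a plain filesystem path
    return path.replace("file://", "", 1)
```

Using `replace(..., 1)` only touches the first occurrence, so a path that merely contains the substring later is left intact.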
async def send_msg(self,


@@ -5,87 +5,109 @@ import time
import tiktoken
import threading
import traceback
import base64
from openai import AsyncOpenAI
from openai.types.images_response import ImagesResponse
from openai.types.chat.chat_completion import ChatCompletion
from openai._exceptions import *
from cores.database.conn import dbConn
from persist.session import dbConn
from model.provider.provider import Provider
from util import general_utils as gu
from util.cmd_config import CmdConfig
from util.general_utils import Logger
from SparkleLogging.utils.core import LogManager
from logging import Logger
from typing import List, Dict
logger: Logger = LogManager.GetLogger(log_name='astrbot-core')
abs_path = os.path.dirname(os.path.realpath(sys.argv[0])) + '/'
MODELS = {
"gpt-4o": 128000,
"gpt-4o-2024-05-13": 128000,
"gpt-4-turbo": 128000,
"gpt-4-turbo-2024-04-09": 128000,
"gpt-4-turbo-preview": 128000,
"gpt-4-0125-preview": 128000,
"gpt-4-1106-preview": 128000,
"gpt-4-vision-preview": 128000,
"gpt-4-1106-vision-preview": 128000,
"gpt-4": 8192,
"gpt-4-0613": 8192,
"gpt-4-32k": 32768,
"gpt-4-32k-0613": 32768,
"gpt-3.5-turbo-0125": 16385,
"gpt-3.5-turbo": 16385,
"gpt-3.5-turbo-1106": 16385,
"gpt-3.5-turbo-instruct": 4096,
"gpt-3.5-turbo-16k": 16385,
"gpt-3.5-turbo-0613": 16385,
"gpt-3.5-turbo-16k-0613": 16385,
}
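The `MODELS` table above maps each model name to its context-window size, and the code later reserves roughly 300 tokens of headroom for the completion before truncating prompts. A small sketch of that budget lookup (the helper name and the 4096 fallback for unknown models are assumptions):

```python
# Hypothetical subset of the MODELS table above
MODELS = {
    "gpt-4": 8192,
    "gpt-3.5-turbo": 16385,
}

COMPLETION_RESERVE = 300  # tokens kept free for the model's reply

def prompt_budget(model: str, default: int = 4096) -> int:
    # Look up the model's context window and reserve completion headroom
    window = MODELS.get(model, default)
    return window - COMPLETION_RESERVE
```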
class ProviderOpenAIOfficial(Provider):
def __init__(self, cfg):
self.cc = CmdConfig()
self.logger = Logger()
def __init__(self, cfg) -> None:
super().__init__()
self.key_list = []
# 如果 cfg['key'] 中有长度为 1 的字符串,那么是格式错误,直接报错
for key in cfg['key']:
if len(key) == 1:
raise BaseException("检查到了长度为 1 的Key。配置文件中的 openai.key 处的格式错误 (符号 - 的后面要加空格)。")
if cfg['key'] != '' and cfg['key'] != None:
self.key_list = cfg['key']
if len(self.key_list) == 0:
raise Exception("您打开了 OpenAI 模型服务,但是未填写 key。请前往填写。")
os.makedirs("data/openai", exist_ok=True)
self.key_stat = {}
for k in self.key_list:
self.key_stat[k] = {'exceed': False, 'used': 0}
self.cc = CmdConfig
self.key_data_path = "data/openai/keys.json"
self.api_keys = []
self.chosen_api_key = None
self.base_url = None
self.keys_data = {} # 记录超额
self.api_base = None
if 'api_base' in cfg and cfg['api_base'] != 'none' and cfg['api_base'] != '':
self.api_base = cfg['api_base']
self.logger.log(f"设置 api_base 为: {self.api_base}", tag="OpenAI")
if cfg['key']: self.api_keys = cfg['key']
if cfg['api_base']: self.base_url = cfg['api_base']
if not self.api_keys:
logger.warn("看起来你没有添加 OpenAI 的 API 密钥,OpenAI LLM 能力将不会启用。")
else:
self.chosen_api_key = self.api_keys[0]
for key in self.api_keys:
self.keys_data[key] = True
# 创建 OpenAI Client
self.client = AsyncOpenAI(
api_key=self.key_list[0],
base_url=self.api_base
api_key=self.chosen_api_key,
base_url=self.base_url
)
self.openai_model_configs: dict = cfg['chatGPTConfigs']
self.logger.log(f'加载 OpenAI Chat Configs: {self.openai_model_configs}', tag="OpenAI")
self.openai_configs = cfg
# 会话缓存
self.session_dict = {}
# 最大缓存token
self.max_tokens = cfg['total_tokens_limit']
# 历史记录持久化间隔时间
self.history_dump_interval = 20
self.enc = tiktoken.get_encoding("cl100k_base")
self.model_configs: Dict = cfg['chatGPTConfigs']
super().set_curr_model(self.model_configs['model'])
self.image_generator_model_configs: Dict = self.cc.get('openai_image_generate', None)
self.session_memory: Dict[str, List] = {} # 会话记忆
self.session_memory_lock = threading.Lock()
self.max_tokens = self.model_configs['max_tokens'] # 上下文窗口大小
self.tokenizer = tiktoken.get_encoding("cl100k_base") # todo: 根据 model 切换分词器
self.DEFAULT_PERSONALITY = {
"name": "default",
"prompt": "你是一个很有帮助的 AI 助手。"
}
self.curr_personality = self.DEFAULT_PERSONALITY
self.session_personality = {} # 记录了某个session是否已设置人格。
# 从 SQLite DB 读取历史记录
try:
db1 = dbConn()
for session in db1.get_all_session():
self.session_dict[session[0]] = json.loads(session[1])['data']
self.logger.log("读取历史记录成功。", tag="OpenAI")
self.session_memory_lock.acquire()
self.session_memory[session[0]] = json.loads(session[1])['data']
self.session_memory_lock.release()
except BaseException as e:
self.logger.log("读取历史记录失败,但不影响使用。", level=gu.LEVEL_ERROR, tag="OpenAI")
logger.warn(f"读取 OpenAI LLM 对话历史记录失败:{e}。仍可正常使用。")
# 创建转储定时器线程
# 定时保存历史记录
threading.Thread(target=self.dump_history, daemon=True).start()
# 人格
self.curr_personality = {}
# 转储历史记录
def dump_history(self):
'''
转储历史记录
'''
time.sleep(10)
db = dbConn()
while True:
try:
# print("转储历史记录...")
for key in self.session_dict:
data = self.session_dict[key]
for key in self.session_memory:
data = self.session_memory[key]
data_json = {
'data': data
}
@@ -93,236 +115,100 @@ class ProviderOpenAIOfficial(Provider):
db.update_session(key, json.dumps(data_json))
else:
db.insert_session(key, json.dumps(data_json))
# print("转储历史记录完毕")
logger.debug("已保存 OpenAI 会话历史记录")
except BaseException as e:
print(e)
# 每隔10分钟转储一次
time.sleep(10*self.history_dump_interval)
finally:
time.sleep(10*60)
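`dump_history` above runs in a daemon thread and serializes the session map on a fixed interval. A condensed, hypothetical sketch of the same pattern, with callbacks standing in for `dbConn`:

```python
import json
import threading
import time

def start_periodic_dump(get_state, save, interval: float = 600.0):
    # Daemon thread that periodically serializes state via the save callback,
    # mirroring dump_history's loop (get_state/save stand in for dbConn)
    def loop():
        while True:
            try:
                save(json.dumps(get_state()))
            except Exception as e:
                print(e)  # keep the loop alive on serialization errors
            time.sleep(interval)
    t = threading.Thread(target=loop, daemon=True)
    t.start()
    return t
```

Because the thread is a daemon, it dies with the process and never blocks shutdown, which matches the `daemon=True` choice in the constructor above.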
def personality_set(self, default_personality: dict, session_id: str):
if not default_personality: return
if session_id not in self.session_memory:
self.session_memory[session_id] = []
self.curr_personality = default_personality
self.session_personality = {} # 重置
encoded_prompt = self.tokenizer.encode(default_personality['prompt'])
tokens_num = len(encoded_prompt)
model = self.model_configs['model']
if model in MODELS and tokens_num > MODELS[model] - 500:
default_personality['prompt'] = self.tokenizer.decode(encoded_prompt[:MODELS[model] - 500])
new_record = {
"user": {
"role": "user",
"role": "system",
"content": default_personality['prompt'],
},
"AI": {
"role": "assistant",
"content": "好的,接下来我会扮演这个角色。"
},
'type': "personality",
'usage_tokens': 0,
'single-tokens': 0
'usage_tokens': 0, # 到该条目的总 token 数
'single-tokens': 0 # 该条目的 token 数
}
self.session_dict[session_id].append(new_record)
async def text_chat(self, prompt,
session_id = None,
image_url = None,
function_call=None,
extra_conf: dict = None,
default_personality: dict = None):
if session_id is None:
session_id = "unknown"
if "unknown" in self.session_dict:
del self.session_dict["unknown"]
# 会话机制
if session_id not in self.session_dict:
self.session_dict[session_id] = []
self.session_memory[session_id].append(new_record)
if len(self.session_dict[session_id]) == 0:
# 设置默认人格
if default_personality is not None:
self.personality_set(default_personality, session_id)
async def encode_image_bs64(self, image_url: str) -> str:
'''
将图片转换为 base64
'''
if image_url.startswith("http"):
image_url = await gu.download_image_by_url(image_url)
# 使用 tictoken 截断消息
_encoded_prompt = self.enc.encode(prompt)
if self.openai_model_configs['max_tokens'] < len(_encoded_prompt):
prompt = self.enc.decode(_encoded_prompt[:int(self.openai_model_configs['max_tokens']*0.80)])
self.logger.log(f"注意,有一部分 prompt 文本由于超出 token 限制而被截断。", level=gu.LEVEL_WARNING, tag="OpenAI")
with open(image_url, "rb") as f:
image_bs64 = base64.b64encode(f.read()).decode()
return "data:image/jpeg;base64," + image_bs64
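`encode_image_bs64` above reads a local file and inlines it as a base64 data URL. The core encoding step, as a standalone sketch (the `image/jpeg` default mirrors the code above; the helper name is hypothetical):

```python
import base64

def to_data_url(image_bytes: bytes, mime: str = "image/jpeg") -> str:
    # Base64-encode raw image bytes into a data: URL the chat API can inline
    b64 = base64.b64encode(image_bytes).decode()
    return f"data:{mime};base64,{b64}"
```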
cache_data_list, new_record, req = self.wrap(prompt, session_id, image_url)
self.logger.log(f"cache: {str(cache_data_list)}", level=gu.LEVEL_DEBUG, tag="OpenAI")
self.logger.log(f"request: {str(req)}", level=gu.LEVEL_DEBUG, tag="OpenAI")
retry = 0
response = None
err = ''
async def retrieve_context(self, session_id: str):
'''
根据 session_id 获取保存的 OpenAI 格式的上下文
'''
if session_id not in self.session_memory:
raise Exception("会话 ID 不存在")
# 截断倍率
truncate_rate = 0.75
use_gpt4v = False
for i in req:
if isinstance(i['content'], list):
use_gpt4v = True
break
if image_url is not None:
use_gpt4v = True
if use_gpt4v:
conf = self.openai_model_configs.copy()
conf['model'] = 'gpt-4-vision-preview'
else:
conf = self.openai_model_configs
if extra_conf is not None:
conf.update(extra_conf)
while retry < 10:
try:
if function_call is None:
response = await self.client.chat.completions.create(
messages=req,
**conf
)
else:
response = await self.client.chat.completions.create(
messages=req,
tools = function_call,
**conf
)
break
except Exception as e:
traceback.print_exc()
if 'Invalid content type. image_url is only supported by certain models.' in str(e):
raise e
if 'You exceeded' in str(e) or 'Billing hard limit has been reached' in str(e) or 'No API key provided' in str(e) or 'Incorrect API key provided' in str(e):
self.logger.log("当前 Key 已超额或异常, 正在切换", level=gu.LEVEL_WARNING, tag="OpenAI")
self.key_stat[self.client.api_key]['exceed'] = True
is_switched = self.handle_switch_key()
if not is_switched:
raise e
retry -= 1
elif 'maximum context length' in str(e):
self.logger.log("token 超限, 清空对应缓存,并进行消息截断", tag="OpenAI")
self.session_dict[session_id] = []
prompt = prompt[:int(len(prompt)*truncate_rate)]
truncate_rate -= 0.05
cache_data_list, new_record, req = self.wrap(prompt, session_id)
elif 'Limit: 3 / min. Please try again in 20s.' in str(e) or "OpenAI response error" in str(e):
time.sleep(30)
# 转换为 openai 要求的格式
context = []
is_lvm = await self.is_lvm()
for record in self.session_memory[session_id]:
if "user" in record and record['user']:
if not is_lvm and "content" in record['user'] and isinstance(record['user']['content'], list):
logger.warn(f"由于当前模型 {self.model_configs['model']} 不支持视觉,将忽略上下文中的图片输入。如果一直弹出此警告,可以尝试 reset 指令。")
continue
else:
self.logger.log(str(e), level=gu.LEVEL_ERROR, tag="OpenAI")
time.sleep(2)
err = str(e)
retry += 1
if retry >= 10:
self.logger.log(r"如果报错, 且您的机器在中国大陆内, 请确保您的电脑已经设置好代理软件(梯子), 并在配置文件设置了系统代理地址。详见 https://github.com/Soulter/QQChannelChatGPT/wiki", tag="OpenAI")
raise BaseException("连接出错: "+str(err))
assert isinstance(response, ChatCompletion)
self.logger.log(f"OPENAI RESPONSE: {response.usage}", level=gu.LEVEL_DEBUG, tag="OpenAI")
context.append(record['user'])
if "AI" in record and record['AI']:
context.append(record['AI'])
# 结果分类
choice = response.choices[0]
if choice.message.content != None:
# 文本形式
chatgpt_res = str(choice.message.content).strip()
elif choice.message.tool_calls != None and len(choice.message.tool_calls) > 0:
# tools call (function calling)
return choice.message.tool_calls[0].function
return context
self.key_stat[self.client.api_key]['used'] += response.usage.total_tokens
current_usage_tokens = response.usage.total_tokens
async def is_lvm(self):
'''
是否是 LVM
'''
return self.model_configs['model'].startswith("gpt-4")
# 超过指定tokens 尽可能的保留最多的条目直到小于max_tokens
if current_usage_tokens > self.max_tokens:
t = current_usage_tokens
index = 0
while t > self.max_tokens:
if index >= len(cache_data_list):
break
# 保留人格信息
if cache_data_list[index]['type'] != 'personality':
t -= int(cache_data_list[index]['single_tokens'])
del cache_data_list[index]
else:
index += 1
# 删除完后更新相关字段
self.session_dict[session_id] = cache_data_list
async def get_models(self):
'''
获取所有模型
'''
models = await self.client.models.list()
logger.info(f"OpenAI 模型列表:{models}")
return models
# 添加新条目进入缓存的prompt
new_record['AI'] = {
'role': 'assistant',
'content': chatgpt_res,
async def assemble_context(self, session_id: str, prompt: str, image_url: str = None):
'''
组装上下文,并且根据当前上下文窗口大小截断
'''
if session_id not in self.session_memory:
raise Exception("会话 ID 不存在")
tokens_num = len(self.tokenizer.encode(prompt))
previous_total_tokens_num = 0 if not self.session_memory[session_id] else self.session_memory[session_id][-1]['usage_tokens']
message = {
"usage_tokens": previous_total_tokens_num + tokens_num,
"single_tokens": tokens_num,
"AI": None
}
new_record['usage_tokens'] = current_usage_tokens
if len(cache_data_list) > 0:
new_record['single_tokens'] = current_usage_tokens - int(cache_data_list[-1]['usage_tokens'])
else:
new_record['single_tokens'] = current_usage_tokens
cache_data_list.append(new_record)
self.session_dict[session_id] = cache_data_list
return chatgpt_res
async def image_chat(self, prompt, img_num = 1, img_size = "1024x1024"):
retry = 0
image_url = ''
image_generate_configs = self.cc.get("openai_image_generate", None)
while retry < 5:
try:
response: ImagesResponse = await self.client.images.generate(
prompt=prompt,
**image_generate_configs
)
image_url = []
for i in range(img_num):
image_url.append(response.data[i].url)
break
except Exception as e:
self.logger.log(str(e), level=gu.LEVEL_ERROR)
if 'You exceeded' in str(e) or 'Billing hard limit has been reached' in str(
e) or 'No API key provided' in str(e) or 'Incorrect API key provided' in str(e):
self.logger.log("当前 Key 已超额或者不正常, 正在切换", level=gu.LEVEL_WARNING, tag="OpenAI")
self.key_stat[self.client.api_key]['exceed'] = True
is_switched = self.handle_switch_key()
if not is_switched:
raise e
elif 'Your request was rejected as a result of our safety system.' in str(e):
self.logger.log("您的请求被 OpenAI 安全系统拒绝, 请稍后再试", level=gu.LEVEL_WARNING, tag="OpenAI")
raise e
else:
retry += 1
if retry >= 5:
raise BaseException("连接超时")
return image_url
async def forget(self, session_id = None) -> bool:
if session_id is None:
return False
self.session_dict[session_id] = []
return True
def get_prompts_by_cache_list(self, cache_data_list, divide=False, paging=False, size=5, page=1):
'''
获取缓存的会话
'''
prompts = ""
if paging:
page_begin = (page-1)*size
page_end = page*size
if page_begin < 0:
page_begin = 0
if page_end > len(cache_data_list):
page_end = len(cache_data_list)
cache_data_list = cache_data_list[page_begin:page_end]
for item in cache_data_list:
prompts += str(item['user']['role']) + ":\n" + str(item['user']['content']) + "\n"
prompts += str(item['AI']['role']) + ":\n" + str(item['AI']['content']) + "\n"
if divide:
prompts += "----------\n"
return prompts
def wrap(self, prompt, session_id, image_url = None):
if image_url is not None:
prompt = [
if image_url:
user_content = {
"role": "user",
"content": [
{
"type": "text",
"text": prompt
@@ -330,73 +216,285 @@ class ProviderOpenAIOfficial(Provider):
{
"type": "image_url",
"image_url": {
"url": image_url
"url": await self.encode_image_bs64(image_url)
}
}
]
# 获得缓存信息
context = self.session_dict[session_id]
new_record = {
"user": {
"role": "user",
"content": prompt,
},
"AI": {},
'type': "common",
'usage_tokens': 0,
}
req_list = []
for i in context:
if 'user' in i:
req_list.append(i['user'])
if 'AI' in i:
req_list.append(i['AI'])
req_list.append(new_record['user'])
return context, new_record, req_list
else:
user_content = {
"role": "user",
"content": prompt
}
def handle_switch_key(self):
is_all_exceed = True
for key in self.key_stat:
if key == None or self.key_stat[key]['exceed']:
continue
is_all_exceed = False
self.client.api_key = key
self.logger.log(f"切换到 Key: {key}(已使用 token: {self.key_stat[key]['used']})", level=gu.LEVEL_INFO, tag="OpenAI")
message["user"] = user_content
self.session_memory[session_id].append(message)
# 根据 模型的上下文窗口 淘汰掉多余的记录
curr_model = self.model_configs['model']
if curr_model in MODELS:
maxium_tokens_num = MODELS[curr_model] - 300 # 至少预留 300 给 completion
# if message['usage_tokens'] > maxium_tokens_num:
# 淘汰多余的记录,使得最终的 usage_tokens 不超过 maxium_tokens_num - 300
# contexts = self.session_memory[session_id]
# need_to_remove_idx = 0
# freed_tokens_num = contexts[0]['single-tokens']
# while freed_tokens_num < message['usage_tokens'] - maxium_tokens_num:
# need_to_remove_idx += 1
# freed_tokens_num += contexts[need_to_remove_idx]['single-tokens']
# # 更新之后的所有记录的 usage_tokens
# for i in range(len(contexts)):
# if i > need_to_remove_idx:
# contexts[i]['usage_tokens'] -= freed_tokens_num
# logger.debug(f"淘汰上下文记录 {need_to_remove_idx+1} 条,释放 {freed_tokens_num} 个 token。当前上下文总 token 为 {contexts[-1]['usage_tokens']}。")
# self.session_memory[session_id] = contexts[need_to_remove_idx+1:]
while len(self.session_memory[session_id]) and self.session_memory[session_id][-1]['usage_tokens'] > maxium_tokens_num:
self.pop_record(session_id)
async def pop_record(self, session_id: str, pop_system_prompt: bool = False):
'''
弹出第一条记录
'''
if session_id not in self.session_memory:
raise Exception("会话 ID 不存在")
if len(self.session_memory[session_id]) == 0:
return None
for i in range(len(self.session_memory[session_id])):
# 检查是否是 system prompt
if not pop_system_prompt and self.session_memory[session_id][i]['user']['role'] == "system":
# 如果只有一个 system prompt才不删掉
f = False
for j in range(i+1, len(self.session_memory[session_id])):
if self.session_memory[session_id][j]['user']['role'] == "system":
f = True
break
if is_all_exceed:
self.logger.log("所有 Key 已超额", level=gu.LEVEL_CRITICAL, tag="OpenAI")
if not f:
continue
record = self.session_memory[session_id].pop(i)
# 兼容 'single-tokens' 与 'single_tokens' 两种写法
freed = record.get('single-tokens', record.get('single_tokens', 0))
# 只更新被弹出条目之后的记录的 usage_tokens,之前的条目不受影响
for j in range(i, len(self.session_memory[session_id])):
self.session_memory[session_id][j]['usage_tokens'] -= freed
break
logger.debug(f"淘汰上下文记录 1 条,释放 {freed} 个 token。")
return record
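Each context record above carries its own token cost (`single-tokens`) and the running total up to it (`usage_tokens`); evicting a record therefore means rebasing every later cumulative counter. A minimal sketch of that bookkeeping with plain dicts:

```python
def evict_oldest(records: list) -> dict:
    # Drop the first record and subtract its token cost from the
    # cumulative counters of everything that follows
    popped = records.pop(0)
    for r in records:
        r["usage_tokens"] -= popped["single-tokens"]
    return popped
```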
async def text_chat(self,
prompt: str,
session_id: str,
image_url: str = None,
tools: List = None,
extra_conf: Dict = None,
**kwargs
) -> str:
super().accu_model_stat()
if not session_id:
session_id = "unknown"
if "unknown" in self.session_memory:
del self.session_memory["unknown"]
if session_id not in self.session_memory:
self.session_memory[session_id] = []
if session_id not in self.session_personality or not self.session_personality[session_id]:
self.personality_set(self.curr_personality, session_id)
self.session_personality[session_id] = True
# 如果 prompt 超过了最大窗口,截断。
# 1. 可以保证之后 pop 的时候不会出现问题
# 2. 可以保证不会超过最大 token 数
_encoded_prompt = self.tokenizer.encode(prompt)
curr_model = self.model_configs['model']
if curr_model in MODELS and len(_encoded_prompt) > MODELS[curr_model] - 300:
_encoded_prompt = _encoded_prompt[:MODELS[curr_model] - 300]
prompt = self.tokenizer.decode(_encoded_prompt)
# 组装上下文,并且根据当前上下文窗口大小截断
await self.assemble_context(session_id, prompt, image_url)
# 获取上下文openai 格式
contexts = await self.retrieve_context(session_id)
conf = self.model_configs
if extra_conf: conf.update(extra_conf)
# start request
retry = 0
rate_limit_retry = 0
while retry < 3 or rate_limit_retry < 5:
logger.debug(conf)
logger.debug(contexts)
if tools:
completion_coro = self.client.chat.completions.create(
messages=contexts,
tools=tools,
**conf
)
else:
completion_coro = self.client.chat.completions.create(
messages=contexts,
**conf
)
try:
completion = await completion_coro
break
except AuthenticationError as e:
api_key = self.chosen_api_key[:10] + "..."  # 仅展示 Key 前缀,避免泄露完整密钥
logger.error(f"OpenAI API Key {api_key} 验证错误。详细原因:{e}。正在切换到下一个可用的 Key如果有的话")
self.keys_data[self.chosen_api_key] = False
ok = await self.switch_to_next_key()
if ok: continue
else: raise Exception("所有 OpenAI API Key 目前都不可用。")
except BadRequestError as e:
logger.warn(f"OpenAI 请求异常:{e}")
if "image_url is only supported by certain models." in str(e):
raise Exception(f"当前模型 { self.model_configs['model'] } 不支持图片输入,请更换模型。")
retry += 1
except RateLimitError as e:
if "You exceeded your current quota" in str(e):
self.keys_data[self.chosen_api_key] = False
ok = await self.switch_to_next_key()
if ok: continue
else: raise Exception("所有 OpenAI API Key 目前都不可用。")
logger.error(f"OpenAI API Key {self.chosen_api_key} 达到请求速率限制或者官方服务器当前超载。详细原因:{e}")
await self.switch_to_next_key()
rate_limit_retry += 1
time.sleep(1)
except Exception as e:
retry += 1
if retry >= 3:
logger.error(traceback.format_exc())
raise Exception(f"OpenAI 请求失败:{e}。重试次数已达到上限。")
if "maximum context length" in str(e):
logger.warn(f"OpenAI 请求失败:{e}。上下文长度超过限制。尝试弹出最早的记录然后重试。")
await self.pop_record(session_id)
logger.warning(f"OpenAI 请求失败:{e}。重试第 {retry} 次。")
time.sleep(1)
assert isinstance(completion, ChatCompletion)
logger.debug(f"openai completion: {completion.usage}")
choice = completion.choices[0]
usage_tokens = completion.usage.total_tokens
completion_tokens = completion.usage.completion_tokens
self.session_memory[session_id][-1]['usage_tokens'] = usage_tokens
self.session_memory[session_id][-1]['single_tokens'] += completion_tokens
if choice.message.content:
# 返回文本
completion_text = str(choice.message.content).strip()
elif choice.message.tool_calls:
# tools call (function calling)
return choice.message.tool_calls[0].function
self.session_memory[session_id][-1]['AI'] = {
"role": "assistant",
"content": completion_text
}
return completion_text
async def switch_to_next_key(self):
'''
切换到下一个 API Key
'''
if not self.api_keys:
logger.error("OpenAI API Key 不存在。")
return False
for key in self.keys_data:
if self.keys_data[key]:
# 没超额
self.chosen_api_key = key
self.client.api_key = key
logger.info(f"OpenAI 切换到 API Key {key[:10]}... 成功。")
return True
return False
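`switch_to_next_key` above scans `keys_data` for the first key still marked usable. The same selection, factored into a pure function (the name is hypothetical):

```python
from typing import Optional

def next_usable_key(keys_data: dict) -> Optional[str]:
    # Linear scan for the first key not yet marked exceeded/invalid
    for key, usable in keys_data.items():
        if usable:
            return key
    return None
```

Because dicts preserve insertion order, keys are tried in the order they were configured.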
async def image_generate(self, prompt: str, session_id: str = None, **kwargs) -> str:
'''
生成图片
'''
retry = 0
conf = self.image_generator_model_configs
if not conf:
logger.error("OpenAI 图片生成模型配置不存在。")
raise Exception("OpenAI 图片生成模型配置不存在。")
super().accu_model_stat(model=conf['model'])
while retry < 3:
try:
images_response = await self.client.images.generate(
prompt=prompt,
**conf
)
image_url = images_response.data[0].url
return image_url
except Exception as e:
retry += 1
if retry >= 3:
logger.error(traceback.format_exc())
raise Exception(f"OpenAI 图片生成请求失败:{e}。重试次数已达到上限。")
logger.warning(f"OpenAI 图片生成请求失败:{e}。重试第 {retry} 次。")
time.sleep(1)
async def forget(self, session_id=None, keep_system_prompt: bool=False) -> bool:
if session_id is None: return False
self.session_memory[session_id] = []
if keep_system_prompt:
self.personality_set(self.curr_personality, session_id)
else:
self.curr_personality = self.DEFAULT_PERSONALITY
return True
def dump_contexts_page(self, session_id: str, size=5, page=1,):
'''
获取缓存的会话
'''
# contexts_str = ""
# for i, key in enumerate(self.session_memory):
# if i < (page-1)*size or i >= page*size:
# continue
# contexts_str += f"Session ID: {key}\n"
# for record in self.session_memory[key]:
# if "user" in record:
# contexts_str += f"User: {record['user']['content']}\n"
# if "AI" in record:
# contexts_str += f"AI: {record['AI']['content']}\n"
# contexts_str += "---\n"
contexts_str = ""
if session_id in self.session_memory:
for record in self.session_memory[session_id]:
if "user" in record and record['user']:
text = record['user']['content'][:100] + "..." if len(record['user']['content']) > 100 else record['user']['content']
contexts_str += f"User: {text}\n"
if "AI" in record and record['AI']:
text = record['AI']['content'][:100] + "..." if len(record['AI']['content']) > 100 else record['AI']['content']
contexts_str += f"Assistant: {text}\n"
else:
return "会话 ID 不存在。", 0
return contexts_str, len(self.session_memory[session_id])
def set_model(self, model: str):
self.model_configs['model'] = model
self.cc.put_by_dot_str("openai.chatGPTConfigs.model", model)
super().set_curr_model(model)
def get_configs(self):
return self.openai_configs
return self.model_configs
def get_key_stat(self):
return self.key_stat
def get_key_list(self):
return self.key_list
def get_keys_data(self):
return self.keys_data
def get_curr_key(self):
return self.client.api_key
return self.chosen_api_key
def set_key(self, key):
self.client.api_key = key
# 添加key
def append_key(self, key, sponsor):
self.api_keys.append(key)
self.keys_data[key] = True
# 检查key是否可用
async def check_key(self, key):
client_ = AsyncOpenAI(
api_key=key,
base_url=self.base_url
)
messages = [{"role": "user", "content": "please just echo `test`"}]
await client_.chat.completions.create(
messages=messages,
**self.model_configs
)
return True


@@ -1,9 +1,32 @@
from collections import defaultdict
class Provider:
def __init__(self) -> None:
self.model_stat = defaultdict(int) # 用于记录 LLM Model 使用数据
self.curr_model_name = "unknown"
def reset_model_stat(self):
self.model_stat.clear()
def set_curr_model(self, model_name: str):
self.curr_model_name = model_name
def get_curr_model(self):
'''
返回当前正在使用的 LLM
'''
return self.curr_model_name
def accu_model_stat(self, model: str = None):
if not model:
model = self.get_curr_model()
self.model_stat[model] += 1
async def text_chat(self,
prompt: str,
session_id: str,
image_url: None,
function_call: None,
image_url: str = None,
tools: list = None,
extra_conf: dict = None,
default_personality: dict = None,
**kwargs) -> str:
@@ -14,11 +37,11 @@ class Provider:
[optional]
image_url: 图片url识图
function_call: 函数调用
tools: 函数调用工具
extra_conf: 额外配置
default_personality: 默认人格
'''
raise NotImplementedError
raise NotImplementedError()
async def image_generate(self, prompt, session_id, **kwargs) -> str:
'''
@@ -26,10 +49,10 @@ class Provider:
prompt: 提示词
session_id: 会话id
'''
raise NotImplementedError
raise NotImplementedError()
async def forget(self, session_id=None) -> bool:
'''
重置会话
'''
raise NotImplementedError
raise NotImplementedError()
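The `Provider` base class above defines the interface that concrete backends (like `ProviderOpenAIOfficial`) implement, with per-model usage counted via `accu_model_stat`. A minimal, hypothetical `EchoProvider` shows the contract; the base class is condensed here so the sketch is self-contained:

```python
import asyncio
from collections import defaultdict

class Provider:
    # Condensed copy of the base class above
    def __init__(self) -> None:
        self.model_stat = defaultdict(int)  # usage counter per model
        self.curr_model_name = "unknown"

    def get_curr_model(self):
        return self.curr_model_name

    def accu_model_stat(self, model: str = None):
        if not model:
            model = self.get_curr_model()
        self.model_stat[model] += 1

class EchoProvider(Provider):
    # Hypothetical backend: records usage, then echoes the prompt
    async def text_chat(self, prompt: str, session_id: str, **kwargs) -> str:
        self.accu_model_stat()
        return prompt

p = EchoProvider()
reply = asyncio.run(p.text_chat("hello", session_id="s1"))
```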


@@ -1,224 +0,0 @@
from revChatGPT.V1 import Chatbot
from revChatGPT import typings
from model.provider.provider import Provider
from util import general_utils as gu
from util import cmd_config as cc
import time
class ProviderRevChatGPT(Provider):
def __init__(self, config, base_url = None):
if base_url == "":
base_url = None
self.rev_chatgpt: list[dict] = []
self.cc = cc.CmdConfig()
for i in range(0, len(config['account'])):
try:
gu.log(f"创建逆向ChatGPT负载{str(i+1)}中...", level=gu.LEVEL_INFO, tag="RevChatGPT")
if isinstance(config['account'][i], str):
# 默认是 access_token
rev_account_config = {
'access_token': config['account'][i],
}
else:
if 'password' in config['account'][i]:
gu.log(f"创建逆向ChatGPT负载{str(i+1)}失败: 已不支持账号密码登录请使用access_token方式登录。", level=gu.LEVEL_ERROR, tag="RevChatGPT")
continue
rev_account_config = {
'access_token': config['account'][i]['access_token'],
}
if self.cc.get("rev_chatgpt_model") != "":
rev_account_config['model'] = self.cc.get("rev_chatgpt_model")
if len(self.cc.get("rev_chatgpt_plugin_ids")) > 0:
rev_account_config['plugin_ids'] = self.cc.get("rev_chatgpt_plugin_ids")
if self.cc.get("rev_chatgpt_PUID") != "":
rev_account_config['PUID'] = self.cc.get("rev_chatgpt_PUID")
if len(self.cc.get("rev_chatgpt_unverified_plugin_domains")) > 0:
rev_account_config['unverified_plugin_domains'] = self.cc.get("rev_chatgpt_unverified_plugin_domains")
cb = Chatbot(config=rev_account_config, base_url=base_url)
# cb.captcha_solver = self.__captcha_solver
# 后八位c
g_id = rev_account_config['access_token'][-8:]
revstat = {
'id': g_id,
'obj': cb,
'busy': False,
'user': []
}
self.rev_chatgpt.append(revstat)
except BaseException as e:
gu.log(f"创建逆向ChatGPT负载{str(i+1)}失败: {str(e)}", level=gu.LEVEL_ERROR, tag="RevChatGPT")
def forget(self, session_id = None) -> bool:
for i in self.rev_chatgpt:
for user in i['user']:
if session_id == user['id']:
try:
i['obj'].reset_chat()
return True
except BaseException as e:
gu.log(f"重置RevChatGPT失败。原因: {str(e)}", level=gu.LEVEL_ERROR, tag="RevChatGPT")
return False
return False
def get_revchatgpt(self) -> list:
return self.rev_chatgpt
def request_text(self, prompt: str, bot) -> str:
resp = ''
err_count = 0
retry_count = 5
while err_count < retry_count:
try:
for data in bot.ask(prompt):
resp = data["message"]
break
except typings.Error as e:
if e.code == typings.ErrorType.INVALID_ACCESS_TOKEN_ERROR:
raise e
if e.code == typings.ErrorType.EXPIRED_ACCESS_TOKEN_ERROR:
raise e
if e.code == typings.ErrorType.PROHIBITED_CONCURRENT_QUERY_ERROR:
raise e
if "Your authentication token has expired. Please try signing in again." in str(e):
raise e
if "The message you submitted was too long" in str(e):
raise e
if "You've reached our limit of messages per hour." in str(e):
raise e
if "Rate limited by proxy" in str(e):
gu.log(f"触发请求频率限制, 60秒后自动重试。", level=gu.LEVEL_WARNING, tag="RevChatGPT")
time.sleep(60)
err_count += 1
gu.log(f"请求异常: {str(e)},正在重试。({str(err_count)})", level=gu.LEVEL_WARNING, tag="RevChatGPT")
if err_count >= retry_count:
raise e
except BaseException as e:
err_count += 1
gu.log(f"请求异常: {str(e)},正在重试。({str(err_count)})", level=gu.LEVEL_WARNING, tag="RevChatGPT")
if err_count >= retry_count:
raise e
if resp == '':
resp = "RevChatGPT请求异常。"
# print("[RevChatGPT] "+str(resp))
return resp
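The bounded-retry loop in `request_text` above (retry transient errors, re-raise fatal ones immediately) can be isolated as a small helper. A minimal sketch with illustrative names, not from the source:

```python
import time

def with_retries(fn, retries=5, fatal=(ValueError,), delay=0):
    """Call fn(), retrying on non-fatal errors up to `retries` times.

    Mirrors the err_count/retry_count loop in request_text: error types
    listed in `fatal` (e.g. an expired access token) are re-raised at once,
    anything else is retried until the budget runs out.
    """
    err_count = 0
    while err_count < retries:
        try:
            return fn()
        except fatal:
            raise  # no point retrying these
        except Exception:
            err_count += 1
            if err_count >= retries:
                raise
            if delay:
                time.sleep(delay)

calls = {"n": 0}

def flaky():
    # fails twice, then succeeds
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient")
    return "ok"

result = with_retries(flaky)
print(result)  # ok
```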
def text_chat(self, prompt,
session_id = None,
image_url = None,
function_call=None,
extra_conf: dict = None,
default_personality: dict = None) -> str:
# 选择一个人少的账号。
selected_revstat = None
min_revstat = None
min_ = None
new_user = False
conversation_id = ''
parent_id = ''
for revstat in self.rev_chatgpt:
for user in revstat['user']:
if session_id == user['id']:
selected_revstat = revstat
conversation_id = user['conversation_id']
parent_id = user['parent_id']
break
if min_ is None:
min_ = len(revstat['user'])
min_revstat = revstat
elif len(revstat['user']) < min_:
min_ = len(revstat['user'])
min_revstat = revstat
# if session_id in revstat['user']:
# selected_revstat = revstat
# break
if selected_revstat is None:
selected_revstat = min_revstat
selected_revstat['user'].append({
'id': session_id,
'conversation_id': '',
'parent_id': ''
})
new_user = True
gu.log(f"选择账号{str(selected_revstat)}", tag="RevChatGPT", level=gu.LEVEL_DEBUG)
while selected_revstat['busy']:
gu.log(f"账号忙碌,等待中...", tag="RevChatGPT", level=gu.LEVEL_DEBUG)
time.sleep(1)
selected_revstat['busy'] = True
if not new_user:
# 非新用户,则使用其专用的会话
selected_revstat['obj'].conversation_id = conversation_id
selected_revstat['obj'].parent_id = parent_id
else:
# 新用户,则使用新的会话
selected_revstat['obj'].reset_chat()
res = ''
err_msg = ''
err_cnt = 0
while err_cnt < 15:
try:
res = self.request_text(prompt, selected_revstat['obj'])
selected_revstat['busy'] = False
# 记录新用户的会话
if new_user:
i = 0
for user in selected_revstat['user']:
if user['id'] == session_id:
selected_revstat['user'][i]['conversation_id'] = selected_revstat['obj'].conversation_id
selected_revstat['user'][i]['parent_id'] = selected_revstat['obj'].parent_id
break
i += 1
return res.strip()
except BaseException as e:
if "Your authentication token has expired. Please try signing in again." in str(e):
raise Exception(f"此账号(access_token后8位为{selected_revstat['id']})的access_token已过期请重新获取或者切换账号。")
if "The message you submitted was too long" in str(e):
raise Exception("发送的消息太长,请分段发送。")
if "You've reached our limit of messages per hour." in str(e):
raise Exception("触发RevChatGPT请求频率限制。请1小时后再试或者切换账号。")
err_msg = str(e)
gu.log(f"请求异常: {str(e)}", level=gu.LEVEL_WARNING, tag="RevChatGPT")
err_cnt += 1
time.sleep(3)
raise Exception(f'回复失败。原因:{err_msg}。如果您设置了多个账号,可以使用/switch指令切换账号。输入/switch查看详情。')
# while self.is_all_busy():
# time.sleep(1)
# res = ''
# err_msg = ''
# cursor = 0
# for revstat in self.rev_chatgpt:
# cursor += 1
# if not revstat['busy']:
# try:
# revstat['busy'] = True
# res = self.request_text(prompt, revstat['obj'])
# revstat['busy'] = False
# return res.strip()
# # todo: 细化错误管理
# except BaseException as e:
# revstat['busy'] = False
# gu.log(f"请求出现问题: {str(e)}", level=gu.LEVEL_WARNING, tag="RevChatGPT")
# err_msg += f"账号{cursor} - 错误原因: {str(e)}"
# continue
# else:
# err_msg += f"账号{cursor} - 错误原因: 忙碌"
# continue
# raise Exception(f'回复失败。错误跟踪:{err_msg}')
def is_all_busy(self) -> bool:
for revstat in self.rev_chatgpt:
if not revstat['busy']:
return False
return True
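The account selection at the top of `text_chat` (reuse the account already bound to the session, otherwise pick the least-loaded one) can be sketched in isolation. The `accounts` structure below is a hypothetical stand-in mirroring the `revstat` dicts above:

```python
def pick_account(accounts, session_id):
    """Return the account already serving session_id, else the one with
    the fewest bound users, mirroring the selection loop in text_chat."""
    least = None
    for acc in accounts:
        if any(u['id'] == session_id for u in acc['user']):
            return acc  # session already bound to this account
        if least is None or len(acc['user']) < len(least['user']):
            least = acc
    return least

accounts = [
    {'id': 'a', 'user': [{'id': 's1'}, {'id': 's2'}]},
    {'id': 'b', 'user': [{'id': 's3'}]},
]
print(pick_account(accounts, 's1')['id'])  # existing session -> a
print(pick_account(accounts, 's9')['id'])  # new session -> least loaded, b
```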


@@ -1,12 +1,16 @@
import sqlite3
import yaml
import os
import shutil
import time
from typing import Tuple
class dbConn():
def __init__(self):
# 读取参数,并支持中文
conn = sqlite3.connect("data.db")
db_path = "data/data.db"
if os.path.exists("data.db"):
shutil.copy("data.db", db_path)
conn = sqlite3.connect(db_path)
conn.text_factory = str
self.conn = conn
c = conn.cursor()
@@ -107,7 +111,6 @@ class dbConn():
)
conn.commit()
def increment_stat_session(self, platform, session_id, cnt):
# if not exist, insert
conn = self.conn
@@ -291,4 +294,3 @@ class dbConn():
def close(self):
self.conn.close()


@@ -4,15 +4,16 @@ requests
openai~=1.2.3
qq-botpy
chardet~=5.1.0
Pillow~=9.4.0
GitPython~=3.1.31
Pillow
GitPython
nakuru-project
beautifulsoup4
googlesearch-python
tiktoken
readability-lxml
revChatGPT~=6.8.6
baidu-aip~=4.16.9
baidu-aip
websockets
flask
psutil
lxml_html_clean
SparkleLogging

type/command.py Normal file

@@ -0,0 +1,28 @@
from typing import Union, List, Callable
from dataclasses import dataclass
@dataclass
class CommandItem():
'''
用来描述单个指令
'''
command_name: Union[str, tuple] # 指令名
callback: Callable # 回调函数
description: str # 描述
origin: str # 注册来源
class CommandResult():
'''
用于在Command中返回多个值
'''
def __init__(self, hit: bool, success: bool = False, message_chain: list = None, command_name: str = "unknown_command") -> None:
self.hit = hit
self.success = success
self.message_chain = message_chain if message_chain is not None else []
self.command_name = command_name
def _result_tuple(self):
return (self.success, self.message_chain, self.command_name)
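A minimal sketch of how a command handler might build a `CommandResult` and unpack it via `_result_tuple`. The simplified class below is a self-contained stand-in for the one above; the `ping` command is illustrative, not from the source:

```python
class CommandResult:
    """Simplified stand-in for type/command.py's CommandResult."""
    def __init__(self, hit, success=False, message_chain=None, command_name="unknown_command"):
        self.hit = hit
        self.success = success
        self.message_chain = message_chain if message_chain is not None else []
        self.command_name = command_name

    def _result_tuple(self):
        return (self.success, self.message_chain, self.command_name)

res = CommandResult(hit=True, success=True, message_chain=["pong"], command_name="ping")
success, chain, name = res._result_tuple()
print(success, chain, name)  # True ['pong'] ping
```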

type/config.py Normal file

@@ -0,0 +1 @@
VERSION = '3.2.4'

type/message.py Normal file

@@ -0,0 +1,62 @@
from enum import Enum
from typing import List
from dataclasses import dataclass
from nakuru.entities.components import BaseMessageComponent
from type.register import RegisteredPlatform
from type.types import GlobalObject
class MessageType(Enum):
GROUP_MESSAGE = 'GroupMessage' # 群组形式的消息
FRIEND_MESSAGE = 'FriendMessage' # 私聊、好友等单聊消息
GUILD_MESSAGE = 'GuildMessage' # 频道消息
@dataclass
class MessageMember():
user_id: str # 发送者id
nickname: str = None
class AstrBotMessage():
'''
AstrBot 的消息对象
'''
tag: str # 消息来源标签
type: MessageType # 消息类型
self_id: str # 机器人的识别id
session_id: str # 会话id
message_id: str # 消息id
sender: MessageMember # 发送者
message: List[BaseMessageComponent] # 消息链使用 Nakuru 的消息链格式
message_str: str # 最直观的纯文本消息字符串
raw_message: object
timestamp: int # 消息时间戳
def __str__(self) -> str:
return str(self.__dict__)
class AstrMessageEvent():
'''
消息事件。
'''
context: GlobalObject # 一些公用数据
message_str: str # 纯消息字符串
message_obj: AstrBotMessage # 消息对象
platform: RegisteredPlatform # 来源平台
role: str # 基本身份。`admin` 或 `member`
session_id: int # 会话 id
def __init__(self,
message_str: str,
message_obj: AstrBotMessage,
platform: RegisteredPlatform,
role: str,
context: GlobalObject,
session_id: str = None):
self.context = context
self.message_str = message_str
self.message_obj = message_obj
self.platform = platform
self.role = role
self.session_id = session_id

type/plugin.py Normal file

@@ -0,0 +1,27 @@
from enum import Enum
from dataclasses import dataclass
class PluginType(Enum):
PLATFORM = 'platform' # 平台类插件。
LLM = 'llm' # 大语言模型类插件
COMMON = 'common' # 其他插件
@dataclass
class PluginMetadata:
'''
插件的元数据。
'''
# required
plugin_name: str
plugin_type: PluginType
author: str # 插件作者
desc: str # 插件简介
version: str # 插件版本
# optional
repo: str = None # 插件仓库地址
def __str__(self) -> str:
return f"PluginMetadata({self.plugin_name}, {self.plugin_type}, {self.desc}, {self.version}, {self.repo})"
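Constructing the metadata is straightforward; a self-contained sketch (the plugin name and author below are illustrative values, not from the source):

```python
from enum import Enum
from dataclasses import dataclass

class PluginType(Enum):
    PLATFORM = 'platform'
    LLM = 'llm'
    COMMON = 'common'

@dataclass
class PluginMetadata:
    plugin_name: str
    plugin_type: PluginType
    author: str
    desc: str
    version: str
    repo: str = None  # optional

meta = PluginMetadata("helloworld", PluginType.COMMON, "someone", "demo plugin", "1.0")
print(meta.plugin_name, meta.plugin_type.value, meta.repo)  # helloworld common None
```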

type/register.py Normal file

@@ -0,0 +1,53 @@
from model.provider.provider import Provider as LLMProvider
from model.platform._platfrom import Platform
from type.plugin import *
from typing import List
from types import ModuleType
from dataclasses import dataclass
@dataclass
class RegisteredPlugin:
'''
注册在 AstrBot 中的插件。
'''
metadata: PluginMetadata
plugin_instance: object
module_path: str
module: ModuleType
root_dir_name: str
trig_cnt: int = 0
def reset_trig_cnt(self):
self.trig_cnt = 0
def trig(self):
self.trig_cnt += 1
def __str__(self) -> str:
return f"RegisteredPlugin({self.metadata}, {self.module_path}, {self.root_dir_name})"
RegisteredPlugins = List[RegisteredPlugin]
@dataclass
class RegisteredPlatform:
'''
注册在 AstrBot 中的平台。平台应当实现 Platform 接口。
'''
platform_name: str
platform_instance: Platform
origin: str = None # 注册来源
def __str__(self) -> str:
return self.platform_name
@dataclass
class RegisteredLLM:
'''
注册在 AstrBot 中的大语言模型调用。大语言模型应当实现 LLMProvider 接口。
'''
llm_name: str
llm_instance: LLMProvider
origin: str = None # 注册来源

type/types.py Normal file

@@ -0,0 +1,36 @@
from type.register import *
from typing import List
from logging import Logger
class GlobalObject:
'''
存放一些公用的数据,用于在不同模块(如core与command)之间传递
'''
version: str # 机器人版本
nick: tuple # 用户定义的机器人的别名
base_config: dict # config.json 中导出的配置
cached_plugins: List[RegisteredPlugin] # 加载的插件
platforms: List[RegisteredPlatform]
llms: List[RegisteredLLM]
web_search: bool # 是否开启了网页搜索
reply_prefix: str # 回复前缀
unique_session: bool # 是否开启了独立会话
default_personality: dict
dashboard_data = None
logger: Logger = None
def __init__(self):
self.nick = None # gocq 的昵称
self.base_config = None # config.yaml
self.cached_plugins = [] # 缓存的插件
self.web_search = False # 是否开启了网页搜索
self.reply_prefix = None
self.unique_session = False
self.platforms = []
self.llms = []
self.default_personality = None
self.dashboard_data = None
self.stat = {}


@@ -3,6 +3,8 @@ import json
import util.general_utils as gu
import time
class FuncCallJsonFormatError(Exception):
def __init__(self, msg):
self.msg = msg
@@ -10,6 +12,7 @@ class FuncCallJsonFormatError(Exception):
def __str__(self):
return self.msg
class FuncNotFoundError(Exception):
def __init__(self, msg):
self.msg = msg
@@ -17,6 +20,7 @@ class FuncNotFoundError(Exception):
def __str__(self):
return self.msg
class FuncCall():
def __init__(self, provider) -> None:
self.func_list = []
@@ -24,7 +28,8 @@ class FuncCall():
def add_func(self, name: str = None, func_args: list = None, desc: str = None, func_obj=None) -> None:
if name == None or func_args == None or desc == None or func_obj == None:
raise FuncCallJsonFormatError("name, func_args, desc must be provided.")
raise FuncCallJsonFormatError(
"name, func_args, desc must be provided.")
params = {
"type": "object",  # hardcoded here
"properties": {}
@@ -120,7 +125,8 @@ class FuncCall():
res = self.provider.text_chat(prompt, session_id)
if res.find('```') != -1:
res = res[res.find('```json') + 7: res.rfind('```')]
gu.log("REVGPT func_call json result", bg=gu.BG_COLORS["green"], fg=gu.FG_COLORS["white"])
gu.log("REVGPT func_call json result",
bg=gu.BG_COLORS["green"], fg=gu.FG_COLORS["white"])
print(res)
res = json.loads(res)
break
@@ -151,11 +157,13 @@ class FuncCall():
func_target = func["func_obj"]
break
if func_target == None:
raise FuncNotFoundError(f"Request function {func_name} not found.")
raise FuncNotFoundError(
f"Request function {func_name} not found.")
t_res = str(func_target(**args))
invoke_func_res += f"{func_name} 调用结果:\n```\n{t_res}\n```\n"
invoke_func_res_list.append(invoke_func_res)
gu.log(f"[FUNC| {func_name} invoked]", bg=gu.BG_COLORS["green"], fg=gu.FG_COLORS["white"])
gu.log(f"[FUNC| {func_name} invoked]",
bg=gu.BG_COLORS["green"], fg=gu.FG_COLORS["white"])
# print(str(t_res))
if is_summary:
@@ -181,12 +189,16 @@ class FuncCall():
try:
res = self.provider.text_chat(after_prompt, session_id)
# 截取```之间的内容
gu.log("DEBUG BEGIN", bg=gu.BG_COLORS["yellow"], fg=gu.FG_COLORS["white"])
gu.log(
"DEBUG BEGIN", bg=gu.BG_COLORS["yellow"], fg=gu.FG_COLORS["white"])
print(res)
gu.log("DEBUG END", bg=gu.BG_COLORS["yellow"], fg=gu.FG_COLORS["white"])
gu.log(
"DEBUG END", bg=gu.BG_COLORS["yellow"], fg=gu.FG_COLORS["white"])
if res.find('```') != -1:
res = res[res.find('```json') + 7: res.rfind('```')]
gu.log("REVGPT after_func_call json result", bg=gu.BG_COLORS["green"], fg=gu.FG_COLORS["white"])
res = res[res.find('```json') +
7: res.rfind('```')]
gu.log("REVGPT after_func_call json result",
bg=gu.BG_COLORS["green"], fg=gu.FG_COLORS["white"])
after_prompt_res = res
after_prompt_res = json.loads(after_prompt_res)
break
@@ -197,7 +209,8 @@ class FuncCall():
if "The message you submitted was too long" in str(e):
# 如果返回的内容太长了,那么就截取一部分
time.sleep(3)
invoke_func_res = invoke_func_res[:int(len(invoke_func_res) / 2)]
invoke_func_res = invoke_func_res[:int(
len(invoke_func_res) / 2)]
after_prompt = """
函数返回以下内容"""+invoke_func_res+"""
请以AI助手的身份结合返回的内容对用户提问做详细全面的回答
@@ -218,11 +231,13 @@ class FuncCall():
if "func_call_again" in after_prompt_res and after_prompt_res["func_call_again"]:
# 如果需要重新调用函数
# 重新调用函数
gu.log("REVGPT func_call_again", bg=gu.BG_COLORS["purple"], fg=gu.FG_COLORS["white"])
gu.log("REVGPT func_call_again",
bg=gu.BG_COLORS["purple"], fg=gu.FG_COLORS["white"])
res = self.func_call(question, func_definition)
return res, True
gu.log("REVGPT func callback:", bg=gu.BG_COLORS["green"], fg=gu.FG_COLORS["white"])
gu.log("REVGPT func callback:",
bg=gu.BG_COLORS["green"], fg=gu.FG_COLORS["white"])
# print(after_prompt_res["res"])
return after_prompt_res["res"], True
else:
@@ -230,8 +245,3 @@ class FuncCall():
else:
# print(res["res"])
return res["res"], False

util/agent/web_searcher.py Normal file

@@ -0,0 +1,183 @@
import traceback
import random
import json
import asyncio
import aiohttp
import os
from readability import Document
from bs4 import BeautifulSoup
from openai.types.chat.chat_completion_message_tool_call import Function
from util.agent.func_call import FuncCall
from util.search_engine_scraper.config import HEADERS, USER_AGENTS
from util.search_engine_scraper.bing import Bing
from util.search_engine_scraper.sogo import Sogo
from util.search_engine_scraper.google import Google
from model.provider.provider import Provider
from SparkleLogging.utils.core import LogManager
from logging import Logger
logger: Logger = LogManager.GetLogger(log_name='astrbot-core')
bing_search = Bing()
sogo_search = Sogo()
google = Google()
proxy = os.environ.get("HTTPS_PROXY", None)
def tidy_text(text: str) -> str:
'''
清理文本,去除空格、换行符等
'''
return text.strip().replace("\n", " ").replace("\r", " ").replace(" ", " ")
# def special_fetch_zhihu(link: str) -> str:
# '''
# function-calling 函数, 用于获取知乎文章的内容
# '''
# response = requests.get(link, headers=HEADERS)
# response.encoding = "utf-8"
# soup = BeautifulSoup(response.text, "html.parser")
# if "zhuanlan.zhihu.com" in link:
# r = soup.find(class_="Post-RichTextContainer")
# else:
# r = soup.find(class_="List-item").find(class_="RichContent-inner")
# if r is None:
# print("debug: zhihu none")
# raise Exception("zhihu none")
# return tidy_text(r.text)
async def search_from_bing(keyword: str) -> str:
'''
tools, 从 bing 搜索引擎搜索
'''
logger.info("web_searcher - search_from_bing: " + keyword)
results = []
try:
results = await google.search(keyword, 5)
except BaseException as e:
logger.error(f"google search error: {e}, try the next one...")
if len(results) == 0:
logger.debug("search google failed")
try:
results = await bing_search.search(keyword, 5)
except BaseException as e:
logger.error(f"bing search error: {e}, try the next one...")
if len(results) == 0:
logger.debug("search bing failed")
try:
results = await sogo_search.search(keyword, 5)
except BaseException as e:
logger.error(f"sogo search error: {e}")
if len(results) == 0:
logger.debug("search sogo failed")
return "没有搜索到结果"
ret = ""
idx = 1
for i in results:
logger.info(f"web_searcher - scraping web: {i.title} - {i.url}")
try:
site_result = await fetch_website_content(i.url)
except:
site_result = ""
site_result = site_result[:600] + "..." if len(site_result) > 600 else site_result
ret += f"{idx}. {i.title} \n{i.snippet}\n{site_result}\n\n"
idx += 1
return ret
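The google → bing → sogo fall-through in `search_from_bing` can be generalized into a small helper: try each engine in order, swallow failures, and stop at the first non-empty result. The engines below are hypothetical async stand-ins:

```python
import asyncio

async def try_engines(keyword, engines):
    """Fall through a list of (name, async callable) pairs until one
    returns results, mirroring the engine fallback in search_from_bing."""
    for name, engine in engines:
        try:
            results = await engine(keyword)
        except Exception:
            results = []  # engine failed, try the next one
        if results:
            return name, results
    return None, []

async def fail(_kw):
    raise RuntimeError("engine down")

async def ok(kw):
    return [f"result for {kw}"]

name, results = asyncio.run(try_engines("astrbot", [("google", fail), ("bing", ok)]))
print(name, results)  # bing ['result for astrbot']
```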
async def fetch_website_content(url):
header = HEADERS
header.update({'User-Agent': random.choice(USER_AGENTS)})
async with aiohttp.ClientSession() as session:
async with session.get(url, headers=header, timeout=6, proxy=proxy) as response:
html = await response.text(encoding="utf-8")
doc = Document(html)
ret = doc.summary(html_partial=True)
soup = BeautifulSoup(ret, 'html.parser')
ret = tidy_text(soup.get_text())
return ret
async def web_search(prompt, provider: Provider, session_id, official_fc=False):
'''
official_fc: 使用官方 function-calling
'''
new_func_call = FuncCall(provider)
new_func_call.add_func("web_search", [{
"type": "string",
"name": "keyword",
"description": "搜索关键词"
}],
"通过搜索引擎搜索。如果问题需要获取近期、实时的消息,在网页上搜索(如天气、新闻或任何需要通过网页获取信息的问题),则调用此函数;如果没有,不要调用此函数。",
search_from_bing
)
new_func_call.add_func("fetch_website_content", [{
"type": "string",
"name": "url",
"description": "要获取内容的网页链接"
}],
"获取网页的内容。如果问题带有合法的网页链接并且用户有需求了解网页内容(例如: `帮我总结一下 https://github.com 的内容`), 就调用此函数。如果没有,不要调用此函数。",
fetch_website_content
)
has_func = False
function_invoked_ret = ""
if official_fc:
# we use official function-calling
result = await provider.text_chat(prompt, session_id, tools=new_func_call.get_func())
if isinstance(result, Function):
logger.debug(f"web_searcher - function-calling: {result}")
func_obj = None
for i in new_func_call.func_list:
if i["name"] == result.name:
func_obj = i["func_obj"]
break
if not func_obj:
return await provider.text_chat(prompt, session_id) + "\n(网页搜索失败, 此为默认回复)"
try:
args = json.loads(result.arguments)
function_invoked_ret = await func_obj(**args)
has_func = True
except BaseException as e:
traceback.print_exc()
return await provider.text_chat(prompt, session_id) + "\n(网页搜索失败, 此为默认回复)"
else:
return result
else:
# we use our own function-calling
try:
args = {
'question': prompt,
'func_definition': new_func_call.func_dump(),
'is_task': False,
'is_summary': False,
}
function_invoked_ret, has_func = await asyncio.to_thread(new_func_call.func_call, **args)
except BaseException as e:
res = await provider.text_chat(prompt) + "\n(网页搜索失败, 此为默认回复)"
return res
has_func = True
if has_func:
await provider.forget(session_id)
summary_prompt = f"""
你是一个专业且高效的助手,你的任务是
1. 根据下面的相关材料对用户的问题 `{prompt}` 进行总结;
2. 简单地发表你对这个问题的简略看法。
# 例子
1. 从网上的信息来看,可以知道...我个人认为...你觉得呢?
2. 根据网上的最新信息,可以得知...我觉得...你怎么看?
# 限制
1. 限制在 200 字以内;
2. 请**直接输出总结**,不要输出多余的内容和提示语。
# 相关材料
{function_invoked_ret}"""
ret = await provider.text_chat(summary_prompt, session_id)
return ret
return function_invoked_ret


@@ -1,8 +1,9 @@
import os
import json
import yaml
from typing import Union
cpath = "cmd_config.json"
cpath = "data/cmd_config.json"
def check_exist():
if not os.path.exists(cpath):
@@ -10,6 +11,7 @@ def check_exist():
json.dump({}, f, indent=4, ensure_ascii=False)
f.flush()
class CmdConfig():
@staticmethod
@@ -82,16 +84,13 @@ class CmdConfig():
json.dump(d, f, indent=4, ensure_ascii=False)
f.flush()
def init_astrbot_config_items():
# 加载默认配置
cc = CmdConfig()
cc.init_attributes("qq_forward_threshold", 200)
cc.init_attributes("qq_welcome", "欢迎加入本群!\n欢迎给https://github.com/Soulter/QQChannelChatGPT项目一个Star😊~\n输入help查看帮助~\n")
cc.init_attributes("qq_welcome", "")
cc.init_attributes("qq_pic_mode", False)
cc.init_attributes("rev_chatgpt_model", "")
cc.init_attributes("rev_chatgpt_plugin_ids", [])
cc.init_attributes("rev_chatgpt_PUID", "")
cc.init_attributes("rev_chatgpt_unverified_plugin_domains", [])
cc.init_attributes("gocq_host", "127.0.0.1")
cc.init_attributes("gocq_http_port", 5700)
cc.init_attributes("gocq_websocket_port", 6700)
@@ -102,6 +101,7 @@ def init_astrbot_config_items():
cc.init_attributes("other_admins", [])
cc.init_attributes("CHATGPT_BASE_URL", "")
cc.init_attributes("qqbot_secret", "")
cc.init_attributes("qqofficial_enable_group_message", False)
cc.init_attributes("admin_qq", "")
cc.init_attributes("nick_qq", ["!", "", "ai"])
cc.init_attributes("admin_qqchan", "")
@@ -118,3 +118,28 @@ def init_astrbot_config_items():
cc.init_attributes("https_proxy", "")
cc.init_attributes("dashboard_username", "")
cc.init_attributes("dashboard_password", "")
def try_migrate_config():
'''
将 cmd_config.json 迁移至 data/cmd_config.json
'''
print("try migrate configs")
os.makedirs("data", exist_ok=True)
if os.path.exists("cmd_config.json"):
with open("cmd_config.json", "r", encoding="utf-8-sig") as f:
data = json.load(f)
with open("data/cmd_config.json", "w", encoding="utf-8-sig") as f:
json.dump(data, f, indent=2, ensure_ascii=False)
try:
os.remove("cmd_config.json")
except Exception as e:
pass
if not os.path.exists("cmd_config.json") and not os.path.exists("data/cmd_config.json"):
# 从 configs/config.yaml 上拿数据
configs_pth = os.path.abspath(os.path.join(os.path.abspath(__file__), "../../configs/config.yaml"))
with open(configs_pth, encoding='utf-8') as f:
data = yaml.load(f, Loader=yaml.Loader)
print(data)
with open("data/cmd_config.json", "w", encoding="utf-8-sig") as f:
json.dump(data, f, indent=2, ensure_ascii=False)
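The move performed by `try_migrate_config` (read the old JSON, write it to the new location, delete the original) can be sketched as a generic helper and exercised against a temporary directory. Paths below are test fixtures, not the project's real files:

```python
import json
import os
import tempfile

def migrate(old_path, new_path):
    """Sketch of the cmd_config.json -> data/cmd_config.json move above."""
    if not os.path.exists(old_path):
        return False
    os.makedirs(os.path.dirname(new_path), exist_ok=True)
    with open(old_path, "r", encoding="utf-8-sig") as f:
        data = json.load(f)
    with open(new_path, "w", encoding="utf-8-sig") as f:
        json.dump(data, f, indent=2, ensure_ascii=False)
    os.remove(old_path)
    return True

with tempfile.TemporaryDirectory() as tmp:
    old = os.path.join(tmp, "cmd_config.json")
    new = os.path.join(tmp, "data", "cmd_config.json")
    with open(old, "w", encoding="utf-8") as f:
        json.dump({"nick_qq": ["!"]}, f)
    moved = migrate(old, new)
    created = os.path.exists(new)
    removed = not os.path.exists(old)
print(moved, created, removed)  # True True True
```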


@@ -1,290 +0,0 @@
import requests
import util.general_utils as gu
import traceback
import time
import json
import asyncio
from googlesearch import search, SearchResult
from readability import Document
from bs4 import BeautifulSoup
from openai.types.chat.chat_completion_message_tool_call import Function
from util.function_calling.func_call import (
FuncCall,
FuncCallJsonFormatError,
FuncNotFoundError
)
from model.provider.provider import Provider
def tidy_text(text: str) -> str:
'''
清理文本,去除空格、换行符等
'''
return text.strip().replace("\n", " ").replace("\r", " ").replace(" ", " ")
def special_fetch_zhihu(link: str) -> str:
'''
function-calling 函数, 用于获取知乎文章的内容
'''
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) \
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36"
}
response = requests.get(link, headers=headers)
response.encoding = "utf-8"
soup = BeautifulSoup(response.text, "html.parser")
if "zhuanlan.zhihu.com" in link:
r = soup.find(class_="Post-RichTextContainer")
else:
r = soup.find(class_="List-item").find(class_="RichContent-inner")
if r is None:
print("debug: zhihu none")
raise Exception("zhihu none")
return tidy_text(r.text)
def google_web_search(keyword) -> str:
'''
获取 google 搜索结果, 得到 title、desc、link
'''
ret = ""
index = 1
try:
ls = search(keyword, advanced=True, num_results=4)
for i in ls:
desc = i.description
try:
# gu.log(f"搜索网页: {i.url}", tag="网页搜索", level=gu.LEVEL_INFO)
desc = fetch_website_content(i.url)
except BaseException as e:
print(f"(google) fetch_website_content err: {str(e)}")
# gu.log(f"# No.{str(index)}\ntitle: {i.title}\nurl: {i.url}\ncontent: {desc}\n\n", level=gu.LEVEL_DEBUG, max_len=9999)
ret += f"# No.{str(index)}\ntitle: {i.title}\nurl: {i.url}\ncontent: {desc}\n\n"
index += 1
except Exception as e:
print(f"google search err: {str(e)}")
return web_keyword_search_via_bing(keyword)
return ret
def web_keyword_search_via_bing(keyword) -> str:
'''
获取bing搜索结果, 得到 title、desc、link
'''
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) \
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36"
}
url = "https://www.bing.com/search?q="+keyword
_cnt = 0
# _detail_store = []
while _cnt < 5:
try:
response = requests.get(url, headers=headers)
response.encoding = "utf-8"
# gu.log(f"bing response: {response.text}", tag="bing", level=gu.LEVEL_DEBUG, max_len=9999)
soup = BeautifulSoup(response.text, "html.parser")
res = ""
result_cnt = 0
ols = soup.find(id="b_results")
for i in ols.find_all("li", class_="b_algo"):
try:
title = i.find("h2").text
desc = i.find("p").text
link = i.find("h2").find("a").get("href")
# res.append({
# "title": title,
# "desc": desc,
# "link": link,
# })
try:
# gu.log(f"搜索网页: {link}", tag="网页搜索", level=gu.LEVEL_INFO)
desc = fetch_website_content(link)
except BaseException as e:
print(f"(bing) fetch_website_content err: {str(e)}")
res += f"# No.{str(result_cnt + 1)}\ntitle: {title}\nurl: {link}\ncontent: {desc}\n\n"
result_cnt += 1
if result_cnt > 5: break
# if len(_detail_store) >= 3:
# continue
# # 爬取前两条的网页内容
# if "zhihu.com" in link:
# try:
# _detail_store.append(special_fetch_zhihu(link))
# except BaseException as e:
# print(f"zhihu parse err: {str(e)}")
# else:
# try:
# _detail_store.append(fetch_website_content(link))
# except BaseException as e:
# print(f"fetch_website_content err: {str(e)}")
except Exception as e:
print(f"bing parse err: {str(e)}")
if result_cnt == 0: break
return res
except Exception as e:
# gu.log(f"bing fetch err: {str(e)}")
_cnt += 1
time.sleep(1)
# gu.log("fail to fetch bing info, using sougou.")
return web_keyword_search_via_sougou(keyword)
def web_keyword_search_via_sougou(keyword) -> str:
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) \
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36",
}
url = f"https://sogou.com/web?query={keyword}"
response = requests.get(url, headers=headers)
response.encoding = "utf-8"
soup = BeautifulSoup(response.text, "html.parser")
res = []
results = soup.find("div", class_="results")
for i in results.find_all("div", class_="vrwrap"):
try:
title = tidy_text(i.find("h3").text)
link = tidy_text(i.find("h3").find("a").get("href"))
if link.startswith("/link?url="):
link = "https://www.sogou.com" + link
res.append({
"title": title,
"link": link,
})
if len(res) >= 5: # 限制5条
break
except Exception as e:
pass
# gu.log(f"sougou parse err: {str(e)}", tag="web_keyword_search_via_sougou", level=gu.LEVEL_ERROR)
# 爬取网页内容
_detail_store = []
for i in res:
if len(_detail_store) >= 3:
break
try:
_detail_store.append(fetch_website_content(i["link"]))
except BaseException as e:
print(f"fetch_website_content err: {str(e)}")
ret = f"{str(res)}"
if len(_detail_store) > 0:
ret += f"\n网页内容: {str(_detail_store)}"
return ret
def fetch_website_content(url):
# gu.log(f"fetch_website_content: {url}", tag="fetch_website_content", level=gu.LEVEL_DEBUG)
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) \
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36"
}
response = requests.get(url, headers=headers, timeout=3)
response.encoding = "utf-8"
doc = Document(response.content)
# print('title:', doc.title())
ret = doc.summary(html_partial=True)
soup = BeautifulSoup(ret, 'html.parser')
ret = tidy_text(soup.get_text())
return ret
async def web_search(question, provider: Provider, session_id, official_fc=False):
'''
official_fc: 使用官方 function-calling
'''
new_func_call = FuncCall(provider)
new_func_call.add_func("google_web_search", [{
"type": "string",
"name": "keyword",
"description": "google search query (分词,尽量保留所有信息)"
}],
"通过搜索引擎搜索。如果问题需要获取近期、实时的消息,在网页上搜索(如天气、新闻或任何需要通过网页获取信息的问题),则调用此函数;如果没有,不要调用此函数。",
web_keyword_search_via_bing
)
new_func_call.add_func("fetch_website_content", [{
"type": "string",
"name": "url",
"description": "网址"
}],
"获取网页的内容。如果问题带有合法的网页链接(例如: `帮我总结一下 https://github.com 的内容`), 就调用此函数。如果没有,不要调用此函数。",
fetch_website_content
)
question1 = f"{question} \n> hint: 最多只能调用1个function, 并且存在不会调用任何function的可能性。"
has_func = False
function_invoked_ret = ""
if official_fc:
# we use official function-calling
func = await provider.text_chat(question1, session_id, function_call=new_func_call.get_func())
if isinstance(func, Function):
# 执行对应的结果:
func_obj = None
for i in new_func_call.func_list:
if i["name"] == func.name:
func_obj = i["func_obj"]
break
if not func_obj:
# gu.log("找不到返回的 func name " + func.name, level=gu.LEVEL_ERROR)
return await provider.text_chat(question1, session_id) + "\n(网页搜索失败, 此为默认回复)"
try:
args = json.loads(func.arguments)
# we use to_thread to avoid blocking the event loop
function_invoked_ret = await asyncio.to_thread(func_obj, **args)
has_func = True
except BaseException as e:
traceback.print_exc()
return await provider.text_chat(question1, session_id) + "\n(网页搜索失败, 此为默认回复)"
else:
# now func is a string
return func
else:
# we use our own function-calling
try:
args = {
'question': question1,
'func_definition': new_func_call.func_dump(),
'is_task': False,
'is_summary': False,
}
function_invoked_ret, has_func = await asyncio.to_thread(new_func_call.func_call, **args)
except BaseException as e:
res = await provider.text_chat(question) + "\n(网页搜索失败, 此为默认回复)"
return res
has_func = True
if has_func:
await provider.forget(session_id)
question3 = f"""
你的任务是:
1. 根据末尾的材料对问题`{question}`做切题的总结(详细);
2. 简单地发表你对这个问题的看法(简略)。
你的总结末尾应当有对材料的引用, 如果有链接, 请在末尾附上引用网页链接。引用格式严格按照 `\n[1] title url \n`。
不要提到任何函数调用的信息。
一些回复的消息模板:
模板1:
```
从网上的信息来看,可以知道...我个人认为...你觉得呢?
```
模板2:
```
根据网上的最新信息,可以得知...我觉得...你怎么看?
```
你可以根据这些模板来组织回答,但可以不照搬,要根据问题的内容来回答。
以下是相关材料:
"""
_c = 0
while _c < 3:
try:
print('text chat')
final_ret = await provider.text_chat(question3 + "```" + function_invoked_ret + "```", session_id)
return final_ret
except Exception as e:
print(e)
_c += 1
if _c == 3: raise e
if "The message you submitted was too long" in str(e):
await provider.forget(session_id)
function_invoked_ret = function_invoked_ret[:int(len(function_invoked_ret) / 2)]
time.sleep(3)
return function_invoked_ret


@@ -1,140 +1,24 @@
import datetime
import time
import socket
from PIL import Image, ImageDraw, ImageFont
import os
import re
import requests
from util.cmd_config import CmdConfig
import aiohttp
import socket
from cores.qqbot.types import GlobalObject
import platform
import logging
import json
import sys
import psutil
import ssl
import base64
PLATFORM_GOCQ = 'gocq'
PLATFORM_QQCHAN = 'qqchan'
from PIL import Image, ImageDraw, ImageFont
from type.types import GlobalObject
from SparkleLogging.utils.core import LogManager
from logging import Logger
FG_COLORS = {
"black": "30",
"red": "31",
"green": "32",
"yellow": "33",
"blue": "34",
"purple": "35",
"cyan": "36",
"white": "37",
"default": "39",
}
logger: Logger = LogManager.GetLogger(log_name='astrbot-core')
BG_COLORS = {
"black": "40",
"red": "41",
"green": "42",
"yellow": "43",
"blue": "44",
"purple": "45",
"cyan": "46",
"white": "47",
"default": "49",
}
LEVEL_DEBUG = "DEBUG"
LEVEL_INFO = "INFO"
LEVEL_WARNING = "WARN"
LEVEL_ERROR = "ERROR"
LEVEL_CRITICAL = "CRITICAL"
# 为了兼容旧版
level_codes = {
LEVEL_DEBUG: logging.DEBUG,
LEVEL_INFO: logging.INFO,
LEVEL_WARNING: logging.WARNING,
LEVEL_ERROR: logging.ERROR,
LEVEL_CRITICAL: logging.CRITICAL,
}
level_colors = {
"INFO": "green",
"WARN": "yellow",
"ERROR": "red",
"CRITICAL": "purple",
}
class Logger:
def __init__(self) -> None:
self.history = []
def log(
self,
msg: str,
level: str = "INFO",
tag: str = "System",
fg: str = None,
bg: str = None,
max_len: int = 50000,
err: Exception = None,):
"""
日志打印函数
"""
_set_level_code = level_codes[LEVEL_INFO]
if 'LOG_LEVEL' in os.environ and os.environ['LOG_LEVEL'] in level_codes:
_set_level_code = level_codes[os.environ['LOG_LEVEL']]
if level in level_codes and level_codes[level] < _set_level_code:
return
if err is not None:
msg += "\n异常原因: " + str(err)
level = LEVEL_ERROR
if len(msg) > max_len:
msg = msg[:max_len] + "..."
now = datetime.datetime.now().strftime("%H:%M:%S")
pres = []
for line in msg.split("\n"):
if line == "\n":
pres.append("")
else:
pres.append(f"[{now}] [{tag}/{level}] {line}")
if level == "INFO":
if fg is None:
fg = FG_COLORS["green"]
if bg is None:
bg = BG_COLORS["default"]
elif level == "WARN":
if fg is None:
fg = FG_COLORS["yellow"]
if bg is None:
bg = BG_COLORS["default"]
elif level == "ERROR":
if fg is None:
fg = FG_COLORS["red"]
if bg is None:
bg = BG_COLORS["default"]
elif level == "CRITICAL":
if fg is None:
fg = FG_COLORS["purple"]
if bg is None:
bg = BG_COLORS["default"]
ret = ""
for line in pres:
ret += f"\033[{fg};{bg}m{line}\033[0m\n"
try:
requests.post("http://localhost:6185/api/log", data=ret[:-1].encode(), timeout=1)
except BaseException as e:
pass
self.history.append(ret)
if len(self.history) > 100:
self.history = self.history[-100:]
print(ret[:-1])
log = Logger()
def port_checker(port: int, host: str = "localhost"):
sk = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
@@ -147,6 +31,7 @@ def port_checker(port: int, host: str = "localhost"):
sk.close()
return False
def get_font_path() -> str:
if os.path.exists("resources/fonts/syst.otf"):
font_path = "resources/fonts/syst.otf"
@@ -162,39 +47,6 @@ def get_font_path() -> str:
raise Exception("找不到字体文件")
return font_path
def word2img(title: str, text: str, max_width=30, font_size=20):
font_path = get_font_path()
width_factor = 1.0
height_factor = 1.5
# 格式化文本宽度最大为30
lines = text.split('\n')
i = 0
length = len(lines)
for l in lines:
if len(l) > max_width:
# hard-wrap by slicing into max_width chunks; the old in-place newline
# insertion shifted indices after the first insert and prepended a stray newline
wrapped = [l[j:j + max_width] for j in range(0, len(l), max_width)]
lines[i] = '\n'.join(wrapped)
length += len(wrapped) - 1
i += 1
text = '\n'.join(lines)
width = int(max_width * font_size * width_factor)
height = int(length * font_size * height_factor)
image = Image.new('RGB', (width, height), (255, 255, 255))
draw = ImageDraw.Draw(image)
text_font = ImageFont.truetype(font_path, font_size)
title_font = ImageFont.truetype(font_path, font_size + 5)
# 标题居中
title_width, title_height = title_font.getsize(title)
draw.text(((width - title_width) / 2, 10), title, fill=(0, 0, 0), font=title_font)
# 文本不居中
draw.text((10, title_height+20), text, fill=(0, 0, 0), font=text_font)
return image
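The hard-wrapping step in `word2img` can also be expressed with the standard library's `textwrap`, which avoids manual index bookkeeping. A sketch with a hypothetical `hard_wrap` helper:

```python
import textwrap

def hard_wrap(text: str, max_width: int = 30) -> str:
    # wrap every line at max_width characters, keeping blank lines intact
    out = []
    for line in text.split('\n'):
        out.extend(textwrap.wrap(line, max_width) or [''])
    return '\n'.join(out)
```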
def render_markdown(markdown_text, image_width=800, image_height=600, font_size=26, font_color=(0, 0, 0), bg_color=(255, 255, 255)):
HEADER_MARGIN = 20
@@ -255,13 +107,15 @@ def render_markdown(markdown_text, image_width=800, image_height=600, font_size=
try:
image_url = re.findall(IMAGE_REGEX, line)[0]
print(image_url)
image_res = Image.open(requests.get(image_url, stream=True, timeout=5).raw)
images[i] = image_res
# 最大不得超过image_width的50%
img_height = image_res.size[1]
if image_res.size[0] > image_width*0.5:
image_res = image_res.resize((int(image_width*0.5), int(image_res.size[1]*image_width*0.5/image_res.size[0])))
img_height = image_res.size[1]
height += img_height + IMAGE_MARGIN*2
@@ -361,16 +215,19 @@ def render_markdown(markdown_text, image_width=800, image_height=600, font_size=
y += HEADER_MARGIN # 上边距
# 字间距
draw.text((x, y), line, font=font, fill=font_color)
draw.line((x, y + font_size_header + 8, image_width - 10, y + font_size_header + 8), fill=(230, 230, 230), width=3)
y += font_size_header + HEADER_MARGIN
elif line.startswith(">"):
# 处理引用
quote_text = line.strip(">")
y += QUOTE_LEFT_LINE_MARGIN
draw.line((x, y, x, y + QUOTE_LEFT_LINE_HEIGHT), fill=QUOTE_LEFT_LINE_COLOR, width=QUOTE_LEFT_LINE_WIDTH)
font = ImageFont.truetype(font_path, QUOTE_FONT_SIZE)
draw.text((x + QUOTE_FONT_LINE_MARGIN, y + QUOTE_FONT_LINE_MARGIN), quote_text, font=font, fill=QUOTE_FONT_COLOR)
y += font_size + QUOTE_LEFT_LINE_HEIGHT + QUOTE_LEFT_LINE_MARGIN
elif line.startswith("-"):
@@ -378,7 +235,8 @@ def render_markdown(markdown_text, image_width=800, image_height=600, font_size=
list_text = line.strip("-").strip()
font = ImageFont.truetype(font_path, LIST_FONT_SIZE)
y += LIST_MARGIN
draw.text((x, y), " · " + list_text, font=font, fill=LIST_FONT_COLOR)
y += font_size + LIST_MARGIN
elif line.startswith("```"):
@@ -390,9 +248,11 @@ def render_markdown(markdown_text, image_width=800, image_height=600, font_size=
in_code_block = False
codes = "\n".join(code_block_codes)
code_block_codes = []
draw.rounded_rectangle((x, code_block_start_y, image_width - 10, y+CODE_BLOCK_CODES_MARGIN_VERTICAL + CODE_BLOCK_TEXT_MARGIN), radius=5, fill=CODE_BLOCK_BG_COLOR, width=2)
font = ImageFont.truetype(font_path1, CODE_BLOCK_FONT_SIZE)
draw.text((x + CODE_BLOCK_CODES_MARGIN_HORIZONTAL, code_block_start_y + CODE_BLOCK_CODES_MARGIN_VERTICAL), codes, font=font, fill=font_color)
y += CODE_BLOCK_CODES_MARGIN_VERTICAL + CODE_BLOCK_MARGIN
# y += font_size+10
elif re.search(r"`(.*?)`", line):
@@ -409,11 +269,15 @@ def render_markdown(markdown_text, image_width=800, image_height=600, font_size=
if part in parts_inline:
font = ImageFont.truetype(font_path, INLINE_CODE_FONT_SIZE)
code_text = part.strip("`")
code_width = font.getsize(code_text)[0] + INLINE_CODE_FONT_MARGIN*2
x += INLINE_CODE_MARGIN
code_box = (x, y, x + code_width, y + INLINE_CODE_BG_HEIGHT)
draw.rounded_rectangle(code_box, radius=5, fill=INLINE_CODE_BG_COLOR, width=2) # 使用灰色填充矩形框作为引用背景
draw.text((x+INLINE_CODE_FONT_MARGIN, y), code_text, font=font, fill=font_color)
x += code_width+INLINE_CODE_MARGIN-INLINE_CODE_FONT_MARGIN
else:
font = ImageFont.truetype(font_path, font_size)
@@ -437,11 +301,13 @@ def render_markdown(markdown_text, image_width=800, image_height=600, font_size=
image_res = images[index]
# 最大不得超过image_width的50%
if image_res.size[0] > image_width*0.5:
image_res = image_res.resize((int(image_width*0.5), int(image_res.size[1]*image_width*0.5/image_res.size[0])))
image.paste(image_res, (IMAGE_MARGIN, y))
y += image_res.size[1] + IMAGE_MARGIN*2
return image
def save_temp_img(img: Image) -> str:
if not os.path.exists("temp"):
os.makedirs("temp")
@@ -455,31 +321,49 @@ def save_temp_img(img: Image) -> str:
if time.time() - ctime > 3600:
os.remove(path)
except Exception as e:
print(f"清除临时文件失败: {e}")
# 获得时间戳
timestamp = int(time.time())
p = f"temp/{timestamp}.jpg"
if isinstance(img, Image.Image):
img.save(p)
else:
with open(p, "wb") as f:
f.write(img)
logger.info(f"保存临时图片: {p}")
return p
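The hour-based cleanup inside `save_temp_img` can be factored into a small helper. A sketch with a hypothetical `prune_old_files` name:

```python
import os
import time

def prune_old_files(directory: str, max_age_seconds: float = 3600) -> int:
    """Delete files in directory older than max_age_seconds; return the count removed."""
    removed = 0
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if not os.path.isfile(path):
            continue
        # mirror the original check: creation time older than the cutoff
        if time.time() - os.path.getctime(path) > max_age_seconds:
            os.remove(path)
            removed += 1
    return removed
```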
async def download_image_by_url(url: str, post: bool = False, post_data: dict = None) -> str:
'''
下载图片,返回文件路径
'''
try:
logger.info(f"下载图片: {url}")
async with aiohttp.ClientSession() as session:
if post:
async with session.post(url, json=post_data) as resp:
return save_temp_img(await resp.read())
else:
async with session.get(url) as resp:
return save_temp_img(await resp.read())
except aiohttp.client_exceptions.ClientConnectorSSLError as e:
# 关闭SSL验证
ssl_context = ssl.create_default_context()
ssl_context.check_hostname = False
ssl_context.verify_mode = ssl.CERT_NONE
async with aiohttp.ClientSession(trust_env=False) as session:
if post:
async with session.post(url, json=post_data, ssl=ssl_context) as resp:
return save_temp_img(await resp.read())
else:
async with session.get(url, ssl=ssl_context) as resp:
return save_temp_img(await resp.read())
except Exception as e:
raise e
def create_markdown_image(text: str):
'''
markdown文本转图片。
@@ -492,15 +376,6 @@ def create_markdown_image(text: str):
except Exception as e:
raise e
def try_migrate_config(old_config: dict):
'''
迁移配置文件到 cmd_config.json
'''
cc = CmdConfig()
if cc.get("qqbot", None) is None:
# 未迁移过
for k in old_config:
cc.put(k, old_config[k])
def get_local_ip_addresses():
ip = ''
@@ -514,39 +389,68 @@ def get_local_ip_addresses():
s.close()
return ip
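The body of `get_local_ip_addresses` is mostly elided by the hunk; the usual UDP-connect trick it appears to use can be sketched as below (hypothetical name, and `8.8.8.8` is an assumption for the probe target):

```python
import socket

def get_local_ip_address() -> str:
    # "connecting" a UDP socket selects the outbound interface
    # without actually sending any packets
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("8.8.8.8", 80))
        return s.getsockname()[0]
    except OSError:
        return "127.0.0.1"
    finally:
        s.close()
```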
def get_sys_info(global_object: GlobalObject):
mem = None
stats = global_object.dashboard_data.stats
os_name = platform.system()
os_version = platform.version()
if 'sys_perf' in stats and 'memory' in stats['sys_perf']:
mem = stats['sys_perf']['memory']
return {
'mem': mem,
'os': os_name + '_' + os_version,
'py': platform.python_version(),
}
def upload(_global_object: GlobalObject):
'''
上传相关非敏感统计数据
'''
time.sleep(10)
while True:
addr_ip = ''
platform_stats = {}
llm_stats = {}
plugin_stats = {}
for platform in _global_object.platforms:
platform_stats[platform.platform_name] = {
"cnt_receive": platform.platform_instance.cnt_receive,
"cnt_reply": platform.platform_instance.cnt_reply
}
for llm in _global_object.llms:
stat = llm.llm_instance.model_stat
for k in stat:
llm_stats[llm.llm_name + "#" + k] = stat[k]
llm.llm_instance.reset_model_stat()
for plugin in _global_object.cached_plugins:
plugin_stats[plugin.metadata.plugin_name] = {
"metadata": plugin.metadata,
"trig_cnt": plugin.trig_cnt
}
plugin.reset_trig_cnt()
try:
res = {
"count": _global_object.cnt_total,
"ip": addr_ip,
"admin": "null",
"stat_version": "moon",
"version": _global_object.version, # 版本号
"platform_stats": platform_stats, # 过去 30 分钟各消息平台交互消息数
"llm_stats": llm_stats,
"plugin_stats": plugin_stats,
"sys": sys.platform, # 系统版本
}
resp = requests.post('https://api.soulter.top/upload', data=json.dumps(res), timeout=5)
resp = requests.post(
'https://api.soulter.top/upload', data=json.dumps(res), timeout=5)
if resp.status_code == 200:
ok = resp.json()
if ok['status'] == 'ok':
_global_object.cnt_total = 0
except BaseException as e:
pass
time.sleep(30*60)
def retry(n: int = 3):
'''
重试装饰器
'''
def decorator(func):
def wrapper(*args, **kwargs):
for i in range(n):
try:
return func(*args, **kwargs)
except Exception as e:
if i == n-1: raise e
logger.warning(f"函数 {func.__name__} 第 {i+1} 次重试... {e}")
return wrapper
return decorator
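Usage of the `retry` decorator above, with a hypothetical flaky function that succeeds on its third call. This standalone copy adds `functools.wraps` to preserve the wrapped function's name; the project's version omits it:

```python
import functools

def retry(n: int = 3):
    """Re-run the wrapped function up to n times, re-raising the last failure."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for i in range(n):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    if i == n - 1:
                        raise
        return wrapper
    return decorator

calls = {"count": 0}

@retry(n=3)
def flaky():
    # hypothetical function failing on the first two calls
    calls["count"] += 1
    if calls["count"] < 3:
        raise RuntimeError("transient failure")
    return "ok"
```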
def run_monitor(global_object: GlobalObject):
'''
View File
@@ -0,0 +1,43 @@
import aiohttp, os
from util.general_utils import download_image_by_url, create_markdown_image
from type.config import VERSION
BASE_RENDER_URL = "https://t2i.soulter.top/text2img"
TEMPLATE_PATH = os.path.join(os.path.dirname(__file__), "template")
async def text_to_image_base(text: str, return_url: bool = False) -> str:
'''
返回图像的文件路径
'''
with open(os.path.join(TEMPLATE_PATH, "base.html"), "r", encoding="utf-8") as f:
tmpl_str = f.read()
assert(tmpl_str)
text = text.replace("`", "\\`")  # escape backticks for the JS template literal
post_data = {
"tmpl": tmpl_str,
"json": return_url,
"tmpldata": {
"text": text,
"version": f"v{VERSION}",
},
"options": {
"full_page": True
}
}
if return_url:
async with aiohttp.ClientSession() as session:
async with session.post(f"{BASE_RENDER_URL}/generate", json=post_data) as resp:
ret = await resp.json()
return f"{BASE_RENDER_URL}/{ret['data']['id']}"
else:
image_path = ""
try:
image_path = await download_image_by_url(f"{BASE_RENDER_URL}/generate", post=True, post_data=post_data)
except Exception as e:
print(f"调用 markdown 渲染 API 失败,错误信息:{e},将使用本地渲染方式。")
image_path = create_markdown_image(text)
return image_path
View File
@@ -0,0 +1,247 @@
<!doctype html>
<html>
<head>
<meta charset="utf-8"/>
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/katex@0.16.10/dist/katex.min.css" integrity="sha384-wcIxkf4k558AjM3Yz3BBFQUbk/zgIYC2R0QpeeYb+TwlBVMrlgLqwRjRtGZiK7ww" crossorigin="anonymous">
<link rel="stylesheet" href="/path/to/styles/default.min.css">
<script src="/path/to/highlight.min.js"></script>
<script>hljs.highlightAll();</script>
<script defer src="https://cdn.jsdelivr.net/npm/katex@0.16.10/dist/katex.min.js" integrity="sha384-hIoBPJpTUs74ddyc4bFZSM1TVlQDA60VBbJS0oA934VSz82sBx1X7kSx2ATBDIyd" crossorigin="anonymous"></script>
<script defer src="https://cdn.jsdelivr.net/npm/katex@0.16.10/dist/contrib/auto-render.min.js" integrity="sha384-43gviWU0YVjaDtb/GhzOouOXtZMP/7XUzwPTstBeZFe/+rCMvRwr4yROQP43s0Xk" crossorigin="anonymous"
onload="renderMathInElement(document.getElementById('content'),{delimiters: [{left: '$$', right: '$$', display: true},{left: '$', right: '$', display: false}]});"></script>
</head>
<body>
<div style="background-color: #3276dc; color: #fff; font-size: 64px; ">
<span style="font-weight: bold; margin-left: 16px"># AstrBot</span>
<span>{{ version }}</span>
</div>
<article style="margin-top: 32px" id="content"></article>
<script src="https://cdn.jsdelivr.net/npm/marked/marked.min.js"></script>
<script>
document.getElementById('content').innerHTML = marked.parse(`{{ text | safe}}`);
</script>
</body>
</html>
<style>
#content {
min-width: 200px;
max-width: 85%;
margin: 0 auto;
padding: 2rem 1em 1em;
}
body {
word-break: break-word;
line-height: 1.75;
font-weight: 400;
font-size: 32px;
margin: 0;
padding: 0;
overflow-x: hidden;
color: #333;
font-family: -apple-system,BlinkMacSystemFont,Segoe UI,Helvetica,Arial,sans-serif,Apple Color Emoji,Segoe UI Emoji;
}
h1, h2, h3, h4, h5, h6 {
line-height: 1.5;
margin-top: 35px;
margin-bottom: 10px;
padding-bottom: 5px;
}
h1:first-child, h2:first-child, h3:first-child, h4:first-child, h5:first-child, h6:first-child {
margin-top: -1.5rem;
margin-bottom: 1rem;
}
h1::before, h2::before, h3::before, h4::before, h5::before, h6::before {
content: "#";
display: inline-block;
color: #3eaf7c;
padding-right: 0.23em;
}
h1 {
position: relative;
font-size: 2.5rem;
margin-bottom: 5px;
}
h1::before {
font-size: 2.5rem;
}
h2 {
padding-bottom: 0.5rem;
font-size: 2.2rem;
border-bottom: 1px solid #ececec;
}
h3 {
font-size: 1.5rem;
padding-bottom: 0;
}
h4 {
font-size: 1.25rem;
}
h5 {
font-size: 1rem;
}
h6 {
margin-top: 5px;
}
p {
line-height: inherit;
margin-top: 22px;
margin-bottom: 22px;
}
strong {
color: #3eaf7c;
}
img {
max-width: 100%;
border-radius: 2px;
display: block;
margin: auto;
border: 3px solid rgba(62, 175, 124, 0.2);
}
hr {
border-top: 1px solid #3eaf7c;
border-bottom: none;
border-left: none;
border-right: none;
margin-top: 32px;
margin-bottom: 32px;
}
code {
font-family: Menlo, Monaco, Consolas, "Courier New", monospace;
word-break: break-word;
overflow-x: auto;
padding: 0.2rem 0.5rem;
margin: 0;
color: #3eaf7c;
font-size: 0.85em;
background-color: rgba(27, 31, 35, 0.05);
border-radius: 3px;
}
pre {
font-family: Menlo, Monaco, Consolas, "Courier New", monospace;
overflow: auto;
position: relative;
line-height: 1.75;
border-radius: 6px;
border: 2px solid #3eaf7c;
}
pre > code {
font-size: 12px;
padding: 15px 12px;
margin: 0;
word-break: normal;
display: block;
overflow-x: auto;
color: #333;
background: #f8f8f8;
}
a {
font-weight: 500;
text-decoration: none;
color: #3eaf7c;
}
a:hover, a:active {
border-bottom: 1.5px solid #3eaf7c;
}
a:before {
content: "⇲";
}
table {
display: inline-block !important;
font-size: 12px;
width: auto;
max-width: 100%;
overflow: auto;
border: solid 1px #3eaf7c;
}
thead {
background: #3eaf7c;
color: #fff;
text-align: left;
}
tr:nth-child(2n) {
background-color: rgba(62, 175, 124, 0.2);
}
th, td {
padding: 12px 7px;
line-height: 24px;
}
td {
min-width: 120px;
}
blockquote {
color: #666;
padding: 1px 23px;
margin: 22px 0;
border-left: 0.5rem solid rgba(62, 175, 124, 0.6);
border-color: #42b983;
background-color: #f8f8f8;
}
blockquote::after {
display: block;
content: "";
}
blockquote > p {
margin: 10px 0;
}
details {
border: none;
outline: none;
border-left: 4px solid #3eaf7c;
padding-left: 10px;
margin-left: 4px;
}
details summary {
cursor: pointer;
border: none;
outline: none;
background: white;
margin: 0px -17px;
}
details summary::-webkit-details-marker {
color: #3eaf7c;
}
ol, ul {
padding-left: 28px;
}
ol li, ul li {
margin-bottom: 0;
list-style: inherit;
}
ol li .task-list-item, ul li .task-list-item {
list-style: none;
}
ol li .task-list-item ul, ul li .task-list-item ul, ol li .task-list-item ol, ul li .task-list-item ol {
margin-top: 0;
}
ol ul, ul ul, ol ol, ul ol {
margin-top: 3px;
}
ol li {
padding-left: 6px;
}
ol li::marker {
color: #3eaf7c;
}
ul li {
list-style: none;
}
ul li:before {
content: "•";
margin-right: 4px;
color: #3eaf7c;
}
@media (max-width: 720px) {
h1 {
font-size: 24px;
}
h2 {
font-size: 20px;
}
h3 {
font-size: 18px;
}
}
</style>
View File
@@ -1,11 +1,5 @@
from type.plugin import PluginMetadata, PluginType
from type.register import RegisteredLLM, RegisteredPlatform, RegisteredPlugin, RegisteredPlugins
from type.types import GlobalObject
from type.message import AstrMessageEvent
from type.command import CommandResult
View File
@@ -1,5 +1,6 @@
from astrbot.core import oper_msg
from type.message import *
from type.command import CommandResult
from model.platform._message_result import MessageResult
'''
View File
@@ -5,7 +5,8 @@
'''
from model.provider.provider import Provider as LLMProvider
from model.platform._platfrom import Platform
from type.types import GlobalObject
from type.register import RegisteredPlatform, RegisteredLLM
def register_platform(platform_name: str, platform_instance: Platform, context: GlobalObject) -> None:
'''
View File
@@ -2,4 +2,4 @@
插件类型
'''
from type.plugin import PluginType
View File
@@ -1,26 +1,25 @@
'''
插件工具函数
'''
import os, sys
import inspect
import shutil
import stat
import traceback
try:
import git.exc
from git.repo import Repo
except ImportError:
pass
import importlib
from types import ModuleType
from typing import List
from pip._internal import main as pipmain
from type.plugin import *
from type.register import *
from SparkleLogging.utils.core import LogManager
from logging import Logger
from type.types import GlobalObject
logger: Logger = LogManager.GetLogger(log_name='astrbot-core')
# 找出模块里所有的类名
@@ -35,6 +34,8 @@ def get_classes(p_name, arg: ModuleType):
return classes
# 获取一个文件夹下所有的模块, 文件名和文件夹名相同
def get_modules(path):
modules = []
@@ -58,31 +59,52 @@ def get_modules(path):
})
return modules
def get_plugin_store_path():
plugin_dir = os.path.abspath(os.path.join(os.path.abspath(__file__), "../../addons/plugins"))
return plugin_dir
def get_plugin_modules():
plugins = []
try:
plugin_dir = get_plugin_store_path()
if os.path.exists(plugin_dir):
plugins = get_modules(plugin_dir)
return plugins
else:
return None
except BaseException as e:
raise e
def check_plugin_dept_update(cached_plugins: RegisteredPlugins, target_plugin: str = None):
plugin_dir = get_plugin_store_path()
if not os.path.exists(plugin_dir):
return False
to_update = []
if target_plugin:
to_update.append(target_plugin)
else:
for p in cached_plugins:
to_update.append(p.root_dir_name)
for p in to_update:
plugin_path = os.path.join(plugin_dir, p)
if os.path.exists(os.path.join(plugin_path, "requirements.txt")):
pth = os.path.join(plugin_path, "requirements.txt")
logger.info(f"正在检查更新插件 {p} 的依赖: {pth}")
update_plugin_dept(os.path.join(plugin_path, "requirements.txt"))
def has_init_param(cls, param_name):
try:
# 获取 __init__ 方法的签名
init_signature = inspect.signature(cls.__init__)
# 检查参数名是否在签名中
return param_name in init_signature.parameters
except (AttributeError, ValueError):
# 如果类没有 __init__ 方法或者无法获取签名
return False
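`plugin_reload` below probes for a `ctx` parameter by calling the constructor and catching the failure; `has_init_param` allows checking upfront instead. A self-contained sketch with hypothetical plugin classes:

```python
import inspect

def has_init_param(cls, param_name: str) -> bool:
    # check whether the class constructor declares the given parameter
    try:
        return param_name in inspect.signature(cls.__init__).parameters
    except (AttributeError, ValueError):
        return False

class WithCtx:  # hypothetical plugin accepting a context object
    def __init__(self, ctx=None):
        self.ctx = ctx

class NoCtx:  # hypothetical plugin with a bare constructor
    def __init__(self):
        pass

def construct(cls, ctx):
    # instantiate with ctx only when the constructor supports it
    return cls(ctx=ctx) if has_init_param(cls, "ctx") else cls()
```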
def plugin_reload(ctx: GlobalObject):
cached_plugins = ctx.cached_plugins
plugins = get_plugin_modules()
if plugins is None:
return False, "未找到任何插件模块"
@@ -98,13 +120,17 @@ def plugin_reload(cached_plugins: RegisteredPlugins):
module_path = plugin['module_path']
root_dir_name = plugin['pname']
if module_path in registered_map:
# 之前注册过
module = importlib.reload(module)
else:
check_plugin_dept_update(cached_plugins, root_dir_name)
module = __import__("addons.plugins." + root_dir_name + "." + p, fromlist=[p])
cls = get_classes(p, module)
try:
# 尝试传入 ctx
obj = getattr(module, cls[0])(ctx=ctx)
except:
obj = getattr(module, cls[0])()
metadata = None
@@ -131,6 +157,8 @@ def plugin_reload(cached_plugins: RegisteredPlugins):
except BaseException as e:
fail_rec += f"注册插件 {module_path} 失败, 原因: {str(e)}\n"
continue
if module_path not in registered_map:
cached_plugins.append(RegisteredPlugin(
metadata=metadata,
plugin_instance=obj,
@@ -146,7 +174,13 @@ def plugin_reload(cached_plugins: RegisteredPlugins):
else:
return False, fail_rec
def update_plugin_dept(path):
mirror = "https://mirrors.aliyun.com/pypi/simple/"
py = sys.executable
os.system(f"{py} -m pip install -r {path} -i {mirror} --quiet")
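`os.system` with an interpolated path is fragile when the path contains spaces. A `subprocess`-based variant (a sketch under the same mirror assumption, with a hypothetical `install_requirements` name) sidesteps shell quoting entirely:

```python
import subprocess
import sys

def install_requirements(path: str,
                         mirror: str = "https://mirrors.aliyun.com/pypi/simple/") -> bool:
    """Install a requirements file through the mirror; True if pip exited 0."""
    # list-form argv is passed to pip directly, so no shell quoting is involved
    result = subprocess.run(
        [sys.executable, "-m", "pip", "install", "-r", path, "-i", mirror, "--quiet"],
        capture_output=True,
    )
    return result.returncode == 0
```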
def install_plugin(repo_url: str, ctx: GlobalObject):
ppath = get_plugin_store_path()
# 删除末尾的 /
if repo_url.endswith("/"):
@@ -155,17 +189,16 @@ def install_plugin(repo_url: str, cached_plugins: RegisteredPlugins):
d = repo_url.split("/")[-1]
# 转换非法字符:-
d = d.replace("-", "_")
d = d.lower() # 转换为小写
# 创建文件夹
plugin_path = os.path.join(ppath, d)
if os.path.exists(plugin_path):
remove_dir(plugin_path)
Repo.clone_from(repo_url, to_path=plugin_path, branch='master')
# 读取插件的requirements.txt
if os.path.exists(os.path.join(plugin_path, "requirements.txt")):
if pipmain(['install', '-r', os.path.join(plugin_path, "requirements.txt"), '--quiet']) != 0:
raise Exception("插件的依赖安装失败, 需要您手动 pip 安装对应插件的依赖。")
ok, err = plugin_reload(ctx)
if not ok:
raise Exception(err)
def get_registered_plugin(plugin_name: str, cached_plugins: RegisteredPlugins) -> RegisteredPlugin:
ret = None
@@ -175,18 +208,20 @@ def get_registered_plugin(plugin_name: str, cached_plugins: RegisteredPlugins) -
break
return ret
def uninstall_plugin(plugin_name: str, ctx: GlobalObject):
plugin = get_registered_plugin(plugin_name, ctx.cached_plugins)
if not plugin:
raise Exception("插件不存在。")
root_dir_name = plugin.root_dir_name
ppath = get_plugin_store_path()
ctx.cached_plugins.remove(plugin)
if not remove_dir(os.path.join(ppath, root_dir_name)):
raise Exception("移除插件成功,但是删除插件文件夹失败。您可以手动删除该文件夹,位于 addons/plugins/ 下。")
def update_plugin(plugin_name: str, ctx: GlobalObject):
plugin = get_registered_plugin(plugin_name, ctx.cached_plugins)
if not plugin:
raise Exception("插件不存在。")
ppath = get_plugin_store_path()
@@ -194,12 +229,10 @@ def update_plugin(plugin_name: str, cached_plugins: RegisteredPlugins):
plugin_path = os.path.join(ppath, root_dir_name)
repo = Repo(path=plugin_path)
repo.remotes.origin.pull()
ok, err = plugin_reload(ctx)
if not ok:
raise Exception(err)
def remove_dir(file_path) -> bool:
try_cnt = 50
View File
@@ -0,0 +1,38 @@
from typing import List
try:
from util.search_engine_scraper.engine import SearchEngine, SearchResult
from util.search_engine_scraper.config import HEADERS, USER_AGENT_BING
except ImportError:
from engine import SearchEngine, SearchResult
from config import HEADERS, USER_AGENT_BING
class Bing(SearchEngine):
def __init__(self) -> None:
super().__init__()
self.base_url = "https://www.bing.com"
self.headers.update({'User-Agent': USER_AGENT_BING})
def _set_selector(self, selector: str):
selectors = {
'url': 'div.b_attribution cite',
'title': 'h2',
'text': 'p',
'links': 'ol#b_results > li.b_algo',
'next': 'div#b_content nav[role="navigation"] a.sb_pagN'
}
return selectors[selector]
async def _get_next_page(self, query) -> str:
if self.page == 1:
await self._get_html(self.base_url)
url = f'{self.base_url}/search?q={query}&form=QBLH&sp=-1&lq=0&pq=hi&sc=10-2&qs=n&sk=&cvid=DE75965E2D6346D681288933984DE48F&ghsh=0&ghacc=0&ghpl='
return await self._get_html(url, None)
async def search(self, query: str, num_results: int) -> List[SearchResult]:
results = await super().search(query, num_results)
for result in results:
if not isinstance(result.url, str):
result.url = result.url.text
return results
View File
@@ -0,0 +1,20 @@
HEADERS = {
'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; rv:84.0) Gecko/20100101 Firefox/84.0',
'Accept': '*/*',
'Connection': 'keep-alive',
'Accept-Language': 'en-GB,en;q=0.5'
}
USER_AGENT_BING = 'Mozilla/5.0 (Windows NT 6.1; rv:84.0) Gecko/20100101 Firefox/84.0'
USER_AGENTS = [
'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36',
'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36',
'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:89.0) Gecko/20100101 Firefox/89.0',
'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:88.0) Gecko/20100101 Firefox/88.0',
'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36',
'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36',
'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Version/14.1.2 Safari/537.36',
'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Version/14.1 Safari/537.36',
'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:89.0) Gecko/20100101 Firefox/89.0',
'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:88.0) Gecko/20100101 Firefox/88.0'
]
View File
@@ -0,0 +1,73 @@
import random
try:
from util.search_engine_scraper.config import HEADERS, USER_AGENTS
except ImportError:
from config import HEADERS, USER_AGENTS
from bs4 import BeautifulSoup
from aiohttp import ClientSession
from dataclasses import dataclass
from typing import List
@dataclass
class SearchResult():
title: str
url: str
snippet: str
def __str__(self) -> str:
return f"{self.title} - {self.url}\n{self.snippet}"
class SearchEngine():
'''
搜索引擎爬虫基类
'''
def __init__(self) -> None:
self.TIMEOUT = 10
self.page = 1
self.headers = HEADERS
def _set_selector(self, selector: str) -> None:
raise NotImplementedError()
def _get_next_page(self, query) -> str:
raise NotImplementedError()
async def _get_html(self, url: str, data: dict = None) -> str:
headers = self.headers
headers["Referer"] = url
headers["User-Agent"] = random.choice(USER_AGENTS)
if data:
async with ClientSession() as session:
async with session.post(url, headers=headers, data=data, timeout=self.TIMEOUT) as resp:
return await resp.text(encoding="utf-8")
else:
async with ClientSession() as session:
async with session.get(url, headers=headers, timeout=self.TIMEOUT) as resp:
return await resp.text(encoding="utf-8")
def tidy_text(self, text: str) -> str:
'''
清理文本,去除空格、换行符等
'''
return text.strip().replace("\n", " ").replace("\r", " ").replace("  ", " ")
async def search(self, query: str, num_results: int) -> List[SearchResult]:
try:
resp = await self._get_next_page(query)
soup = BeautifulSoup(resp, 'html.parser')
links = soup.select(self._set_selector('links'))
results = []
for link in links:
title = self.tidy_text(link.select_one(self._set_selector('title')).text)
url = link.select_one(self._set_selector('url'))
snippet = ''
if title and url:
results.append(SearchResult(title=title, url=url, snippet=snippet))
return results[:num_results] if len(results) > num_results else results
except Exception as e:
raise e
View File
@@ -0,0 +1,27 @@
import os
from googlesearch import search
try:
from util.search_engine_scraper.engine import SearchEngine, SearchResult
from util.search_engine_scraper.config import HEADERS, USER_AGENTS
except ImportError:
from engine import SearchEngine, SearchResult
from config import HEADERS, USER_AGENTS
from typing import List
class Google(SearchEngine):
def __init__(self) -> None:
super().__init__()
self.proxy = os.environ.get("HTTPS_PROXY")
async def search(self, query: str, num_results: int) -> List[SearchResult]:
results = []
try:
print("use proxy:", self.proxy)
ls = search(query, advanced=True, num_results=num_results, timeout=3, proxy=self.proxy)
for i in ls:
results.append(SearchResult(title=i.title, url=i.url, snippet=i.description))
except Exception as e:
raise e
return results
View File
@@ -0,0 +1,49 @@
import random, re
from bs4 import BeautifulSoup
try:
from util.search_engine_scraper.engine import SearchEngine, SearchResult
from util.search_engine_scraper.config import HEADERS, USER_AGENTS
except ImportError:
from engine import SearchEngine, SearchResult
from config import HEADERS, USER_AGENTS
from typing import List
class Sogo(SearchEngine):
def __init__(self) -> None:
super().__init__()
self.base_url = "https://www.sogou.com"
self.headers['User-Agent'] = random.choice(USER_AGENTS)
def _set_selector(self, selector: str):
selectors = {
'url': 'h3 > a',
'title': 'h3',
'text': '',
'links': 'div.results > div.vrwrap:not(.middle-better-hintBox)',
'next': ''
}
return selectors[selector]
async def _get_next_page(self, query) -> str:
url = f'{self.base_url}/web?query={query}'
return await self._get_html(url, None)
async def search(self, query: str, num_results: int) -> List[SearchResult]:
results = await super().search(query, num_results)
for result in results:
result.url = result.url.get("href")
if result.url.startswith("/link?"):
result.url = self.base_url + result.url
result.url = await self._parse_url(result.url)
return results
async def _parse_url(self, url) -> str:
html = await self._get_html(url)
soup = BeautifulSoup(html, 'html.parser')
script = soup.find("script")
if script:
url = re.search(r'window\.location\.replace\("(.+?)"\)', script.string).group(1)
return url
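The redirect extraction in `_parse_url` boils down to one regex; an isolated, testable sketch:

```python
import re

def extract_redirect(script_text: str) -> str:
    # pull the target URL out of window.location.replace("...")
    m = re.search(r'window\.location\.replace\("(.+?)"\)', script_text)
    return m.group(1) if m else ""
```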
View File
@@ -0,0 +1,22 @@
from sogo import Sogo
from bing import Bing
sogo_search = Sogo()
bing_search = Bing()
async def search(keyword: str) -> str:
results = await sogo_search.search(keyword, 5)
# results = await bing_search.search(keyword, 5)
ret = ""
if len(results) == 0:
return "没有搜索到结果"
idx = 1
for i in results:
ret += f"{idx}. {i.title}({i.url})\n{i.snippet}\n\n"
idx += 1
return ret
if __name__ == "__main__":
import asyncio
ret = asyncio.run(search("gpt4orelease"))
print(ret)
View File
@@ -6,9 +6,35 @@ except BaseException as e:
has_git = False
import sys, os
import requests
import psutil
from type.config import VERSION
from SparkleLogging.utils.core import LogManager
from logging import Logger
logger: Logger = LogManager.GetLogger(log_name='astrbot-core')
def terminate_child_processes():
try:
parent = psutil.Process(os.getpid())
children = parent.children(recursive=True)
logger.info(f"正在终止 {len(children)} 个子进程。")
for child in children:
logger.info(f"正在终止子进程 {child.pid}")
child.terminate()
try:
child.wait(timeout=3)
except psutil.NoSuchProcess:
continue
except psutil.TimeoutExpired:
logger.info(f"子进程 {child.pid} 没有被正常终止, 正在强行杀死。")
child.kill()
except psutil.NoSuchProcess:
pass
def _reboot():
py = sys.executable
terminate_child_processes()
os.execl(py, py, *sys.argv)
def find_repo() -> Repo:
@@ -78,20 +104,22 @@ def check_update() -> str:
print(f"当前版本: {curr_commit}")
print(f"最新版本: {new_commit}")
if curr_commit.startswith(new_commit):
return f"当前已经是最新版本: v{VERSION}"
else:
update_info = f"""> 有新版本可用,请及时更新
# 当前版本
v{VERSION}
# 最新版本
{update_data[0]['version']}
# 发布时间
{update_data[0]['published_at']}
# 更新内容
---
{update_data[0]['body']}
---"""
return update_info
def update_project(update_data: list,
@@ -111,34 +139,32 @@ def update_project(update_data: list,
else:
# 更新到最新版本对应的commit
try:
repo.git.fetch()
repo.git.checkout(update_data[0]['tag_name'], "-f")
if reboot: _reboot()
except BaseException as e:
raise e
else:
# 更新到指定版本
flag = False
print(f"请求更新到指定版本: {version}")
for data in update_data:
if data['tag_name'] == version:
try:
repo.git.fetch()
repo.git.checkout(data['tag_name'], "-f")
flag = True
if reboot: _reboot()
except BaseException as e:
raise e
else:
continue
if not flag:
raise Exception("未找到指定版本。")
def checkout_branch(branch_name: str):
repo = find_repo()
try:
repo.git.fetch()
repo.git.checkout(branch_name, "-f")
repo.git.pull("origin", branch_name, "-f")
return True
except BaseException as e: