Compare commits

...

26 Commits

Author SHA1 Message Date
beyondkmp
04cb96558b update libsql 2025-11-20 15:59:15 +08:00
亢奋猫
0f1a487bb0 refactor: simplify agent creation form (#11369)
* refactor(AgentModal): simplify agent type handling and update default values

- Removed unused agent type options and related logic.
- Updated default agent name from 'Claude Code' to 'Agent'.
- Adjusted padding in button styles and textarea rows for better UI consistency.
- Cleaned up unnecessary imports and code comments for improved readability.

* refactor(AgentSettings): clean up and enhance name setting component

- Removed unused imports and commented-out code in AgentModal and EssentialSettings.
- Updated NameSetting to include an emoji avatar picker for enhanced user experience.
- Simplified the logic for updating the agent's name and avatar.
- Improved overall readability and maintainability of the code.
2025-11-20 10:42:49 +08:00
亢奋猫
2df8bb58df fix: remove light background from MCP NpxUv install alerts (#11372)
- Remove 'banner' prop from Alert components in InstallNpxUv
- Set SettingContainer background to 'inherit' in MCP settings
- Fixes the light background color issue in NpxUv interface

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-20 10:41:41 +08:00
defi-failure
62976f6fe0 refactor: namespace tool call ids with session id to prevent conflicts (#11319) 2025-11-20 10:35:11 +08:00
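The namespacing idea in this commit can be sketched roughly as follows; the helper names and the `:` separator are assumptions for illustration, not the PR's actual code.

```typescript
// Hypothetical sketch: prefix a tool call id with its session id so that
// identical ids emitted in different sessions can no longer collide.
const namespaceToolCallId = (sessionId: string, toolCallId: string): string =>
  `${sessionId}:${toolCallId}`

// Recover the original id (e.g. when reporting tool results back to the model).
const stripNamespace = (namespacedId: string): string =>
  namespacedId.slice(namespacedId.indexOf(':') + 1)
```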
MyPrototypeWhat
77529b3cd3 chore: update ai-core release scripts and bump version to 1.0.7 (#11370)
* chore: update ai-core release scripts and bump version to 1.0.7

* chore: update ai-sdk-provider release script to include build step and enhance type exports in webSearchPlugin and providers

* chore: bump @cherrystudio/ai-core version to 1.0.8 and update dependencies in package.json and yarn.lock

* chore: bump @cherrystudio/ai-core version to 1.0.9 and @cherrystudio/ai-sdk-provider version to 0.1.2 in package.json and yarn.lock

---------

Co-authored-by: suyao <sy20010504@gmail.com>
2025-11-19 20:44:22 +08:00
SuYao
c8e9a10190 bump ai core version (#11363)
* bump ai core version

* chore

* chore: add patch for @ai-sdk/openai and update peer dependencies in aiCore

* chore: update installation instructions in README to include @ai-sdk/google and @ai-sdk/openai

* chore: bump @cherrystudio/ai-core version to 1.0.6 in package.json and yarn.lock

---------

Co-authored-by: MyPrototypeWhat <daoquqiexing@gmail.com>
2025-11-19 18:13:33 +08:00
scientia
0e011ff35f fix: fix api-host for vercel ai-gateway provider (#11321)
Co-authored-by: scientia <wangdenghui@xiaomi.com>
2025-11-19 17:11:17 +08:00
MyPrototypeWhat
40a64a7c92 feat(options): enhance provider key handling for cherryin in buildPro… (#11361)
feat(options): enhance provider key handling for cherryin in buildProviderOptions function
2025-11-19 16:25:29 +08:00
Phantom
dc9503ef8b feat: support gemini 3 (#11356)
* feat(reasoning): add support for gemini-3-pro-preview model

Update regex pattern to include gemini-3-pro-preview as a supported thinking model
Add tests for new gemini-3 model support and edge cases

* fix(reasoning): update gemini model regex to include stable versions

Add support for stable versions of gemini-3-flash and gemini-3-pro in the model regex pattern. Update tests to verify both preview and stable versions are correctly identified.

* feat(providers): add vertexai provider check function

Add isVertexAiProvider function to consistently check for vertexai provider type and use it in websearch model detection

* feat(websearch): update gemini search regex to include v3 models

Add support for gemini 3.x models in the search regex pattern, including preview versions

* feat(vision): add support for gemini-3 models and add tests

Add regex pattern for gemini-3 models in visionAllowedModels
Create comprehensive test suite for isVisionModel function

* refactor(vision): make vision-related model constants private

Remove unused isNotSupportedImageSizeModel function and change exports to const declarations for internal use only

* chore(deps): update @ai-sdk/google to v2.0.36 and related dependencies

update @ai-sdk/google dependency from v2.0.31 to v2.0.36 to include fixes for model path handling and tool support for newer Gemini models

* chore: remove outdated @ai-sdk-google patch file

* chore: remove outdated @ai-sdk/google patch dependency
2025-11-19 14:05:14 +08:00
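The regex updates described above (preview and stable gemini-2.5/gemini-3 ids treated as thinking models) could look roughly like this; the pattern is an assumption, not the PR's exact regex.

```typescript
// Hypothetical sketch of a thinking-model check covering gemini-2.5 and
// gemini-3 families, both preview (gemini-3-pro-preview) and stable ids.
const THINKING_MODEL_REGEX = /gemini-(2\.5|3)(-|\.|$)/

const isThinkingModel = (modelId: string): boolean =>
  THINKING_MODEL_REGEX.test(modelId)
```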
beyondkmp
f2c8484c48 feat: enable local crash mini dump file (#11348)
* feat: enable local crash mini dump file

* update version
2025-11-18 18:27:57 +08:00
kangfenmao
a9c9224835 fix(migrate): update anthropicApiHost for qiniu and longcat providers in migration to version 176
- Added anthropicApiHost configuration for qiniu and longcat providers during state migration.
- Incremented version number in persistedReducer to 176.
- Ensured proper handling of reasoning_effort settings during migration.
2025-11-18 11:05:46 +08:00
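A versioned state migration like the one described above can be sketched in the redux-persist style; the state shape and the placeholder host value are assumptions for illustration only.

```typescript
// Hypothetical sketch of a version-176 migration that backfills
// anthropicApiHost for the qiniu and longcat providers.
type Provider = { id: string; anthropicApiHost?: string }
type RootState = { llm: { providers: Provider[] } }

const migration176 = (state: RootState): RootState => ({
  ...state,
  llm: {
    ...state.llm,
    providers: state.llm.providers.map((p) =>
      p.id === 'qiniu' || p.id === 'longcat'
        ? { ...p, anthropicApiHost: p.anthropicApiHost ?? 'https://example.invalid' } // placeholder host
        : p
    )
  }
})
```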
caoli5288
43223fd1f5 feat(config): add anthropicApiHost for qiniu and longcat providers (#11335) 2025-11-18 10:10:59 +08:00
Phantom
4bac843b37 fix(InputbarCore): prevent message send when cannotSend is true (#11337)
Add cannotSend check to prevent message sending when conditions aren't met
2025-11-18 10:08:54 +08:00
Phantom
34723934f4 fix: use function as default tool use mode (#11338)
* refactor(assistant): change default tool use mode to function and use default settings

Simplify reset logic by using DEFAULT_ASSISTANT_SETTINGS object instead of hardcoded values

* fix(ApiService): safely fallback to prompt tool use for unsupported models

Add check for function calling model support before using tool use mode to prevent errors with unsupported models.
2025-11-17 23:28:43 +08:00
defi-failure
096c36caf8 fix: improve todo tool status icon visibility and colors (#11323) 2025-11-17 14:01:27 +08:00
beyondkmp
139950e193 fix(i18n): add input placeholder translations for multiple languages (#11320)
feat(i18n): add input placeholder translations for multiple languages

- Introduced a new placeholder for the input field in various language files, providing guidance on message entry and command selection.
- Updated English, Chinese (Simplified and Traditional), German, Greek, Spanish, French, Japanese, Portuguese, and Russian translations to include the new input placeholder text.
- Adjusted the reference in the AgentSessionInputbar component to use the new translation key for consistency.
2025-11-17 11:51:04 +08:00
SuYao
31eec403f7 fix: url context and web search capability (#11306)
* fix: enhance support for interleaved thinking and model compatibility

* fix: type
2025-11-17 10:53:47 +08:00
槑囿脑袋
7fd4837a47 fix: mineru validate pdf error and 403 error (#11312)
* fix: validate pdf error

* fix: net fetch error

* fix: mineru 403 error

* chore: change comment to english

* fix: format
2025-11-16 16:02:15 +00:00
Carlton
90b0c8b4a6 fix: resolve "no such file" error when processing non-English filenames in open-mineru (#11315) 2025-11-16 22:10:43 +08:00
github-actions[bot]
556353e910 docs: Weekly Automated Update: Nov 16, 2025 (#11308)
feat(bot): Weekly automated script run

Co-authored-by: EurFelux <59059173+EurFelux@users.noreply.github.com>
2025-11-16 10:57:32 +08:00
Copilot
11fb730b4d fix: add verbosity parameter support for GPT-5 models across legacy and modern AI SDK (#11281)
* Initial plan

* feat: add verbosity parameter support for GPT-5 models in OpenAIAPIClient

Co-authored-by: DeJeune <67425183+DeJeune@users.noreply.github.com>

* fix: ensure gpt-5-pro always uses 'high' verbosity

Co-authored-by: DeJeune <67425183+DeJeune@users.noreply.github.com>

* refactor: move verbosity configuration to config/models as suggested

Co-authored-by: DeJeune <67425183+DeJeune@users.noreply.github.com>

* refactor: encapsulate verbosity logic in getVerbosity method

Co-authored-by: DeJeune <67425183+DeJeune@users.noreply.github.com>

* feat: add support for verbosity and reasoning options for GPT-5 Pro and GPT-5.1 models

* fix comment

* build: add @ai-sdk/google dependency

Add the @ai-sdk/google package to support Google AI SDK integration

* build: add @ai-sdk/anthropic dependency

* refactor(aiCore): update reasoning params handling for AI providers

- Add type imports for provider options
- Handle 'none' reasoning effort consistently across providers
- Improve type safety by using Pick with provider options
- Standardize disabled reasoning config for all providers

* fix: adjust none effort ratio from 0 to 0.01

Prevent potential division by zero errors by ensuring none effort ratio has a small positive value

* feat(reasoning): add support for GPT-5.1 series models

Handle 'none' reasoning effort for GPT-5.1 models and add model type check

* Update src/renderer/src/aiCore/utils/reasoning.ts

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: DeJeune <67425183+DeJeune@users.noreply.github.com>
Co-authored-by: suyao <sy20010504@gmail.com>
Co-authored-by: icarus <eurfelux@gmail.com>
2025-11-16 10:22:14 +08:00
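The `getVerbosity` encapsulation mentioned above could be sketched like this; the model-id checks and defaults are assumptions, not the PR's exact rules.

```typescript
// Hypothetical sketch: verbosity is a GPT-5-family parameter, and
// gpt-5-pro is always pinned to 'high' regardless of what was requested.
type Verbosity = 'low' | 'medium' | 'high'

function getVerbosity(modelId: string, requested?: Verbosity): Verbosity | undefined {
  if (modelId.includes('gpt-5-pro')) return 'high' // always forced to 'high'
  if (modelId.includes('gpt-5')) return requested  // covers gpt-5 and gpt-5.1 ids
  return undefined // other models take no verbosity parameter
}
```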
Phantom
2511113b62 feat: support gpt-5.1 (#11294)
* build: update @cherrystudio/openai dependency from v6.5.0 to v6.9.0

* refactor(reasoning): replace 'off' with 'none' for reasoning effort option

Update reasoning effort option from 'off' to 'none' across multiple files for consistency
Add support for gpt5_1 model with reasoning effort options

* fix(openai): handle apply_patch_call and apply_patch_call_output in response conversion

Filter and properly handle apply_patch_call and apply_patch_call_output types in OpenAI response conversion. Ensure undefined/null values are handled appropriately and log warnings for missing required fields.

* feat(models): add gpt-5.1 model logo and configuration

* fix(providers): include cherryin in url context provider check

Add SystemProviderIds.cherryin to the list of providers that support URL context to ensure proper functionality

* feat(models): add logo images for gpt-5.1 model variants

* feat(model): add support for GPT-5.1 series models

- Add new model type check for GPT-5.1 series
- Update reasoning effort and verbosity checks to include GPT-5.1
- Add logging to provider options builder

* feat(models): add gpt5_1_codex model support

Add new model type 'gpt5_1_codex' to ThinkModelTypes and configure its reasoning effort levels
Update model type detection logic to handle gpt5_1_codex variant
2025-11-15 19:09:43 +08:00
beyondkmp
a29b2bb3d6 chore: update @opeoginni/github-copilot-openai-compatible to support gpt5.1 (#11299)
* chore: update @opeoginni/github-copilot-openai-compatible to version 0.1.21

- Updated package version in package.json and yarn.lock.
- Refactored OpenAIBaseClient to enhance getBaseURL method and improve header management for SDK instances.

* format
2025-11-15 19:07:16 +08:00
beyondkmp
d2be450906 fix: update gitcode update config url (#11298)
* fix: update gitcode update config url

* update version

---------

Co-authored-by: Payne Fu <payne@Paynes-MacBook-Pro.local>
2025-11-15 10:01:33 +08:00
kangfenmao
9c020f0d56 docs: update release notes for v1.7.0-rc.1
Add comprehensive release notes highlighting:
- AI Agent system as the major new feature
- New AI providers support (Hugging Face, Mistral, Perplexity, SophNet)
- Knowledge base enhancements (OpenMinerU, full-text search)
- Image & OCR improvements (Intel OVMS, OpenVINO NPU)
- MCP management interface redesign with dual-column layout
- German language support
- Electron 38.7.0 upgrade and system improvements
- Important bug fixes
2025-11-14 20:04:16 +08:00
fullex
e033eb5b5c Add CODEOWNER for app-upgrade-config.json 2025-11-14 19:02:03 +08:00
78 changed files with 1605 additions and 574 deletions

.github/CODEOWNERS vendored
View File

@@ -3,3 +3,4 @@
/src/main/services/ConfigManager.ts @0xfullex
/packages/shared/IpcChannel.ts @0xfullex
/src/main/ipc.ts @0xfullex
/app-upgrade-config.json @kangfenmao

View File

@@ -1,26 +0,0 @@
diff --git a/dist/index.js b/dist/index.js
index ff305b112779b718f21a636a27b1196125a332d9..cf32ff5086d4d9e56f8fe90c98724559083bafc3 100644
--- a/dist/index.js
+++ b/dist/index.js
@@ -471,7 +471,7 @@ function convertToGoogleGenerativeAIMessages(prompt, options) {
// src/get-model-path.ts
function getModelPath(modelId) {
- return modelId.includes("/") ? modelId : `models/${modelId}`;
+ return modelId.includes("models/") ? modelId : `models/${modelId}`;
}
// src/google-generative-ai-options.ts
diff --git a/dist/index.mjs b/dist/index.mjs
index 57659290f1cec74878a385626ad75b2a4d5cd3fc..d04e5927ec3725b6ffdb80868bfa1b5a48849537 100644
--- a/dist/index.mjs
+++ b/dist/index.mjs
@@ -477,7 +477,7 @@ function convertToGoogleGenerativeAIMessages(prompt, options) {
// src/get-model-path.ts
function getModelPath(modelId) {
- return modelId.includes("/") ? modelId : `models/${modelId}`;
+ return modelId.includes("models/") ? modelId : `models/${modelId}`;
}
// src/google-generative-ai-options.ts
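The patch above narrows the pass-through condition: previously any slash in the id suppressed the `models/` prefix, now only an id that already contains `models/` is left unchanged. Side by side:

```typescript
// Old vs new behavior of getModelPath, as shown in the patch above.
const getModelPathOld = (modelId: string) =>
  modelId.includes('/') ? modelId : `models/${modelId}`

const getModelPathNew = (modelId: string) =>
  modelId.includes('models/') ? modelId : `models/${modelId}`
```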

View File

@@ -0,0 +1,152 @@
diff --git a/dist/index.js b/dist/index.js
index c2ef089c42e13a8ee4a833899a415564130e5d79..75efa7baafb0f019fb44dd50dec1641eee8879e7 100644
--- a/dist/index.js
+++ b/dist/index.js
@@ -471,7 +471,7 @@ function convertToGoogleGenerativeAIMessages(prompt, options) {
// src/get-model-path.ts
function getModelPath(modelId) {
- return modelId.includes("/") ? modelId : `models/${modelId}`;
+ return modelId.includes("models/") ? modelId : `models/${modelId}`;
}
// src/google-generative-ai-options.ts
diff --git a/dist/index.mjs b/dist/index.mjs
index d75c0cc13c41192408c1f3f2d29d76a7bffa6268..ada730b8cb97d9b7d4cb32883a1d1ff416404d9b 100644
--- a/dist/index.mjs
+++ b/dist/index.mjs
@@ -477,7 +477,7 @@ function convertToGoogleGenerativeAIMessages(prompt, options) {
// src/get-model-path.ts
function getModelPath(modelId) {
- return modelId.includes("/") ? modelId : `models/${modelId}`;
+ return modelId.includes("models/") ? modelId : `models/${modelId}`;
}
// src/google-generative-ai-options.ts
diff --git a/dist/internal/index.js b/dist/internal/index.js
index 277cac8dc734bea2fb4f3e9a225986b402b24f48..bb704cd79e602eb8b0cee1889e42497d59ccdb7a 100644
--- a/dist/internal/index.js
+++ b/dist/internal/index.js
@@ -432,7 +432,15 @@ function prepareTools({
var _a;
tools = (tools == null ? void 0 : tools.length) ? tools : void 0;
const toolWarnings = [];
- const isGemini2 = modelId.includes("gemini-2");
+ // These changes could be safely removed when @ai-sdk/google v3 released.
+ const isLatest = (
+ [
+ 'gemini-flash-latest',
+ 'gemini-flash-lite-latest',
+ 'gemini-pro-latest',
+ ]
+ ).some(id => id === modelId);
+ const isGemini2OrNewer = modelId.includes("gemini-2") || modelId.includes("gemini-3") || isLatest;
const supportsDynamicRetrieval = modelId.includes("gemini-1.5-flash") && !modelId.includes("-8b");
const supportsFileSearch = modelId.includes("gemini-2.5");
if (tools == null) {
@@ -458,7 +466,7 @@ function prepareTools({
providerDefinedTools.forEach((tool) => {
switch (tool.id) {
case "google.google_search":
- if (isGemini2) {
+ if (isGemini2OrNewer) {
googleTools2.push({ googleSearch: {} });
} else if (supportsDynamicRetrieval) {
googleTools2.push({
@@ -474,7 +482,7 @@ function prepareTools({
}
break;
case "google.url_context":
- if (isGemini2) {
+ if (isGemini2OrNewer) {
googleTools2.push({ urlContext: {} });
} else {
toolWarnings.push({
@@ -485,7 +493,7 @@ function prepareTools({
}
break;
case "google.code_execution":
- if (isGemini2) {
+ if (isGemini2OrNewer) {
googleTools2.push({ codeExecution: {} });
} else {
toolWarnings.push({
@@ -507,7 +515,7 @@ function prepareTools({
}
break;
case "google.vertex_rag_store":
- if (isGemini2) {
+ if (isGemini2OrNewer) {
googleTools2.push({
retrieval: {
vertex_rag_store: {
diff --git a/dist/internal/index.mjs b/dist/internal/index.mjs
index 03b7cc591be9b58bcc2e775a96740d9f98862a10..347d2c12e1cee79f0f8bb258f3844fb0522a6485 100644
--- a/dist/internal/index.mjs
+++ b/dist/internal/index.mjs
@@ -424,7 +424,15 @@ function prepareTools({
var _a;
tools = (tools == null ? void 0 : tools.length) ? tools : void 0;
const toolWarnings = [];
- const isGemini2 = modelId.includes("gemini-2");
+ // These changes could be safely removed when @ai-sdk/google v3 released.
+ const isLatest = (
+ [
+ 'gemini-flash-latest',
+ 'gemini-flash-lite-latest',
+ 'gemini-pro-latest',
+ ]
+ ).some(id => id === modelId);
+ const isGemini2OrNewer = modelId.includes("gemini-2") || modelId.includes("gemini-3") || isLatest;
const supportsDynamicRetrieval = modelId.includes("gemini-1.5-flash") && !modelId.includes("-8b");
const supportsFileSearch = modelId.includes("gemini-2.5");
if (tools == null) {
@@ -450,7 +458,7 @@ function prepareTools({
providerDefinedTools.forEach((tool) => {
switch (tool.id) {
case "google.google_search":
- if (isGemini2) {
+ if (isGemini2OrNewer) {
googleTools2.push({ googleSearch: {} });
} else if (supportsDynamicRetrieval) {
googleTools2.push({
@@ -466,7 +474,7 @@ function prepareTools({
}
break;
case "google.url_context":
- if (isGemini2) {
+ if (isGemini2OrNewer) {
googleTools2.push({ urlContext: {} });
} else {
toolWarnings.push({
@@ -477,7 +485,7 @@ function prepareTools({
}
break;
case "google.code_execution":
- if (isGemini2) {
+ if (isGemini2OrNewer) {
googleTools2.push({ codeExecution: {} });
} else {
toolWarnings.push({
@@ -499,7 +507,7 @@ function prepareTools({
}
break;
case "google.vertex_rag_store":
- if (isGemini2) {
+ if (isGemini2OrNewer) {
googleTools2.push({
retrieval: {
vertex_rag_store: {
@@ -1434,9 +1442,7 @@ var googleTools = {
vertexRagStore
};
export {
- GoogleGenerativeAILanguageModel,
getGroundingMetadataSchema,
- getUrlContextMetadataSchema,
- googleTools
+ getUrlContextMetadataSchema, GoogleGenerativeAILanguageModel, googleTools
};
//# sourceMappingURL=index.mjs.map
\ No newline at end of file

View File

@@ -134,42 +134,58 @@ artifactBuildCompleted: scripts/artifact-build-completed.js
releaseInfo:
releaseNotes: |
<!--LANG:en-->
What's New in v1.7.0-beta.6
What's New in v1.7.0-rc.1
New Features:
- Enhanced Input Bar: Completely redesigned input bar with improved responsiveness and functionality
- Better File Handling: Improved drag-and-drop and paste support for images and documents
- Smart Tool Suggestions: Enhanced quick panel with better item selection and keyboard shortcuts
🎉 MAJOR NEW FEATURE: AI Agents
- Create and manage custom AI agents with specialized tools and permissions
- Dedicated agent sessions with persistent SQLite storage, separate from regular chats
- Real-time tool approval system - review and approve agent actions dynamically
- MCP (Model Context Protocol) integration for connecting external tools
- Slash commands support for quick agent interactions
- OpenAI-compatible REST API for agent access
Improvements:
- Smoother Input Experience: Better auto-resizing and text handling in chat input
- Enhanced AI Performance: Improved connection stability and response speed
- More Reliable File Uploads: Better support for various file types and upload scenarios
- Cleaner Interface: Optimized UI elements for better visual consistency
✨ New Features:
- AI Providers: Added support for Hugging Face, Mistral, Perplexity, and SophNet
- Knowledge Base: OpenMinerU document preprocessor, full-text search in notes, enhanced tool selection
- Image & OCR: Intel OVMS painting provider and Intel OpenVINO (NPU) OCR support
- MCP Management: Redesigned interface with dual-column layout for easier management
- Languages: Added German language support
Bug Fixes:
- Fixed image selection issue when adding custom AI providers
- Fixed file upload problems with certain API configurations
- Fixed input bar responsiveness issues
- Fixed quick panel not working properly in some situations
⚡ Improvements:
- Upgraded to Electron 38.7.0
- Enhanced system shutdown handling and automatic update checks
- Improved proxy bypass rules
🐛 Important Bug Fixes:
- Fixed streaming response issues across multiple AI providers
- Fixed session list scrolling problems
- Fixed knowledge base deletion errors
<!--LANG:zh-CN-->
v1.7.0-beta.6 新特性
v1.7.0-rc.1 新特性
新功能:
- 增强输入栏:完全重新设计的输入栏,响应更灵敏,功能更强大
- 更好的文件处理:改进的拖拽和粘贴功能,支持图片和文档
- 智能工具建议:增强的快速面板,更好的项目选择和键盘快捷键
🎉 重大更新:AI Agent 智能体系统
- 创建和管理专属 AI Agent,配置专用工具和权限
- 独立的 Agent 会话,使用 SQLite 持久化存储,与普通聊天分离
- 实时工具审批系统 - 动态审查和批准 Agent 操作
- MCP(模型上下文协议)集成,连接外部工具
- 支持斜杠命令快速交互
- 兼容 OpenAI 的 REST API 访问
改进:
- 更流畅的输入体验:聊天输入框的自动调整和文本处理更佳
- 增强 AI 性能:改进连接稳定性和响应速度
- 更可靠的文件上传:更好地支持各种文件类型和上传场景
- 更简洁的界面:优化 UI 元素,视觉一致性更好
✨ 新功能:
- AI 提供商:新增 Hugging Face、Mistral、Perplexity 和 SophNet 支持
- 知识库:OpenMinerU 文档预处理器、笔记全文搜索、增强的工具选择
- 图像与 OCR:Intel OVMS 绘图提供商和 Intel OpenVINO (NPU) OCR 支持
- MCP 管理:重构管理界面,采用双列布局,更加方便管理
- 语言:新增德语支持
问题修复:
- 修复添加自定义 AI 提供商时的图片选择问题
- 修复某些 API 配置下的文件上传问题
- 修复输入栏响应性问题
- 修复快速面板在某些情况下无法正常工作的问题
⚡ 改进:
- 升级到 Electron 38.7.0
- 增强的系统关机处理和自动更新检查
- 改进的代理绕过规则
🐛 重要修复:
- 修复多个 AI 提供商的流式响应问题
- 修复会话列表滚动问题
- 修复知识库删除错误
<!--LANG:END-->

View File

@@ -1,6 +1,6 @@
{
"name": "CherryStudio",
"version": "1.6.5",
"version": "1.7.0-rc.1",
"private": true,
"description": "A powerful AI assistant for producer.",
"main": "./out/main/index.js",
@@ -74,14 +74,15 @@
"format:check": "biome format && biome lint",
"prepare": "git config blame.ignoreRevsFile .git-blame-ignore-revs && husky",
"claude": "dotenv -e .env -- claude",
"release:aicore:alpha": "yarn workspace @cherrystudio/ai-core version prerelease --immediate && yarn workspace @cherrystudio/ai-core npm publish --tag alpha --access public",
"release:aicore:beta": "yarn workspace @cherrystudio/ai-core version prerelease --immediate && yarn workspace @cherrystudio/ai-core npm publish --tag beta --access public",
"release:aicore": "yarn workspace @cherrystudio/ai-core version patch --immediate && yarn workspace @cherrystudio/ai-core npm publish --access public"
"release:aicore:alpha": "yarn workspace @cherrystudio/ai-core version prerelease --preid alpha --immediate && yarn workspace @cherrystudio/ai-core build && yarn workspace @cherrystudio/ai-core npm publish --tag alpha --access public",
"release:aicore:beta": "yarn workspace @cherrystudio/ai-core version prerelease --preid beta --immediate && yarn workspace @cherrystudio/ai-core build && yarn workspace @cherrystudio/ai-core npm publish --tag beta --access public",
"release:aicore": "yarn workspace @cherrystudio/ai-core version patch --immediate && yarn workspace @cherrystudio/ai-core build && yarn workspace @cherrystudio/ai-core npm publish --access public",
"release:ai-sdk-provider": "yarn workspace @cherrystudio/ai-sdk-provider version patch --immediate && yarn workspace @cherrystudio/ai-sdk-provider build && yarn workspace @cherrystudio/ai-sdk-provider npm publish --access public"
},
"dependencies": {
"@anthropic-ai/claude-agent-sdk": "patch:@anthropic-ai/claude-agent-sdk@npm%3A0.1.30#~/.yarn/patches/@anthropic-ai-claude-agent-sdk-npm-0.1.30-b50a299674.patch",
"@libsql/client": "0.14.0",
"@libsql/win32-x64-msvc": "^0.4.7",
"@libsql/client": "0.15.15",
"@libsql/win32-x64-msvc": "^0.5.22",
"@napi-rs/system-ocr": "patch:@napi-rs/system-ocr@npm%3A1.0.2#~/.yarn/patches/@napi-rs-system-ocr-npm-1.0.2-59e7a78e8b.patch",
"@paymoapp/electron-shutdown-handler": "^1.1.2",
"@strongtz/win32-arm64-msvc": "^0.4.7",
@@ -108,11 +109,14 @@
"@agentic/searxng": "^7.3.3",
"@agentic/tavily": "^7.3.3",
"@ai-sdk/amazon-bedrock": "^3.0.53",
"@ai-sdk/anthropic": "^2.0.44",
"@ai-sdk/cerebras": "^1.0.31",
"@ai-sdk/gateway": "^2.0.9",
"@ai-sdk/google-vertex": "^3.0.62",
"@ai-sdk/google": "patch:@ai-sdk/google@npm%3A2.0.36#~/.yarn/patches/@ai-sdk-google-npm-2.0.36-6f3cc06026.patch",
"@ai-sdk/google-vertex": "^3.0.68",
"@ai-sdk/huggingface": "patch:@ai-sdk/huggingface@npm%3A0.0.8#~/.yarn/patches/@ai-sdk-huggingface-npm-0.0.8-d4d0aaac93.patch",
"@ai-sdk/mistral": "^2.0.23",
"@ai-sdk/openai": "patch:@ai-sdk/openai@npm%3A2.0.64#~/.yarn/patches/@ai-sdk-openai-npm-2.0.64-48f99f5bf3.patch",
"@ai-sdk/perplexity": "^2.0.17",
"@ant-design/v5-patch-for-react-19": "^1.0.3",
"@anthropic-ai/sdk": "^0.41.0",
@@ -121,7 +125,7 @@
"@aws-sdk/client-bedrock-runtime": "^3.910.0",
"@aws-sdk/client-s3": "^3.910.0",
"@biomejs/biome": "2.2.4",
"@cherrystudio/ai-core": "workspace:^1.0.0-alpha.18",
"@cherrystudio/ai-core": "workspace:^1.0.9",
"@cherrystudio/embedjs": "^0.1.31",
"@cherrystudio/embedjs-libsql": "^0.1.31",
"@cherrystudio/embedjs-loader-csv": "^0.1.31",
@@ -135,7 +139,7 @@
"@cherrystudio/embedjs-ollama": "^0.1.31",
"@cherrystudio/embedjs-openai": "^0.1.31",
"@cherrystudio/extension-table-plus": "workspace:^",
"@cherrystudio/openai": "^6.5.0",
"@cherrystudio/openai": "^6.9.0",
"@dnd-kit/core": "^6.3.1",
"@dnd-kit/modifiers": "^9.0.0",
"@dnd-kit/sortable": "^10.0.0",
@@ -165,7 +169,7 @@
"@opentelemetry/sdk-trace-base": "^2.0.0",
"@opentelemetry/sdk-trace-node": "^2.0.0",
"@opentelemetry/sdk-trace-web": "^2.0.0",
"@opeoginni/github-copilot-openai-compatible": "0.1.19",
"@opeoginni/github-copilot-openai-compatible": "0.1.21",
"@playwright/test": "^1.52.0",
"@radix-ui/react-context-menu": "^2.2.16",
"@reduxjs/toolkit": "^2.2.5",
@@ -382,10 +386,10 @@
"@codemirror/lint": "6.8.5",
"@codemirror/view": "6.38.1",
"@langchain/core@npm:^0.3.26": "patch:@langchain/core@npm%3A1.0.2#~/.yarn/patches/@langchain-core-npm-1.0.2-183ef83fe4.patch",
"@libsql/client": "0.15.15",
"atomically@npm:^1.7.0": "patch:atomically@npm%3A1.7.0#~/.yarn/patches/atomically-npm-1.7.0-e742e5293b.patch",
"esbuild": "^0.25.0",
"file-stream-rotator@npm:^0.6.1": "patch:file-stream-rotator@npm%3A0.6.1#~/.yarn/patches/file-stream-rotator-npm-0.6.1-eab45fb13d.patch",
"libsql@npm:^0.4.4": "patch:libsql@npm%3A0.4.7#~/.yarn/patches/libsql-npm-0.4.7-444e260fb1.patch",
"node-abi": "4.24.0",
"openai@npm:^4.77.0": "npm:@cherrystudio/openai@6.5.0",
"openai@npm:^4.87.3": "npm:@cherrystudio/openai@6.5.0",
@@ -408,7 +412,7 @@
"@langchain/openai@npm:>=0.2.0 <0.7.0": "patch:@langchain/openai@npm%3A1.0.0#~/.yarn/patches/@langchain-openai-npm-1.0.0-474d0ad9d4.patch",
"@ai-sdk/openai@npm:2.0.64": "patch:@ai-sdk/openai@npm%3A2.0.64#~/.yarn/patches/@ai-sdk-openai-npm-2.0.64-48f99f5bf3.patch",
"@ai-sdk/openai@npm:^2.0.42": "patch:@ai-sdk/openai@npm%3A2.0.64#~/.yarn/patches/@ai-sdk-openai-npm-2.0.64-48f99f5bf3.patch",
"@ai-sdk/google@npm:2.0.31": "patch:@ai-sdk/google@npm%3A2.0.31#~/.yarn/patches/@ai-sdk-google-npm-2.0.31-b0de047210.patch"
"@ai-sdk/google@npm:2.0.36": "patch:@ai-sdk/google@npm%3A2.0.36#~/.yarn/patches/@ai-sdk-google-npm-2.0.36-6f3cc06026.patch"
},
"packageManager": "yarn@4.9.1",
"lint-staged": {

View File

@@ -1,6 +1,6 @@
{
"name": "@cherrystudio/ai-sdk-provider",
"version": "0.1.0",
"version": "0.1.2",
"description": "Cherry Studio AI SDK provider bundle with CherryIN routing.",
"keywords": [
"ai-sdk",

View File

@@ -71,7 +71,7 @@ Cherry Studio AI Core 是一个基于 Vercel AI SDK 的统一 AI Provider 接口
## 安装
```bash
npm install @cherrystudio/ai-core ai
npm install @cherrystudio/ai-core ai @ai-sdk/google @ai-sdk/openai
```
### React Native

View File

@@ -1,6 +1,6 @@
{
"name": "@cherrystudio/ai-core",
"version": "1.0.1",
"version": "1.0.9",
"description": "Cherry Studio AI Core - Unified AI Provider Interface Based on Vercel AI SDK",
"main": "dist/index.js",
"module": "dist/index.mjs",
@@ -33,19 +33,19 @@
},
"homepage": "https://github.com/CherryHQ/cherry-studio#readme",
"peerDependencies": {
"@ai-sdk/google": "^2.0.36",
"@ai-sdk/openai": "^2.0.64",
"@cherrystudio/ai-sdk-provider": "^0.1.2",
"ai": "^5.0.26"
},
"dependencies": {
"@ai-sdk/anthropic": "^2.0.43",
"@ai-sdk/azure": "^2.0.66",
"@ai-sdk/deepseek": "^1.0.27",
"@ai-sdk/google": "patch:@ai-sdk/google@npm%3A2.0.31#~/.yarn/patches/@ai-sdk-google-npm-2.0.31-b0de047210.patch",
"@ai-sdk/openai": "patch:@ai-sdk/openai@npm%3A2.0.64#~/.yarn/patches/@ai-sdk-openai-npm-2.0.64-48f99f5bf3.patch",
"@ai-sdk/openai-compatible": "^1.0.26",
"@ai-sdk/provider": "^2.0.0",
"@ai-sdk/provider-utils": "^3.0.16",
"@ai-sdk/xai": "^2.0.31",
"@cherrystudio/ai-sdk-provider": "workspace:*",
"zod": "^4.1.5"
},
"devDependencies": {

View File

@@ -4,12 +4,7 @@
*/
export const BUILT_IN_PLUGIN_PREFIX = 'built-in:'
export { googleToolsPlugin } from './googleToolsPlugin'
export { createLoggingPlugin } from './logging'
export { createPromptToolUsePlugin } from './toolUsePlugin/promptToolUsePlugin'
export type {
PromptToolUseConfig,
ToolUseRequestContext,
ToolUseResult
} from './toolUsePlugin/type'
export { webSearchPlugin, type WebSearchPluginConfig } from './webSearchPlugin'
export * from './googleToolsPlugin'
export * from './toolUsePlugin/promptToolUsePlugin'
export * from './toolUsePlugin/type'
export * from './webSearchPlugin'

View File

@@ -32,7 +32,7 @@ export const webSearchPlugin = (config: WebSearchPluginConfig = DEFAULT_WEB_SEAR
})
// 导出类型定义供开发者使用
export type { WebSearchPluginConfig, WebSearchToolOutputSchema } from './helper'
export * from './helper'
// 默认导出
export default webSearchPlugin

View File

@@ -44,7 +44,7 @@ export {
// ==================== 基础数据和类型 ====================
// 基础Provider数据源
export { baseProviderIds, baseProviders } from './schemas'
export { baseProviderIds, baseProviders, isBaseProvider } from './schemas'
// 类型定义和Schema
export type {

View File

@@ -7,7 +7,6 @@ import { createAzure } from '@ai-sdk/azure'
import { type AzureOpenAIProviderSettings } from '@ai-sdk/azure'
import { createDeepSeek } from '@ai-sdk/deepseek'
import { createGoogleGenerativeAI } from '@ai-sdk/google'
import { createHuggingFace } from '@ai-sdk/huggingface'
import { createOpenAI, type OpenAIProviderSettings } from '@ai-sdk/openai'
import { createOpenAICompatible } from '@ai-sdk/openai-compatible'
import type { LanguageModelV2 } from '@ai-sdk/provider'
@@ -33,8 +32,7 @@ export const baseProviderIds = [
'deepseek',
'openrouter',
'cherryin',
'cherryin-chat',
'huggingface'
'cherryin-chat'
] as const
/**
@@ -158,12 +156,6 @@ export const baseProviders = [
})
},
supportsImageGeneration: true
},
{
id: 'huggingface',
name: 'HuggingFace',
creator: createHuggingFace,
supportsImageGeneration: true
}
] as const satisfies BaseProvider[]

View File

@@ -41,6 +41,7 @@ export enum IpcChannel {
App_SetFullScreen = 'app:set-full-screen',
App_IsFullScreen = 'app:is-full-screen',
App_GetSystemFonts = 'app:get-system-fonts',
APP_CrashRenderProcess = 'app:crash-render-process',
App_MacIsProcessTrusted = 'app:mac-is-process-trusted',
App_MacRequestProcessTrust = 'app:mac-request-process-trust',

View File

@@ -199,7 +199,7 @@ export enum FeedUrl {
export enum UpdateConfigUrl {
GITHUB = 'https://raw.githubusercontent.com/CherryHQ/cherry-studio/refs/heads/x-files/app-upgrade-config/app-upgrade-config.json',
GITCODE = 'https://raw.gitcode.com/CherryHQ/cherry-studio/raw/x-files/app-upgrade-config/app-upgrade-config.json'
GITCODE = 'https://raw.gitcode.com/CherryHQ/cherry-studio/raw/x-files%2Fapp-upgrade-config/app-upgrade-config.json'
}
export enum UpgradeChannel {
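The gitcode URL fix above percent-encodes the slash inside the branch name (`x-files/app-upgrade-config`), presumably so the raw-file route does not split the branch at the wrong path segment. The encoding step can be reproduced like this:

```typescript
// The branch name itself contains a slash; encoding it as %2F keeps it
// a single path segment in the raw-file URL.
const branch = 'x-files/app-upgrade-config'
const encoded = encodeURIComponent(branch) // 'x-files%2Fapp-upgrade-config'
const url = `https://raw.gitcode.com/CherryHQ/cherry-studio/raw/${encoded}/app-upgrade-config.json`
```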

View File

@@ -8,7 +8,7 @@ import '@main/config'
import { loggerService } from '@logger'
import { electronApp, optimizer } from '@electron-toolkit/utils'
import { replaceDevtoolsFont } from '@main/utils/windowUtil'
import { app } from 'electron'
import { app, crashReporter } from 'electron'
import installExtension, { REACT_DEVELOPER_TOOLS, REDUX_DEVTOOLS } from 'electron-devtools-installer'
import { isDev, isLinux, isWin } from './constant'
@@ -37,6 +37,14 @@ import { initWebviewHotkeys } from './services/WebviewService'
const logger = loggerService.withContext('MainEntry')
// enable local crash reports
crashReporter.start({
companyName: 'CherryHQ',
productName: 'CherryStudio',
submitURL: '',
uploadToServer: false
})
/**
* Disable hardware acceleration if setting is enabled
*/

View File

@@ -1038,4 +1038,8 @@ export function registerIpc(mainWindow: BrowserWindow, app: Electron.App) {
ipcMain.handle(IpcChannel.WebSocket_Status, WebSocketService.getStatus)
ipcMain.handle(IpcChannel.WebSocket_SendFile, WebSocketService.sendFile)
ipcMain.handle(IpcChannel.WebSocket_GetAllCandidates, WebSocketService.getAllCandidates)
ipcMain.handle(IpcChannel.APP_CrashRenderProcess, () => {
mainWindow.webContents.forcefullyCrashRenderer()
})
}

View File

@@ -21,6 +21,7 @@ type ApiResponse<T> = {
type BatchUploadResponse = {
batch_id: string
file_urls: string[]
headers?: Record<string, string>[]
}
type ExtractProgress = {
@@ -55,7 +56,7 @@ type QuotaResponse = {
export default class MineruPreprocessProvider extends BasePreprocessProvider {
constructor(provider: PreprocessProvider, userId?: string) {
super(provider, userId)
// todo免费期结束后删除
// TODO: remove after free period ends
this.provider.apiKey = this.provider.apiKey || import.meta.env.MAIN_VITE_MINERU_API_KEY
}
@@ -68,21 +69,21 @@ export default class MineruPreprocessProvider extends BasePreprocessProvider {
logger.info(`MinerU preprocess processing started: ${filePath}`)
await this.validateFile(filePath)
// 1. 获取上传URL并上传文件
// 1. Get upload URL and upload file
const batchId = await this.uploadFile(file)
logger.info(`MinerU file upload completed: batch_id=${batchId}`)
// 2. 等待处理完成并获取结果
// 2. Wait for completion and fetch results
const extractResult = await this.waitForCompletion(sourceId, batchId, file.origin_name)
logger.info(`MinerU processing completed for batch: ${batchId}`)
// 3. 下载并解压文件
// 3. Download and extract output
const { path: outputPath } = await this.downloadAndExtractFile(extractResult.full_zip_url!, file)
// 4. check quota
const quota = await this.checkQuota()
// 5. 创建处理后的文件信息
// 5. Create processed file metadata
return {
processedFile: this.createProcessedFileInfo(file, outputPath),
quota
@@ -115,23 +116,48 @@ export default class MineruPreprocessProvider extends BasePreprocessProvider {
}
private async validateFile(filePath: string): Promise<void> {
// Phase 1: check file size (without loading into memory)
logger.info(`Validating PDF file: ${filePath}`)
const stats = await fs.promises.stat(filePath)
const fileSizeBytes = stats.size
// Ensure file size is under 200MB
if (fileSizeBytes >= 200 * 1024 * 1024) {
const fileSizeMB = Math.round(fileSizeBytes / (1024 * 1024))
throw new Error(`PDF file size (${fileSizeMB}MB) exceeds the limit of 200MB`)
}
// Phase 2: check page count (requires reading file with error handling)
const pdfBuffer = await fs.promises.readFile(filePath)
try {
const doc = await this.readPdf(pdfBuffer)
// 文件页数小于600页
// Ensure page count is under 600 pages
if (doc.numPages >= 600) {
throw new Error(`PDF page count (${doc.numPages}) exceeds the limit of 600 pages`)
}
// 文件大小小于200MB
if (pdfBuffer.length >= 200 * 1024 * 1024) {
const fileSizeMB = Math.round(pdfBuffer.length / (1024 * 1024))
throw new Error(`PDF file size (${fileSizeMB}MB) exceeds the limit of 200MB`)
logger.info(`PDF validation passed: ${doc.numPages} pages, ${Math.round(fileSizeBytes / (1024 * 1024))}MB`)
} catch (error: any) {
// If the page limit is exceeded, rethrow immediately
if (error.message.includes('exceeds the limit')) {
throw error
}
// If PDF parsing fails, log a detailed warning but continue processing
logger.warn(
`Failed to parse PDF structure (file may be corrupted or use non-standard format). ` +
`Skipping page count validation. Will attempt to process with MinerU API. ` +
`Error details: ${error.message}. ` +
`Suggestion: If processing fails, try repairing the PDF using tools like Adobe Acrobat or online PDF repair services.`
)
// Do not throw; continue processing
}
}
private createProcessedFileInfo(file: FileMetadata, outputPath: string): FileMetadata {
// 查找解压后的主要文件
// Locate the main extracted file
let finalPath = ''
let finalName = file.origin_name.replace('.pdf', '.md')
@@ -143,14 +169,14 @@ export default class MineruPreprocessProvider extends BasePreprocessProvider {
const originalMdPath = path.join(outputPath, mdFile)
const newMdPath = path.join(outputPath, finalName)
// 重命名文件为原始文件名
// Rename the file to match the original name
try {
fs.renameSync(originalMdPath, newMdPath)
finalPath = newMdPath
logger.info(`Renamed markdown file from ${mdFile} to ${finalName}`)
} catch (renameError) {
logger.warn(`Failed to rename file ${mdFile} to ${finalName}: ${renameError}`)
// 如果重命名失败,使用原文件
// If renaming fails, fall back to the original file
finalPath = originalMdPath
finalName = mdFile
}
@@ -178,7 +204,7 @@ export default class MineruPreprocessProvider extends BasePreprocessProvider {
logger.info(`Downloading MinerU result to: ${zipPath}`)
try {
// 下载ZIP文件
// Download the ZIP file
const response = await net.fetch(zipUrl, { method: 'GET' })
if (!response.ok) {
throw new Error(`HTTP ${response.status}: ${response.statusText}`)
@@ -187,17 +213,17 @@ export default class MineruPreprocessProvider extends BasePreprocessProvider {
fs.writeFileSync(zipPath, Buffer.from(arrayBuffer))
logger.info(`Downloaded ZIP file: ${zipPath}`)
// 确保提取目录存在
// Ensure the extraction directory exists
if (!fs.existsSync(extractPath)) {
fs.mkdirSync(extractPath, { recursive: true })
}
// 解压文件
// Extract the ZIP contents
const zip = new AdmZip(zipPath)
zip.extractAllTo(extractPath, true)
logger.info(`Extracted files to: ${extractPath}`)
// 删除临时ZIP文件
// Remove the temporary ZIP file
fs.unlinkSync(zipPath)
return { path: extractPath }
@@ -209,11 +235,11 @@ export default class MineruPreprocessProvider extends BasePreprocessProvider {
private async uploadFile(file: FileMetadata): Promise<string> {
try {
// 步骤1: 获取上传URL
const { batchId, fileUrls } = await this.getBatchUploadUrls(file)
// 步骤2: 上传文件到获取的URL
// Step 1: obtain the upload URL
const { batchId, fileUrls, uploadHeaders } = await this.getBatchUploadUrls(file)
// Step 2: upload the file to the obtained URL
const filePath = fileStorage.getFilePathById(file)
await this.putFileToUrl(filePath, fileUrls[0])
await this.putFileToUrl(filePath, fileUrls[0], file.origin_name, uploadHeaders?.[0])
logger.info(`File uploaded successfully: ${filePath}`, { batchId, fileUrls })
return batchId
@@ -223,7 +249,9 @@ export default class MineruPreprocessProvider extends BasePreprocessProvider {
}
}
private async getBatchUploadUrls(file: FileMetadata): Promise<{ batchId: string; fileUrls: string[] }> {
private async getBatchUploadUrls(
file: FileMetadata
): Promise<{ batchId: string; fileUrls: string[]; uploadHeaders?: Record<string, string>[] }> {
const endpoint = `${this.provider.apiHost}/api/v4/file-urls/batch`
const payload = {
@@ -254,10 +282,11 @@ export default class MineruPreprocessProvider extends BasePreprocessProvider {
if (response.ok) {
const data: ApiResponse<BatchUploadResponse> = await response.json()
if (data.code === 0 && data.data) {
const { batch_id, file_urls } = data.data
const { batch_id, file_urls, headers: uploadHeaders } = data.data
return {
batchId: batch_id,
fileUrls: file_urls
fileUrls: file_urls,
uploadHeaders
}
} else {
throw new Error(`API returned error: ${data.msg || JSON.stringify(data)}`)
@@ -271,18 +300,28 @@ export default class MineruPreprocessProvider extends BasePreprocessProvider {
}
}
private async putFileToUrl(filePath: string, uploadUrl: string): Promise<void> {
private async putFileToUrl(
filePath: string,
uploadUrl: string,
fileName?: string,
headers?: Record<string, string>
): Promise<void> {
try {
const fileBuffer = await fs.promises.readFile(filePath)
const fileSize = fileBuffer.byteLength
const displayName = fileName ?? path.basename(filePath)
logger.info(`Uploading file to MinerU OSS: ${displayName} (${fileSize} bytes)`)
// https://mineru.net/apiManage/docs
const response = await net.fetch(uploadUrl, {
method: 'PUT',
body: fileBuffer
headers,
body: new Uint8Array(fileBuffer)
})
if (!response.ok) {
// 克隆 response 以避免消费 body stream
// Clone the response to avoid consuming the body stream
const responseClone = response.clone()
try {
@@ -353,20 +392,20 @@ export default class MineruPreprocessProvider extends BasePreprocessProvider {
try {
const result = await this.getExtractResults(batchId)
// 查找对应文件的处理结果
// Find the corresponding file result
const fileResult = result.extract_result.find((item) => item.file_name === fileName)
if (!fileResult) {
throw new Error(`File ${fileName} not found in batch results`)
}
// 检查处理状态
// Check the processing state
if (fileResult.state === 'done' && fileResult.full_zip_url) {
logger.info(`Processing completed for file: ${fileName}`)
return fileResult
} else if (fileResult.state === 'failed') {
throw new Error(`Processing failed for file: ${fileName}, error: ${fileResult.err_msg}`)
} else if (fileResult.state === 'running') {
// 发送进度更新
// Send progress updates
if (fileResult.extract_progress) {
const progress = Math.round(
(fileResult.extract_progress.extracted_pages / fileResult.extract_progress.total_pages) * 100
@@ -374,7 +413,7 @@ export default class MineruPreprocessProvider extends BasePreprocessProvider {
await this.sendPreprocessProgress(sourceId, progress)
logger.info(`File ${fileName} processing progress: ${progress}%`)
} else {
// 如果没有具体进度信息,发送一个通用进度
// If no detailed progress information is available, send a generic update
await this.sendPreprocessProgress(sourceId, 50)
logger.info(`File ${fileName} is still processing...`)
}

View File

@@ -53,18 +53,43 @@ export default class OpenMineruPreprocessProvider extends BasePreprocessProvider
}
private async validateFile(filePath: string): Promise<void> {
// Phase 1: check file size (without loading the file into memory)
logger.info(`Validating PDF file: ${filePath}`)
const stats = await fs.promises.stat(filePath)
const fileSizeBytes = stats.size
// File size must be less than 200MB
if (fileSizeBytes >= 200 * 1024 * 1024) {
const fileSizeMB = Math.round(fileSizeBytes / (1024 * 1024))
throw new Error(`PDF file size (${fileSizeMB}MB) exceeds the limit of 200MB`)
}
// Phase 2: check page count (requires reading the file, with error handling)
const pdfBuffer = await fs.promises.readFile(filePath)
try {
const doc = await this.readPdf(pdfBuffer)
// File page count must be less than 600 pages
if (doc.numPages >= 600) {
throw new Error(`PDF page count (${doc.numPages}) exceeds the limit of 600 pages`)
}
// File size must be less than 200MB
if (pdfBuffer.length >= 200 * 1024 * 1024) {
const fileSizeMB = Math.round(pdfBuffer.length / (1024 * 1024))
throw new Error(`PDF file size (${fileSizeMB}MB) exceeds the limit of 200MB`)
logger.info(`PDF validation passed: ${doc.numPages} pages, ${Math.round(fileSizeBytes / (1024 * 1024))}MB`)
} catch (error: any) {
// If the page limit is exceeded, rethrow immediately
if (error.message.includes('exceeds the limit')) {
throw error
}
// If PDF parsing fails, log a detailed warning but continue processing
logger.warn(
`Failed to parse PDF structure (file may be corrupted or use non-standard format). ` +
`Skipping page count validation. Will attempt to process with MinerU API. ` +
`Error details: ${error.message}. ` +
`Suggestion: If processing fails, try repairing the PDF using tools like Adobe Acrobat or online PDF repair services.`
)
// Do not throw; continue processing
}
}
@@ -72,8 +97,8 @@ export default class OpenMineruPreprocessProvider extends BasePreprocessProvider
// Find the main file after extraction
let finalPath = ''
let finalName = file.origin_name.replace('.pdf', '.md')
// Find the corresponding folder by file name
outputPath = path.join(outputPath, `${file.origin_name.replace('.pdf', '')}`)
// Find the corresponding folder by file id
outputPath = path.join(outputPath, file.id)
try {
const files = fs.readdirSync(outputPath)
@@ -125,7 +150,7 @@ export default class OpenMineruPreprocessProvider extends BasePreprocessProvider
formData.append('return_md', 'true')
formData.append('response_format_zip', 'true')
formData.append('files', fileBuffer, {
filename: file.origin_name
filename: file.name
})
while (retries < maxRetries) {
@@ -139,7 +164,7 @@ export default class OpenMineruPreprocessProvider extends BasePreprocessProvider
...(this.provider.apiKey ? { Authorization: `Bearer ${this.provider.apiKey}` } : {}),
...formData.getHeaders()
},
body: formData.getBuffer()
body: new Uint8Array(formData.getBuffer())
})
if (!response.ok) {

View File

@@ -25,7 +25,7 @@ describe('stripLocalCommandTags', () => {
describe('Claude → AiSDK transform', () => {
it('handles tool call streaming lifecycle', () => {
const state = new ClaudeStreamState()
const state = new ClaudeStreamState({ agentSessionId: baseStreamMetadata.session_id })
const parts: ReturnType<typeof transformSDKMessageToStreamParts>[number][] = []
const messages: SDKMessage[] = [
@@ -182,14 +182,14 @@ describe('Claude → AiSDK transform', () => {
(typeof parts)[number],
{ type: 'tool-result' }
>
expect(toolResult.toolCallId).toBe('tool-1')
expect(toolResult.toolCallId).toBe('session-123:tool-1')
expect(toolResult.toolName).toBe('Bash')
expect(toolResult.input).toEqual({ command: 'ls' })
expect(toolResult.output).toBe('ok')
})
it('handles streaming text completion', () => {
const state = new ClaudeStreamState()
const state = new ClaudeStreamState({ agentSessionId: baseStreamMetadata.session_id })
const parts: ReturnType<typeof transformSDKMessageToStreamParts>[number][] = []
const messages: SDKMessage[] = [

View File

@@ -10,8 +10,21 @@
* Every Claude turn gets its own instance. `resetStep` should be invoked once the finish event has
* been emitted to avoid leaking state into the next turn.
*/
import { loggerService } from '@logger'
import type { FinishReason, LanguageModelUsage, ProviderMetadata } from 'ai'
/**
* Builds a namespaced tool call ID by combining session ID with raw tool call ID.
* This ensures tool calls from different sessions don't conflict even if they have
* the same raw ID from the SDK.
*
* @param sessionId - The agent session ID
* @param rawToolCallId - The raw tool call ID from SDK (e.g., "WebFetch_0")
*/
export function buildNamespacedToolCallId(sessionId: string, rawToolCallId: string): string {
return `${sessionId}:${rawToolCallId}`
}
/**
* Shared fields for every block that Claude can stream (text, reasoning, tool).
*/
@@ -34,6 +47,7 @@ type ReasoningBlockState = BaseBlockState & {
type ToolBlockState = BaseBlockState & {
kind: 'tool'
toolCallId: string
rawToolCallId: string
toolName: string
inputBuffer: string
providerMetadata?: ProviderMetadata
@@ -48,12 +62,17 @@ type PendingUsageState = {
}
type PendingToolCall = {
rawToolCallId: string
toolCallId: string
toolName: string
input: unknown
providerMetadata?: ProviderMetadata
}
type ClaudeStreamStateOptions = {
agentSessionId: string
}
/**
* Tracks the lifecycle of Claude streaming blocks (text, thinking, tool calls)
* across individual websocket events. The transformer relies on this class to
@@ -61,12 +80,20 @@ type PendingToolCall = {
* usage/finish metadata once Anthropic closes a message.
*/
export class ClaudeStreamState {
private logger
private readonly agentSessionId: string
private blocksByIndex = new Map<number, BlockState>()
private toolIndexById = new Map<string, number>()
private toolIndexByNamespacedId = new Map<string, number>()
private pendingUsage: PendingUsageState = {}
private pendingToolCalls = new Map<string, PendingToolCall>()
private stepActive = false
constructor(options: ClaudeStreamStateOptions) {
this.logger = loggerService.withContext('ClaudeStreamState')
this.agentSessionId = options.agentSessionId
this.logger.silly('ClaudeStreamState', options)
}
/** Marks the beginning of a new AiSDK step. */
beginStep(): void {
this.stepActive = true
@@ -104,19 +131,21 @@ export class ClaudeStreamState {
/** Caches tool metadata so subsequent input deltas and results can find it. */
openToolBlock(
index: number,
params: { toolCallId: string; toolName: string; providerMetadata?: ProviderMetadata }
params: { rawToolCallId: string; toolName: string; providerMetadata?: ProviderMetadata }
): ToolBlockState {
const toolCallId = buildNamespacedToolCallId(this.agentSessionId, params.rawToolCallId)
const block: ToolBlockState = {
kind: 'tool',
id: params.toolCallId,
id: toolCallId,
index,
toolCallId: params.toolCallId,
toolCallId,
rawToolCallId: params.rawToolCallId,
toolName: params.toolName,
inputBuffer: '',
providerMetadata: params.providerMetadata
}
this.blocksByIndex.set(index, block)
this.toolIndexById.set(params.toolCallId, index)
this.toolIndexByNamespacedId.set(toolCallId, index)
return block
}
@@ -125,13 +154,17 @@ export class ClaudeStreamState {
}
getToolBlockById(toolCallId: string): ToolBlockState | undefined {
const index = this.toolIndexById.get(toolCallId)
const index = this.toolIndexByNamespacedId.get(toolCallId)
if (index === undefined) return undefined
const block = this.blocksByIndex.get(index)
if (!block || block.kind !== 'tool') return undefined
return block
}
getToolBlockByRawId(rawToolCallId: string): ToolBlockState | undefined {
return this.getToolBlockById(buildNamespacedToolCallId(this.agentSessionId, rawToolCallId))
}
/** Appends streamed text to a text block, returning the updated state when present. */
appendTextDelta(index: number, text: string): TextBlockState | undefined {
const block = this.blocksByIndex.get(index)
@@ -158,10 +191,12 @@ export class ClaudeStreamState {
/** Records a tool call to be consumed once its result arrives from the user. */
registerToolCall(
toolCallId: string,
rawToolCallId: string,
payload: { toolName: string; input: unknown; providerMetadata?: ProviderMetadata }
): void {
this.pendingToolCalls.set(toolCallId, {
const toolCallId = buildNamespacedToolCallId(this.agentSessionId, rawToolCallId)
this.pendingToolCalls.set(rawToolCallId, {
rawToolCallId,
toolCallId,
toolName: payload.toolName,
input: payload.input,
@@ -170,10 +205,10 @@ export class ClaudeStreamState {
}
/** Retrieves and clears the buffered tool call metadata for the given id. */
consumePendingToolCall(toolCallId: string): PendingToolCall | undefined {
const entry = this.pendingToolCalls.get(toolCallId)
consumePendingToolCall(rawToolCallId: string): PendingToolCall | undefined {
const entry = this.pendingToolCalls.get(rawToolCallId)
if (entry) {
this.pendingToolCalls.delete(toolCallId)
this.pendingToolCalls.delete(rawToolCallId)
}
return entry
}
@@ -183,12 +218,12 @@ export class ClaudeStreamState {
* completion so that downstream tool results can reference the original call.
*/
completeToolBlock(toolCallId: string, input: unknown, providerMetadata?: ProviderMetadata): void {
const block = this.getToolBlockByRawId(toolCallId)
this.registerToolCall(toolCallId, {
toolName: this.getToolBlockById(toolCallId)?.toolName ?? 'unknown',
toolName: block?.toolName ?? 'unknown',
input,
providerMetadata
})
const block = this.getToolBlockById(toolCallId)
if (block) {
block.resolvedInput = input
}
@@ -200,7 +235,7 @@ export class ClaudeStreamState {
if (!block) return undefined
this.blocksByIndex.delete(index)
if (block.kind === 'tool') {
this.toolIndexById.delete(block.toolCallId)
this.toolIndexByNamespacedId.delete(block.toolCallId)
}
return block
}
@@ -227,7 +262,7 @@ export class ClaudeStreamState {
/** Drops cached block metadata for the currently active message. */
resetBlocks(): void {
this.blocksByIndex.clear()
this.toolIndexById.clear()
this.toolIndexByNamespacedId.clear()
}
/** Resets the entire step lifecycle after emitting a terminal frame. */
@@ -236,6 +271,10 @@ export class ClaudeStreamState {
this.resetPendingUsage()
this.stepActive = false
}
getNamespacedToolCallId(rawToolCallId: string): string {
return buildNamespacedToolCallId(this.agentSessionId, rawToolCallId)
}
}
export type { PendingToolCall }

View File
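The namespacing scheme from `claude-stream-state.ts` above can be sketched as follows. `buildNamespacedToolCallId` is copied from the diff; the inverse helper is hypothetical and assumes session IDs never contain `:`:

```typescript
// Tool call IDs are prefixed with the agent session ID so identical raw
// IDs from two sessions (e.g. "WebFetch_0") no longer collide.
function buildNamespacedToolCallId(sessionId: string, rawToolCallId: string): string {
  return `${sessionId}:${rawToolCallId}`
}

// Hypothetical inverse, assuming the session ID itself contains no ":".
function splitNamespacedToolCallId(namespacedId: string): { sessionId: string; rawToolCallId: string } {
  const sep = namespacedId.indexOf(':')
  return { sessionId: namespacedId.slice(0, sep), rawToolCallId: namespacedId.slice(sep + 1) }
}
```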

@@ -13,6 +13,7 @@ import { app } from 'electron'
import type { GetAgentSessionResponse } from '../..'
import type { AgentServiceInterface, AgentStream, AgentStreamEvent } from '../../interfaces/AgentStreamInterface'
import { sessionService } from '../SessionService'
import { buildNamespacedToolCallId } from './claude-stream-state'
import { promptForToolApproval } from './tool-permissions'
import { ClaudeStreamState, transformSDKMessageToStreamParts } from './transform'
@@ -150,7 +151,10 @@ class ClaudeCodeService implements AgentServiceInterface {
return { behavior: 'allow', updatedInput: input }
}
return promptForToolApproval(toolName, input, options)
return promptForToolApproval(toolName, input, {
...options,
toolCallId: buildNamespacedToolCallId(session.id, options.toolUseID)
})
}
// Build SDK options from parameters
@@ -346,7 +350,7 @@ class ClaudeCodeService implements AgentServiceInterface {
const jsonOutput: SDKMessage[] = []
let hasCompleted = false
const startTime = Date.now()
const streamState = new ClaudeStreamState()
const streamState = new ClaudeStreamState({ agentSessionId: sessionId })
try {
for await (const message of query({ prompt: promptStream, options })) {

View File

@@ -37,6 +37,7 @@ type RendererPermissionRequestPayload = {
requestId: string
toolName: string
toolId: string
toolCallId: string
description?: string
requiresPermissions: boolean
input: Record<string, unknown>
@@ -206,10 +207,19 @@ const ensureIpcHandlersRegistered = () => {
})
}
type PromptForToolApprovalOptions = {
signal: AbortSignal
suggestions?: PermissionUpdate[]
// NOTICE: This ID is namespaced with session ID, not the raw SDK tool call ID.
// Format: `${sessionId}:${rawToolCallId}`, e.g., `session_123:WebFetch_0`
toolCallId: string
}
export async function promptForToolApproval(
toolName: string,
input: Record<string, unknown>,
options?: { signal: AbortSignal; suggestions?: PermissionUpdate[] }
options: PromptForToolApprovalOptions
): Promise<PermissionResult> {
if (shouldAutoApproveTools) {
logger.debug('promptForToolApproval auto-approving tool for test', {
@@ -245,6 +255,7 @@ export async function promptForToolApproval(
logger.info('Requesting user approval for tool usage', {
requestId,
toolName,
toolCallId: options.toolCallId,
description: toolMetadata?.description
})
@@ -252,6 +263,7 @@ export async function promptForToolApproval(
requestId,
toolName,
toolId: toolMetadata?.id ?? toolName,
toolCallId: options.toolCallId,
description: toolMetadata?.description,
requiresPermissions: toolMetadata?.requirePermissions ?? false,
input: sanitizedInput,
@@ -266,6 +278,7 @@ export async function promptForToolApproval(
logger.debug('Registering tool permission request', {
requestId,
toolName,
toolCallId: options.toolCallId,
requiresPermissions: requestPayload.requiresPermissions,
timeoutMs: TOOL_APPROVAL_TIMEOUT_MS,
suggestionCount: sanitizedSuggestions.length
@@ -273,7 +286,11 @@ export async function promptForToolApproval(
return new Promise<PermissionResult>((resolve) => {
const timeout = setTimeout(() => {
logger.info('User tool permission request timed out', { requestId, toolName })
logger.info('User tool permission request timed out', {
requestId,
toolName,
toolCallId: options.toolCallId
})
finalizeRequest(requestId, { behavior: 'deny', message: 'Timed out waiting for approval' }, 'timeout')
}, TOOL_APPROVAL_TIMEOUT_MS)
@@ -287,7 +304,11 @@ export async function promptForToolApproval(
if (options?.signal) {
const abortListener = () => {
logger.info('Tool permission request aborted before user responded', { requestId, toolName })
logger.info('Tool permission request aborted before user responded', {
requestId,
toolName,
toolCallId: options.toolCallId
})
finalizeRequest(requestId, defaultDenyUpdate, 'aborted')
}

View File

@@ -243,9 +243,10 @@ function handleAssistantToolUse(
state: ClaudeStreamState,
chunks: AgentStreamPart[]
): void {
const toolCallId = state.getNamespacedToolCallId(block.id)
chunks.push({
type: 'tool-call',
toolCallId: block.id,
toolCallId,
toolName: block.name,
input: block.input,
providerExecuted: true,
@@ -331,10 +332,11 @@ function handleUserMessage(
if (block.type === 'tool_result') {
const toolResult = block as ToolResultContent
const pendingCall = state.consumePendingToolCall(toolResult.tool_use_id)
const toolCallId = pendingCall?.toolCallId ?? state.getNamespacedToolCallId(toolResult.tool_use_id)
if (toolResult.is_error) {
chunks.push({
type: 'tool-error',
toolCallId: toolResult.tool_use_id,
toolCallId,
toolName: pendingCall?.toolName ?? 'unknown',
input: pendingCall?.input,
error: toolResult.content,
@@ -343,7 +345,7 @@ function handleUserMessage(
} else {
chunks.push({
type: 'tool-result',
toolCallId: toolResult.tool_use_id,
toolCallId,
toolName: pendingCall?.toolName ?? 'unknown',
input: pendingCall?.input,
output: toolResult.content,
@@ -514,7 +516,7 @@ function handleContentBlockStart(
}
case 'tool_use': {
const block = state.openToolBlock(index, {
toolCallId: contentBlock.id,
rawToolCallId: contentBlock.id,
toolName: contentBlock.name,
providerMetadata
})

View File
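The `tool_result` handling above prefers the namespaced ID cached by `registerToolCall`, falling back to deriving one from the raw `tool_use_id`. A reduced sketch of that lookup, with simplified stand-in types for the real stream state:

```typescript
type PendingToolCall = { toolCallId: string; toolName: string }

// Consume the pending call keyed by the raw SDK ID if present; otherwise
// rebuild the namespaced ID from the session ID and raw ID.
function resolveToolCallId(
  pending: Map<string, PendingToolCall>,
  sessionId: string,
  rawId: string
): string {
  const entry = pending.get(rawId)
  if (entry) {
    pending.delete(rawId) // pending calls are consumed exactly once
    return entry.toolCallId
  }
  return `${sessionId}:${rawId}`
}
```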

@@ -111,6 +111,7 @@ const api = {
setFullScreen: (value: boolean): Promise<void> => ipcRenderer.invoke(IpcChannel.App_SetFullScreen, value),
isFullScreen: (): Promise<boolean> => ipcRenderer.invoke(IpcChannel.App_IsFullScreen),
getSystemFonts: (): Promise<string[]> => ipcRenderer.invoke(IpcChannel.App_GetSystemFonts),
mockCrashRenderProcess: () => ipcRenderer.invoke(IpcChannel.APP_CrashRenderProcess),
mac: {
isProcessTrusted: (): Promise<boolean> => ipcRenderer.invoke(IpcChannel.App_MacIsProcessTrusted),
requestProcessTrust: (): Promise<boolean> => ipcRenderer.invoke(IpcChannel.App_MacRequestProcessTrust)

View File

@@ -1,5 +1,6 @@
import { loggerService } from '@logger'
import {
getModelSupportedVerbosity,
isFunctionCallingModel,
isNotSupportTemperatureAndTopP,
isOpenAIModel,
@@ -242,12 +243,18 @@ export abstract class BaseApiClient<
return serviceTierSetting
}
protected getVerbosity(): OpenAIVerbosity {
protected getVerbosity(model?: Model): OpenAIVerbosity {
try {
const state = window.store?.getState()
const verbosity = state?.settings?.openAI?.verbosity
if (verbosity && ['low', 'medium', 'high'].includes(verbosity)) {
// If model is provided, check if the verbosity is supported by the model
if (model) {
const supportedVerbosity = getModelSupportedVerbosity(model)
// Use user's verbosity if supported, otherwise use the first supported option
return supportedVerbosity.includes(verbosity) ? verbosity : supportedVerbosity[0]
}
return verbosity
}
} catch (error) {

View File
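The fallback rule added to `getVerbosity(model)` above keeps the user's setting when the model supports it, otherwise falls back to the model's first supported option. A minimal sketch; the supported list here stands in for `getModelSupportedVerbosity`:

```typescript
type OpenAIVerbosity = 'low' | 'medium' | 'high'

// Clamp the user's verbosity choice to what the model supports.
function resolveVerbosity(userChoice: OpenAIVerbosity, supported: OpenAIVerbosity[]): OpenAIVerbosity {
  return supported.includes(userChoice) ? userChoice : supported[0]
}
```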

@@ -35,6 +35,7 @@ import {
isSupportedThinkingTokenModel,
isSupportedThinkingTokenQwenModel,
isSupportedThinkingTokenZhipuModel,
isSupportVerbosityModel,
isVisionModel,
MODEL_SUPPORTED_REASONING_EFFORT,
ZHIPU_RESULT_TOKENS
@@ -733,6 +734,13 @@ export class OpenAIAPIClient extends OpenAIBaseClient<
...modalities,
// groq has its own service tier configuration that does not match the OpenAI API types
service_tier: this.getServiceTier(model) as OpenAIServiceTier,
...(isSupportVerbosityModel(model)
? {
text: {
verbosity: this.getVerbosity(model)
}
}
: {}),
...this.getProviderSpecificParameters(assistant, model),
...reasoningEffort,
...getOpenAIWebSearchParams(model, enableWebSearch),

View File

@@ -48,9 +48,8 @@ export abstract class OpenAIBaseClient<
}
// Applies to OpenAI only
override getBaseURL(): string {
const host = this.provider.apiHost
return formatApiHost(host)
override getBaseURL(isSupportedAPIVersion: boolean = true): string {
return formatApiHost(this.provider.apiHost, isSupportedAPIVersion)
}
override async generateImage({
@@ -144,6 +143,11 @@ export abstract class OpenAIBaseClient<
}
let apiKeyForSdkInstance = this.apiKey
let baseURLForSdkInstance = this.getBaseURL()
let headersForSdkInstance = {
...this.defaultHeaders(),
...this.provider.extra_headers
}
if (this.provider.id === 'copilot') {
const defaultHeaders = store.getState().copilot.defaultHeaders
@@ -151,6 +155,11 @@ export abstract class OpenAIBaseClient<
// this.provider.apiKey must not be modified
// this.provider.apiKey = token
apiKeyForSdkInstance = token
baseURLForSdkInstance = this.getBaseURL(false)
headersForSdkInstance = {
...headersForSdkInstance,
...COPILOT_DEFAULT_HEADERS
}
}
if (this.provider.id === 'azure-openai' || this.provider.type === 'azure-openai') {
@@ -164,12 +173,8 @@ export abstract class OpenAIBaseClient<
this.sdkInstance = new OpenAI({
dangerouslyAllowBrowser: true,
apiKey: apiKeyForSdkInstance,
baseURL: this.getBaseURL(),
defaultHeaders: {
...this.defaultHeaders(),
...this.provider.extra_headers,
...(this.provider.id === 'copilot' ? COPILOT_DEFAULT_HEADERS : {})
}
baseURL: baseURLForSdkInstance,
defaultHeaders: headersForSdkInstance
}) as TSdkInstance
}
return this.sdkInstance

View File

@@ -297,7 +297,31 @@ export class OpenAIResponseAPIClient extends OpenAIBaseClient<
private convertResponseToMessageContent(response: OpenAI.Responses.Response): ResponseInput {
const content: OpenAI.Responses.ResponseInput = []
content.push(...response.output)
response.output.forEach((item) => {
if (item.type !== 'apply_patch_call' && item.type !== 'apply_patch_call_output') {
content.push(item)
} else if (item.type === 'apply_patch_call') {
if (item.operation !== undefined) {
const applyPatchToolCall: OpenAI.Responses.ResponseInputItem.ApplyPatchCall = {
...item,
operation: item.operation
}
content.push(applyPatchToolCall)
} else {
logger.warn('Undefined tool call operation for ApplyPatchToolCall.')
}
} else if (item.type === 'apply_patch_call_output') {
if (item.output !== undefined) {
const applyPatchToolCallOutput: OpenAI.Responses.ResponseInputItem.ApplyPatchCallOutput = {
...item,
output: item.output === null ? undefined : item.output
}
content.push(applyPatchToolCallOutput)
} else {
logger.warn('Undefined tool call output for ApplyPatchCallOutput.')
}
}
})
return content
}
@@ -496,7 +520,7 @@ export class OpenAIResponseAPIClient extends OpenAIBaseClient<
...(isSupportVerbosityModel(model)
? {
text: {
verbosity: this.getVerbosity()
verbosity: this.getVerbosity(model)
}
}
: {}),

View File
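The `convertResponseToMessageContent` change above replaces the blanket `content.push(...response.output)` with per-item handling: ordinary items pass through, apply-patch items are kept only when their `operation`/`output` is present, and a `null` output is normalised to `undefined`. A simplified sketch with reduced stand-in item shapes (not the real OpenAI types):

```typescript
type Item =
  | { type: 'message'; text: string }
  | { type: 'apply_patch_call'; operation?: string }
  | { type: 'apply_patch_call_output'; output?: string | null }

// Filter and normalise response output items before echoing them back as input.
function convertOutput(output: Item[]): Item[] {
  const content: Item[] = []
  for (const item of output) {
    if (item.type === 'apply_patch_call') {
      if (item.operation !== undefined) content.push(item) // drop calls missing an operation
    } else if (item.type === 'apply_patch_call_output') {
      if (item.output !== undefined) content.push({ ...item, output: item.output ?? undefined })
    } else {
      content.push(item) // all other item types pass through unchanged
    }
  }
  return content
}
```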

@@ -0,0 +1,13 @@
import { isClaude45ReasoningModel } from '@renderer/config/models'
import type { Assistant, Model } from '@renderer/types'
import { isToolUseModeFunction } from '@renderer/utils/assistant'
const INTERLEAVED_THINKING_HEADER = 'interleaved-thinking-2025-05-14'
export function addAnthropicHeaders(assistant: Assistant, model: Model): string[] {
const anthropicHeaders: string[] = []
if (isClaude45ReasoningModel(model) && isToolUseModeFunction(assistant)) {
anthropicHeaders.push(INTERLEAVED_THINKING_HEADER)
}
return anthropicHeaders
}

View File

@@ -7,10 +7,12 @@ import { anthropic } from '@ai-sdk/anthropic'
import { google } from '@ai-sdk/google'
import { vertexAnthropic } from '@ai-sdk/google-vertex/anthropic/edge'
import { vertex } from '@ai-sdk/google-vertex/edge'
import { combineHeaders } from '@ai-sdk/provider-utils'
import type { WebSearchPluginConfig } from '@cherrystudio/ai-core/built-in/plugins'
import { isBaseProvider } from '@cherrystudio/ai-core/core/providers/schemas'
import { loggerService } from '@logger'
import {
isAnthropicModel,
isGenerateImageModel,
isOpenRouterBuiltInWebSearchModel,
isReasoningModel,
@@ -19,6 +21,8 @@ import {
isSupportedThinkingTokenModel,
isWebSearchModel
} from '@renderer/config/models'
import { isAwsBedrockProvider } from '@renderer/config/providers'
import { isVertexProvider } from '@renderer/hooks/useVertexAI'
import { getAssistantSettings, getDefaultModel } from '@renderer/services/AssistantService'
import store from '@renderer/store'
import type { CherryWebSearchConfig } from '@renderer/store/websearch'
@@ -34,6 +38,7 @@ import { setupToolsConfig } from '../utils/mcp'
import { buildProviderOptions } from '../utils/options'
import { getAnthropicThinkingBudget } from '../utils/reasoning'
import { buildProviderBuiltinWebSearchConfig } from '../utils/websearch'
import { addAnthropicHeaders } from './header'
import { supportsTopP } from './modelCapabilities'
import { getTemperature, getTopP } from './modelParameters'
@@ -172,13 +177,21 @@ export async function buildStreamTextParams(
}
}
let headers: Record<string, string | undefined> = options.requestOptions?.headers ?? {}
// https://docs.claude.com/en/docs/build-with-claude/extended-thinking#interleaved-thinking
if (!isVertexProvider(provider) && !isAwsBedrockProvider(provider) && isAnthropicModel(model)) {
const newBetaHeaders = { 'anthropic-beta': addAnthropicHeaders(assistant, model).join(',') }
headers = combineHeaders(headers, newBetaHeaders)
}
// Build the base parameters
const params: StreamTextParams = {
messages: sdkMessages,
maxOutputTokens: maxTokens,
temperature: getTemperature(assistant, model),
abortSignal: options.requestOptions?.signal,
headers: options.requestOptions?.headers,
headers,
providerOptions,
stopWhen: stepCountIs(20),
maxRetries: 0


@@ -1,5 +1,12 @@
import { baseProviderIdSchema, customProviderIdSchema } from '@cherrystudio/ai-core/provider'
import { isOpenAIModel, isQwenMTModel, isSupportFlexServiceTierModel } from '@renderer/config/models'
import { loggerService } from '@logger'
import {
getModelSupportedVerbosity,
isOpenAIModel,
isQwenMTModel,
isSupportFlexServiceTierModel,
isSupportVerbosityModel
} from '@renderer/config/models'
import { isSupportServiceTierProvider } from '@renderer/config/providers'
import { mapLanguageToQwenMTModel } from '@renderer/config/translate'
import type { Assistant, Model, Provider } from '@renderer/types'
@@ -26,6 +33,8 @@ import {
} from './reasoning'
import { getWebSearchParams } from './websearch'
const logger = loggerService.withContext('aiCore.utils.options')
// copy from BaseApiClient.ts
const getServiceTier = (model: Model, provider: Provider) => {
const serviceTierSetting = provider.serviceTier
@@ -70,6 +79,7 @@ export function buildProviderOptions(
enableGenerateImage: boolean
}
): Record<string, any> {
logger.debug('buildProviderOptions', { assistant, model, actualProvider, capabilities })
const rawProviderId = getAiSdkProviderId(actualProvider)
// Build provider-specific options
let providerSpecificOptions: Record<string, any> = {}
@@ -89,9 +99,6 @@ export function buildProviderOptions(
serviceTier: serviceTierSetting
}
break
case 'huggingface':
providerSpecificOptions = buildOpenAIProviderOptions(assistant, model, capabilities)
break
case 'anthropic':
providerSpecificOptions = buildAnthropicProviderOptions(assistant, model, capabilities)
break
@@ -134,6 +141,9 @@ export function buildProviderOptions(
case 'bedrock':
providerSpecificOptions = buildBedrockProviderOptions(assistant, model, capabilities)
break
case 'huggingface':
providerSpecificOptions = buildOpenAIProviderOptions(assistant, model, capabilities)
break
default:
// For other providers, use the generic build logic
providerSpecificOptions = {
@@ -152,13 +162,17 @@ export function buildProviderOptions(
...getCustomParameters(assistant)
}
const rawProviderKey =
let rawProviderKey =
{
'google-vertex': 'google',
'google-vertex-anthropic': 'anthropic',
'ai-gateway': 'gateway'
}[rawProviderId] || rawProviderId
if (rawProviderKey === 'cherryin') {
rawProviderKey = { gemini: 'google' }[actualProvider.type] || actualProvider.type
}
// Return the shape required by the AI Core SDK: { 'providerId': providerOptions }
return {
[rawProviderKey]: providerSpecificOptions
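The key remapping above can be summarized as a small pure function. The provider ids and the `cherryin` fallback are taken from the diff; the function name and shape are illustrative:

```typescript
// Fold AI SDK provider ids into the key the core SDK expects; the aggregated
// 'cherryin' provider falls back to the underlying provider type.
function resolveProviderKey(rawProviderId: string, providerType: string): string {
  let key =
    (
      {
        'google-vertex': 'google',
        'google-vertex-anthropic': 'anthropic',
        'ai-gateway': 'gateway'
      } as Record<string, string>
    )[rawProviderId] ?? rawProviderId;
  if (key === 'cherryin') {
    key = ({ gemini: 'google' } as Record<string, string>)[providerType] ?? providerType;
  }
  return key;
}
```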
@@ -187,6 +201,23 @@ function buildOpenAIProviderOptions(
...reasoningParams
}
}
if (isSupportVerbosityModel(model)) {
const state = window.store?.getState()
const userVerbosity = state?.settings?.openAI?.verbosity
if (userVerbosity && ['low', 'medium', 'high'].includes(userVerbosity)) {
const supportedVerbosity = getModelSupportedVerbosity(model)
// Use user's verbosity if supported, otherwise use the first supported option
const verbosity = supportedVerbosity.includes(userVerbosity) ? userVerbosity : supportedVerbosity[0]
providerOptions = {
...providerOptions,
textVerbosity: verbosity
}
}
}
return providerOptions
}
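The verbosity fallback rule above reduces to one line: keep the user's level when the model supports it, otherwise take the first supported level. The gpt-5-pro case ('high' only) comes from the diff elsewhere in this change set; the helper itself is illustrative:

```typescript
type Verbosity = 'low' | 'medium' | 'high';

// Use the user's verbosity if supported, otherwise the first supported option.
function resolveVerbosity(user: Verbosity, supported: Verbosity[]): Verbosity {
  return supported.includes(user) ? user : supported[0];
}
```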


@@ -1,3 +1,7 @@
import type { BedrockProviderOptions } from '@ai-sdk/amazon-bedrock'
import type { AnthropicProviderOptions } from '@ai-sdk/anthropic'
import type { GoogleGenerativeAIProviderOptions } from '@ai-sdk/google'
import type { XaiProviderOptions } from '@ai-sdk/xai'
import { loggerService } from '@logger'
import { DEFAULT_MAX_TOKENS } from '@renderer/config/constant'
import {
@@ -7,6 +11,7 @@ import {
isDeepSeekHybridInferenceModel,
isDoubaoSeedAfter251015,
isDoubaoThinkingAutoModel,
isGPT51SeriesModel,
isGrok4FastReasoningModel,
isGrokReasoningModel,
isOpenAIDeepResearchModel,
@@ -56,13 +61,20 @@ export function getReasoningEffort(assistant: Assistant, model: Model): Reasonin
}
const reasoningEffort = assistant?.settings?.reasoning_effort
if (!reasoningEffort) {
// Handle undefined and 'none' reasoningEffort.
// TODO: They should be separated.
if (!reasoningEffort || reasoningEffort === 'none') {
// openrouter: use reasoning
if (model.provider === SystemProviderIds.openrouter) {
// Don't disable reasoning for Gemini models that support thinking tokens
if (isSupportedThinkingTokenGeminiModel(model) && !GEMINI_FLASH_MODEL_REGEX.test(model.id)) {
return {}
}
// 'none' is not yet an accepted effort value on OpenRouter;
// we pass it through anyway, expecting support to land soon.
if (isGPT51SeriesModel(model) && reasoningEffort === 'none') {
return { reasoning: { effort: 'none' } }
}
// Don't disable reasoning for models that require it
if (
isGrokReasoningModel(model) ||
@@ -117,6 +129,13 @@ export function getReasoningEffort(assistant: Assistant, model: Model): Reasonin
return { thinking: { type: 'disabled' } }
}
// Special case for GPT-5.1, assuming this is an OpenAI-compatible provider
if (isGPT51SeriesModel(model) && reasoningEffort === 'none') {
return {
reasoningEffort: 'none'
}
}
return {}
}
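The 'none' routing in the hunks above can be sketched as one function, assuming the provider shapes shown in the diff: OpenRouter nests the effort under `reasoning`, while an OpenAI-compatible provider takes a flat `reasoningEffort`. Only GPT-5.1-series models pass 'none' through; other models simply omit the field:

```typescript
// Illustrative routing of reasoning_effort === 'none' per provider kind.
function noneEffortParams(isOpenRouter: boolean, isGpt51: boolean): Record<string, unknown> {
  if (!isGpt51) return {}; // other models: omit the parameter entirely
  return isOpenRouter ? { reasoning: { effort: 'none' } } : { reasoningEffort: 'none' };
}
```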
@@ -371,7 +390,7 @@ export function getOpenAIReasoningParams(assistant: Assistant, model: Model): Re
export function getAnthropicThinkingBudget(assistant: Assistant, model: Model): number {
const { maxTokens, reasoning_effort: reasoningEffort } = getAssistantSettings(assistant)
if (reasoningEffort === undefined) {
if (reasoningEffort === undefined || reasoningEffort === 'none') {
return 0
}
const effortRatio = EFFORT_RATIO[reasoningEffort]
@@ -393,14 +412,17 @@ export function getAnthropicThinkingBudget(assistant: Assistant, model: Model):
 * Get Anthropic reasoning parameters
 * Logic extracted from AnthropicAPIClient
*/
export function getAnthropicReasoningParams(assistant: Assistant, model: Model): Record<string, any> {
export function getAnthropicReasoningParams(
assistant: Assistant,
model: Model
): Pick<AnthropicProviderOptions, 'thinking'> {
if (!isReasoningModel(model)) {
return {}
}
const reasoningEffort = assistant?.settings?.reasoning_effort
if (reasoningEffort === undefined) {
if (reasoningEffort === undefined || reasoningEffort === 'none') {
return {
thinking: {
type: 'disabled'
@@ -429,7 +451,10 @@ export function getAnthropicReasoningParams(assistant: Assistant, model: Model):
 * Note: parameters such as thinkingBudget should be passed in camelCase on the Gemini/GCP endpoints,
 * while Google's official OpenAI-compatible endpoint uses snake_case (thinking_budget)
*/
export function getGeminiReasoningParams(assistant: Assistant, model: Model): Record<string, any> {
export function getGeminiReasoningParams(
assistant: Assistant,
model: Model
): Pick<GoogleGenerativeAIProviderOptions, 'thinkingConfig'> {
if (!isReasoningModel(model)) {
return {}
}
@@ -438,7 +463,7 @@ export function getGeminiReasoningParams(assistant: Assistant, model: Model): Re
// Gemini reasoning parameters
if (isSupportedThinkingTokenGeminiModel(model)) {
if (reasoningEffort === undefined) {
if (reasoningEffort === undefined || reasoningEffort === 'none') {
return {
thinkingConfig: {
includeThoughts: false,
@@ -478,27 +503,35 @@ export function getGeminiReasoningParams(assistant: Assistant, model: Model): Re
* @param model - The model being used
* @returns XAI-specific reasoning parameters
*/
export function getXAIReasoningParams(assistant: Assistant, model: Model): Record<string, any> {
export function getXAIReasoningParams(assistant: Assistant, model: Model): Pick<XaiProviderOptions, 'reasoningEffort'> {
if (!isSupportedReasoningEffortGrokModel(model)) {
return {}
}
const { reasoning_effort: reasoningEffort } = getAssistantSettings(assistant)
if (!reasoningEffort) {
if (!reasoningEffort || reasoningEffort === 'none') {
return {}
}
// For XAI provider Grok models, use reasoningEffort parameter directly
return {
reasoningEffort
switch (reasoningEffort) {
case 'auto':
case 'minimal':
case 'medium':
return { reasoningEffort: 'low' }
case 'low':
case 'high':
return { reasoningEffort }
}
}
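The new switch above clamps UI effort levels to what xAI accepts. A standalone sketch of the same mapping ('auto', 'minimal', and 'medium' collapse to 'low', per the diff):

```typescript
type GrokEffort = 'auto' | 'minimal' | 'low' | 'medium' | 'high';

// xAI only accepts 'low' | 'high'; everything else coerces to 'low'.
function clampXaiEffort(effort: GrokEffort): 'low' | 'high' {
  switch (effort) {
    case 'low':
    case 'high':
      return effort;
    default: // 'auto' | 'minimal' | 'medium'
      return 'low';
  }
}
```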
/**
* Get Bedrock reasoning parameters
*/
export function getBedrockReasoningParams(assistant: Assistant, model: Model): Record<string, any> {
export function getBedrockReasoningParams(
assistant: Assistant,
model: Model
): Pick<BedrockProviderOptions, 'reasoningConfig'> {
if (!isReasoningModel(model)) {
return {}
}
@@ -509,6 +542,14 @@ export function getBedrockReasoningParams(assistant: Assistant, model: Model): R
return {}
}
if (reasoningEffort === 'none') {
return {
reasoningConfig: {
type: 'disabled'
}
}
}
// Only apply thinking budget for Claude reasoning models
if (!isSupportedThinkingTokenClaudeModel(model)) {
return {}

(Binary files not shown: 4 new image assets added, 18-21 KiB each)

@@ -1,5 +1,4 @@
import { loggerService } from '@logger'
import ClaudeIcon from '@renderer/assets/images/models/claude.png'
import { ErrorBoundary } from '@renderer/components/ErrorBoundary'
import { TopView } from '@renderer/components/TopView'
import { permissionModeCards } from '@renderer/config/agent'
@@ -9,7 +8,6 @@ import SelectAgentBaseModelButton from '@renderer/pages/home/components/SelectAg
import type {
AddAgentForm,
AgentEntity,
AgentType,
ApiModel,
BaseAgentForm,
PermissionMode,
@@ -17,30 +15,22 @@ import type {
UpdateAgentForm
} from '@renderer/types'
import { AgentConfigurationSchema, isAgentType } from '@renderer/types'
import { Avatar, Button, Input, Modal, Select } from 'antd'
import { Button, Input, Modal, Select } from 'antd'
import { AlertTriangleIcon } from 'lucide-react'
import type { ChangeEvent, FormEvent } from 'react'
import { useCallback, useEffect, useMemo, useRef, useState } from 'react'
import { useTranslation } from 'react-i18next'
import styled from 'styled-components'
import type { BaseOption } from './shared'
const { TextArea } = Input
const logger = loggerService.withContext('AddAgentPopup')
interface AgentTypeOption extends BaseOption {
type: 'type'
key: AgentEntity['type']
name: AgentEntity['name']
}
type AgentWithTools = AgentEntity & { tools?: Tool[] }
const buildAgentForm = (existing?: AgentWithTools): BaseAgentForm => ({
type: existing?.type ?? 'claude-code',
name: existing?.name ?? 'Claude Code',
name: existing?.name ?? 'Agent',
description: existing?.description,
instructions: existing?.instructions,
model: existing?.model ?? '',
@@ -100,54 +90,6 @@ const PopupContainer: React.FC<Props> = ({ agent, afterSubmit, resolve }) => {
})
}, [])
// add supported agents type here.
const agentConfig = useMemo(
() =>
[
{
type: 'type',
key: 'claude-code',
label: 'Claude Code',
name: 'Claude Code',
avatar: ClaudeIcon
}
] as const satisfies AgentTypeOption[],
[]
)
const agentOptions = useMemo(
() =>
agentConfig.map((option) => ({
value: option.key,
label: (
<OptionWrapper>
<Avatar src={option.avatar} size={24} />
<span>{option.label}</span>
</OptionWrapper>
)
})),
[agentConfig]
)
const onAgentTypeChange = useCallback(
(value: AgentType) => {
const prevConfig = agentConfig.find((config) => config.key === form.type)
let newName: string | undefined = form.name
if (prevConfig && prevConfig.name === form.name) {
const newConfig = agentConfig.find((config) => config.key === value)
if (newConfig) {
newName = newConfig.name
}
}
setForm((prev) => ({
...prev,
type: value,
name: newName
}))
},
[agentConfig, form.name, form.type]
)
const onNameChange = useCallback((e: ChangeEvent<HTMLInputElement>) => {
setForm((prev) => ({
...prev,
@@ -155,12 +97,12 @@ const PopupContainer: React.FC<Props> = ({ agent, afterSubmit, resolve }) => {
}))
}, [])
const onDescChange = useCallback((e: ChangeEvent<HTMLTextAreaElement>) => {
setForm((prev) => ({
...prev,
description: e.target.value
}))
}, [])
// const onDescChange = useCallback((e: ChangeEvent<HTMLTextAreaElement>) => {
// setForm((prev) => ({
// ...prev,
// description: e.target.value
// }))
// }, [])
const onInstChange = useCallback((e: ChangeEvent<HTMLTextAreaElement>) => {
setForm((prev) => ({
@@ -334,16 +276,6 @@ const PopupContainer: React.FC<Props> = ({ agent, afterSubmit, resolve }) => {
<StyledForm onSubmit={onSubmit}>
<FormContent>
<FormRow>
<FormItem style={{ flex: 1 }}>
<Label>{t('agent.type.label')}</Label>
<Select
value={form.type}
onChange={onAgentTypeChange}
options={agentOptions}
disabled={isEditing(agent)}
style={{ width: '100%' }}
/>
</FormItem>
<FormItem style={{ flex: 1 }}>
<Label>
{t('common.name')} <RequiredMark>*</RequiredMark>
@@ -363,7 +295,7 @@ const PopupContainer: React.FC<Props> = ({ agent, afterSubmit, resolve }) => {
avatarSize={24}
iconSize={16}
buttonStyle={{
padding: '8px 12px',
padding: '3px 8px',
width: '100%',
border: '1px solid var(--color-border)',
borderRadius: 6,
@@ -382,7 +314,6 @@ const PopupContainer: React.FC<Props> = ({ agent, afterSubmit, resolve }) => {
onChange={onPermissionModeChange}
style={{ width: '100%' }}
placeholder={t('agent.settings.tooling.permissionMode.placeholder', 'Select permission mode')}
dropdownStyle={{ minWidth: '500px' }}
optionLabelProp="label">
{permissionModeCards.map((item) => (
<Select.Option key={item.mode} value={item.mode} label={t(item.titleKey, item.titleFallback)}>
@@ -438,10 +369,10 @@ const PopupContainer: React.FC<Props> = ({ agent, afterSubmit, resolve }) => {
<TextArea rows={3} value={form.instructions ?? ''} onChange={onInstChange} />
</FormItem>
<FormItem>
{/* <FormItem>
<Label>{t('common.description')}</Label>
<TextArea rows={2} value={form.description ?? ''} onChange={onDescChange} />
</FormItem>
<TextArea rows={1} value={form.description ?? ''} onChange={onDescChange} />
</FormItem> */}
</FormContent>
<FormFooter>
@@ -575,14 +506,7 @@ const FormFooter = styled.div`
display: flex;
justify-content: flex-end;
gap: 8px;
padding-top: 16px;
border-top: 1px solid var(--color-border);
`
const OptionWrapper = styled.div`
display: flex;
align-items: center;
gap: 8px;
padding: 10px;
`
const PermissionOptionWrapper = styled.div`


@@ -1,6 +1,12 @@
import { describe, expect, it, vi } from 'vitest'
import { isDoubaoSeedAfter251015, isDoubaoThinkingAutoModel, isLingReasoningModel } from '../models/reasoning'
import {
isDoubaoSeedAfter251015,
isDoubaoThinkingAutoModel,
isGeminiReasoningModel,
isLingReasoningModel,
isSupportedThinkingTokenGeminiModel
} from '../models/reasoning'
vi.mock('@renderer/store', () => ({
default: {
@@ -231,3 +237,284 @@ describe('Ling Models', () => {
})
})
})
describe('Gemini Models', () => {
describe('isSupportedThinkingTokenGeminiModel', () => {
it('should return true for gemini 2.5 models', () => {
expect(
isSupportedThinkingTokenGeminiModel({
id: 'gemini-2.5-flash',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isSupportedThinkingTokenGeminiModel({
id: 'gemini-2.5-pro',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isSupportedThinkingTokenGeminiModel({
id: 'gemini-2.5-flash-latest',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isSupportedThinkingTokenGeminiModel({
id: 'gemini-2.5-pro-latest',
name: '',
provider: '',
group: ''
})
).toBe(true)
})
it('should return true for gemini latest models', () => {
expect(
isSupportedThinkingTokenGeminiModel({
id: 'gemini-flash-latest',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isSupportedThinkingTokenGeminiModel({
id: 'gemini-pro-latest',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isSupportedThinkingTokenGeminiModel({
id: 'gemini-flash-lite-latest',
name: '',
provider: '',
group: ''
})
).toBe(true)
})
it('should return true for gemini 3 models', () => {
// Preview versions
expect(
isSupportedThinkingTokenGeminiModel({
id: 'gemini-3-pro-preview',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isSupportedThinkingTokenGeminiModel({
id: 'google/gemini-3-pro-preview',
name: '',
provider: '',
group: ''
})
).toBe(true)
// Future stable versions
expect(
isSupportedThinkingTokenGeminiModel({
id: 'gemini-3-flash',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isSupportedThinkingTokenGeminiModel({
id: 'gemini-3-pro',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isSupportedThinkingTokenGeminiModel({
id: 'google/gemini-3-flash',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isSupportedThinkingTokenGeminiModel({
id: 'google/gemini-3-pro',
name: '',
provider: '',
group: ''
})
).toBe(true)
})
it('should return false for image and tts models', () => {
expect(
isSupportedThinkingTokenGeminiModel({
id: 'gemini-2.5-flash-image',
name: '',
provider: '',
group: ''
})
).toBe(false)
expect(
isSupportedThinkingTokenGeminiModel({
id: 'gemini-2.5-flash-preview-tts',
name: '',
provider: '',
group: ''
})
).toBe(false)
})
it('should return false for older gemini models', () => {
expect(
isSupportedThinkingTokenGeminiModel({
id: 'gemini-1.5-flash',
name: '',
provider: '',
group: ''
})
).toBe(false)
expect(
isSupportedThinkingTokenGeminiModel({
id: 'gemini-1.5-pro',
name: '',
provider: '',
group: ''
})
).toBe(false)
expect(
isSupportedThinkingTokenGeminiModel({
id: 'gemini-1.0-pro',
name: '',
provider: '',
group: ''
})
).toBe(false)
})
})
describe('isGeminiReasoningModel', () => {
it('should return true for gemini thinking models', () => {
expect(
isGeminiReasoningModel({
id: 'gemini-2.0-flash-thinking',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isGeminiReasoningModel({
id: 'gemini-thinking-exp',
name: '',
provider: '',
group: ''
})
).toBe(true)
})
it('should return true for supported thinking token gemini models', () => {
expect(
isGeminiReasoningModel({
id: 'gemini-2.5-flash',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isGeminiReasoningModel({
id: 'gemini-2.5-pro',
name: '',
provider: '',
group: ''
})
).toBe(true)
})
it('should return true for gemini-3 models', () => {
// Preview versions
expect(
isGeminiReasoningModel({
id: 'gemini-3-pro-preview',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isGeminiReasoningModel({
id: 'google/gemini-3-pro-preview',
name: '',
provider: '',
group: ''
})
).toBe(true)
// Future stable versions
expect(
isGeminiReasoningModel({
id: 'gemini-3-flash',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isGeminiReasoningModel({
id: 'gemini-3-pro',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isGeminiReasoningModel({
id: 'google/gemini-3-flash',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isGeminiReasoningModel({
id: 'google/gemini-3-pro',
name: '',
provider: '',
group: ''
})
).toBe(true)
})
it('should return false for older gemini models without thinking', () => {
expect(
isGeminiReasoningModel({
id: 'gemini-1.5-flash',
name: '',
provider: '',
group: ''
})
).toBe(false)
expect(
isGeminiReasoningModel({
id: 'gemini-1.5-pro',
name: '',
provider: '',
group: ''
})
).toBe(false)
})
it('should return false for undefined model', () => {
expect(isGeminiReasoningModel(undefined)).toBe(false)
})
})
})


@@ -0,0 +1,167 @@
import { describe, expect, it, vi } from 'vitest'
import { isVisionModel } from '../models/vision'
vi.mock('@renderer/store', () => ({
default: {
getState: () => ({
llm: {
settings: {}
}
})
}
}))
// FIXME: Unclear why this gets imported; possibly a circular dependency somewhere
vi.mock('@renderer/services/AssistantService.ts', () => ({
getDefaultAssistant: () => {
return {
id: 'default',
name: 'default',
emoji: '😀',
prompt: '',
topics: [],
messages: [],
type: 'assistant',
regularPhrases: [],
settings: {}
}
},
getProviderByModel: () => null
}))
describe('isVisionModel', () => {
describe('Gemini Models', () => {
it('should return true for gemini 1.5 models', () => {
expect(
isVisionModel({
id: 'gemini-1.5-flash',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isVisionModel({
id: 'gemini-1.5-pro',
name: '',
provider: '',
group: ''
})
).toBe(true)
})
it('should return true for gemini 2.x models', () => {
expect(
isVisionModel({
id: 'gemini-2.0-flash',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isVisionModel({
id: 'gemini-2.0-pro',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isVisionModel({
id: 'gemini-2.5-flash',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isVisionModel({
id: 'gemini-2.5-pro',
name: '',
provider: '',
group: ''
})
).toBe(true)
})
it('should return true for gemini latest models', () => {
expect(
isVisionModel({
id: 'gemini-flash-latest',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isVisionModel({
id: 'gemini-pro-latest',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isVisionModel({
id: 'gemini-flash-lite-latest',
name: '',
provider: '',
group: ''
})
).toBe(true)
})
it('should return true for gemini 3 models', () => {
// Preview versions
expect(
isVisionModel({
id: 'gemini-3-pro-preview',
name: '',
provider: '',
group: ''
})
).toBe(true)
// Future stable versions
expect(
isVisionModel({
id: 'gemini-3-flash',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isVisionModel({
id: 'gemini-3-pro',
name: '',
provider: '',
group: ''
})
).toBe(true)
})
it('should return true for gemini exp models', () => {
expect(
isVisionModel({
id: 'gemini-exp-1206',
name: '',
provider: '',
group: ''
})
).toBe(true)
})
it('should return false for gemini 1.0 models', () => {
expect(
isVisionModel({
id: 'gemini-1.0-pro',
name: '',
provider: '',
group: ''
})
).toBe(false)
})
})
})


@@ -0,0 +1,64 @@
import { describe, expect, it, vi } from 'vitest'
import { GEMINI_SEARCH_REGEX } from '../models/websearch'
vi.mock('@renderer/store', () => ({
default: {
getState: () => ({
llm: {
settings: {}
}
})
}
}))
// FIXME: Unclear why this gets imported; possibly a circular dependency somewhere
vi.mock('@renderer/services/AssistantService.ts', () => ({
getDefaultAssistant: () => {
return {
id: 'default',
name: 'default',
emoji: '😀',
prompt: '',
topics: [],
messages: [],
type: 'assistant',
regularPhrases: [],
settings: {}
}
},
getProviderByModel: () => null
}))
describe('Gemini Search Models', () => {
describe('GEMINI_SEARCH_REGEX', () => {
it('should match gemini 2.x models', () => {
expect(GEMINI_SEARCH_REGEX.test('gemini-2.0-flash')).toBe(true)
expect(GEMINI_SEARCH_REGEX.test('gemini-2.0-pro')).toBe(true)
expect(GEMINI_SEARCH_REGEX.test('gemini-2.5-flash')).toBe(true)
expect(GEMINI_SEARCH_REGEX.test('gemini-2.5-pro')).toBe(true)
expect(GEMINI_SEARCH_REGEX.test('gemini-2.5-flash-latest')).toBe(true)
expect(GEMINI_SEARCH_REGEX.test('gemini-2.5-pro-latest')).toBe(true)
})
it('should match gemini latest models', () => {
expect(GEMINI_SEARCH_REGEX.test('gemini-flash-latest')).toBe(true)
expect(GEMINI_SEARCH_REGEX.test('gemini-pro-latest')).toBe(true)
expect(GEMINI_SEARCH_REGEX.test('gemini-flash-lite-latest')).toBe(true)
})
it('should match gemini 3 models', () => {
// Preview versions
expect(GEMINI_SEARCH_REGEX.test('gemini-3-pro-preview')).toBe(true)
// Future stable versions
expect(GEMINI_SEARCH_REGEX.test('gemini-3-flash')).toBe(true)
expect(GEMINI_SEARCH_REGEX.test('gemini-3-pro')).toBe(true)
})
it('should not match older gemini models', () => {
expect(GEMINI_SEARCH_REGEX.test('gemini-1.5-flash')).toBe(false)
expect(GEMINI_SEARCH_REGEX.test('gemini-1.5-pro')).toBe(false)
expect(GEMINI_SEARCH_REGEX.test('gemini-1.0-pro')).toBe(false)
})
})
})


@@ -59,6 +59,10 @@ import {
} from '@renderer/assets/images/models/gpt_dark.png'
import ChatGPTImageModelLogo from '@renderer/assets/images/models/gpt_image_1.png'
import ChatGPTo1ModelLogo from '@renderer/assets/images/models/gpt_o1.png'
import GPT51ModelLogo from '@renderer/assets/images/models/gpt-5.1.png'
import GPT51ChatModelLogo from '@renderer/assets/images/models/gpt-5.1-chat.png'
import GPT51CodexModelLogo from '@renderer/assets/images/models/gpt-5.1-codex.png'
import GPT51CodexMiniModelLogo from '@renderer/assets/images/models/gpt-5.1-codex-mini.png'
import GPT5ModelLogo from '@renderer/assets/images/models/gpt-5.png'
import GPT5ChatModelLogo from '@renderer/assets/images/models/gpt-5-chat.png'
import GPT5CodexModelLogo from '@renderer/assets/images/models/gpt-5-codex.png'
@@ -182,6 +186,10 @@ export function getModelLogoById(modelId: string): string | undefined {
'gpt-5-nano': GPT5NanoModelLogo,
'gpt-5-chat': GPT5ChatModelLogo,
'gpt-5-codex': GPT5CodexModelLogo,
'gpt-5.1-codex-mini': GPT51CodexMiniModelLogo,
'gpt-5.1-codex': GPT51CodexModelLogo,
'gpt-5.1-chat': GPT51ChatModelLogo,
'gpt-5.1': GPT51ModelLogo,
'gpt-5': GPT5ModelLogo,
gpts: isLight ? ChatGPT4ModelLogo : ChatGPT4ModelLogoDark,
'gpt-oss(?:-[\\w-]+)': isLight ? ChatGptModelLogo : ChatGptModelLogoDark,


@@ -8,7 +8,7 @@ import type {
import { getLowerBaseModelName, isUserSelectedModelType } from '@renderer/utils'
import { isEmbeddingModel, isRerankModel } from './embedding'
import { isGPT5SeriesModel } from './utils'
import { isGPT5ProModel, isGPT5SeriesModel, isGPT51SeriesModel } from './utils'
import { isTextToImageModel } from './vision'
import { GEMINI_FLASH_MODEL_REGEX, isOpenAIDeepResearchModel } from './websearch'
@@ -24,6 +24,9 @@ export const MODEL_SUPPORTED_REASONING_EFFORT: ReasoningEffortConfig = {
openai_deep_research: ['medium'] as const,
gpt5: ['minimal', 'low', 'medium', 'high'] as const,
gpt5_codex: ['low', 'medium', 'high'] as const,
gpt5_1: ['none', 'low', 'medium', 'high'] as const,
gpt5_1_codex: ['none', 'medium', 'high'] as const,
gpt5pro: ['high'] as const,
grok: ['low', 'high'] as const,
grok4_fast: ['auto'] as const,
gemini: ['low', 'medium', 'high', 'auto'] as const,
@@ -41,24 +44,27 @@ export const MODEL_SUPPORTED_REASONING_EFFORT: ReasoningEffortConfig = {
// Mapping from model type to supported options
export const MODEL_SUPPORTED_OPTIONS: ThinkingOptionConfig = {
default: ['off', ...MODEL_SUPPORTED_REASONING_EFFORT.default] as const,
default: ['none', ...MODEL_SUPPORTED_REASONING_EFFORT.default] as const,
o: MODEL_SUPPORTED_REASONING_EFFORT.o,
openai_deep_research: MODEL_SUPPORTED_REASONING_EFFORT.openai_deep_research,
gpt5: [...MODEL_SUPPORTED_REASONING_EFFORT.gpt5] as const,
gpt5pro: MODEL_SUPPORTED_REASONING_EFFORT.gpt5pro,
gpt5_codex: MODEL_SUPPORTED_REASONING_EFFORT.gpt5_codex,
gpt5_1: MODEL_SUPPORTED_REASONING_EFFORT.gpt5_1,
gpt5_1_codex: MODEL_SUPPORTED_REASONING_EFFORT.gpt5_1_codex,
grok: MODEL_SUPPORTED_REASONING_EFFORT.grok,
grok4_fast: ['off', ...MODEL_SUPPORTED_REASONING_EFFORT.grok4_fast] as const,
gemini: ['off', ...MODEL_SUPPORTED_REASONING_EFFORT.gemini] as const,
grok4_fast: ['none', ...MODEL_SUPPORTED_REASONING_EFFORT.grok4_fast] as const,
gemini: ['none', ...MODEL_SUPPORTED_REASONING_EFFORT.gemini] as const,
gemini_pro: MODEL_SUPPORTED_REASONING_EFFORT.gemini_pro,
qwen: ['off', ...MODEL_SUPPORTED_REASONING_EFFORT.qwen] as const,
qwen: ['none', ...MODEL_SUPPORTED_REASONING_EFFORT.qwen] as const,
qwen_thinking: MODEL_SUPPORTED_REASONING_EFFORT.qwen_thinking,
doubao: ['off', ...MODEL_SUPPORTED_REASONING_EFFORT.doubao] as const,
doubao_no_auto: ['off', ...MODEL_SUPPORTED_REASONING_EFFORT.doubao_no_auto] as const,
doubao: ['none', ...MODEL_SUPPORTED_REASONING_EFFORT.doubao] as const,
doubao_no_auto: ['none', ...MODEL_SUPPORTED_REASONING_EFFORT.doubao_no_auto] as const,
doubao_after_251015: MODEL_SUPPORTED_REASONING_EFFORT.doubao_after_251015,
hunyuan: ['off', ...MODEL_SUPPORTED_REASONING_EFFORT.hunyuan] as const,
zhipu: ['off', ...MODEL_SUPPORTED_REASONING_EFFORT.zhipu] as const,
hunyuan: ['none', ...MODEL_SUPPORTED_REASONING_EFFORT.hunyuan] as const,
zhipu: ['none', ...MODEL_SUPPORTED_REASONING_EFFORT.zhipu] as const,
perplexity: MODEL_SUPPORTED_REASONING_EFFORT.perplexity,
deepseek_hybrid: ['off', ...MODEL_SUPPORTED_REASONING_EFFORT.deepseek_hybrid] as const
deepseek_hybrid: ['none', ...MODEL_SUPPORTED_REASONING_EFFORT.deepseek_hybrid] as const
} as const
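The rename from 'off' to 'none' above follows one rule: models whose thinking can be disabled get 'none' prepended to their effort levels, while fixed-thinking types (like `qwen_thinking`) reuse the effort list as-is. A sketch of that rule, with an illustrative helper:

```typescript
// Prepend 'none' (formerly 'off') for models that can disable thinking,
// avoiding a duplicate when the effort list already starts with it.
function thinkingOptions(efforts: readonly string[], canDisable: boolean): string[] {
  return canDisable && !efforts.includes('none') ? ['none', ...efforts] : [...efforts];
}
```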
const withModelIdAndNameAsId = <T>(model: Model, fn: (model: Model) => T): { idResult: T; nameResult: T } => {
@@ -75,11 +81,20 @@ const _getThinkModelType = (model: Model): ThinkingModelType => {
if (isOpenAIDeepResearchModel(model)) {
return 'openai_deep_research'
}
if (isGPT5SeriesModel(model)) {
if (isGPT51SeriesModel(model)) {
if (modelId.includes('codex')) {
thinkingModelType = 'gpt5_1_codex'
} else {
thinkingModelType = 'gpt5_1'
}
} else if (isGPT5SeriesModel(model)) {
if (modelId.includes('codex')) {
thinkingModelType = 'gpt5_codex'
} else {
thinkingModelType = 'gpt5'
if (isGPT5ProModel(model)) {
thinkingModelType = 'gpt5pro'
}
}
} else if (isSupportedReasoningEffortOpenAIModel(model)) {
thinkingModelType = 'o'
@@ -239,7 +254,7 @@ export function isGeminiReasoningModel(model?: Model): boolean {
// Regex for Gemini models that support thinking mode
export const GEMINI_THINKING_MODEL_REGEX =
/gemini-(?:2\.5.*(?:-latest)?|flash-latest|pro-latest|flash-lite-latest)(?:-[\w-]+)*$/i
/gemini-(?:2\.5.*(?:-latest)?|3-(?:flash|pro)(?:-preview)?|flash-latest|pro-latest|flash-lite-latest)(?:-[\w-]+)*$/i
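The extended regex can be exercised directly. A standalone copy of the pattern from the diff, so the new `3-(?:flash|pro)(?:-preview)?` alternative is visible in isolation:

```typescript
// Copy of the updated pattern: matches Gemini 2.5, Gemini 3 (incl. previews),
// and the *-latest aliases, but not 1.x models.
const GEMINI_THINKING_MODEL_REGEX =
  /gemini-(?:2\.5.*(?:-latest)?|3-(?:flash|pro)(?:-preview)?|flash-latest|pro-latest|flash-lite-latest)(?:-[\w-]+)*$/i;
```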
export const isSupportedThinkingTokenGeminiModel = (model: Model): boolean => {
const modelId = getLowerBaseModelName(model.id, '/')
@@ -526,7 +541,7 @@ export function isSupportedReasoningEffortOpenAIModel(model: Model): boolean {
modelId.includes('o3') ||
modelId.includes('o4') ||
modelId.includes('gpt-oss') ||
(isGPT5SeriesModel(model) && !modelId.includes('chat'))
((isGPT5SeriesModel(model) || isGPT51SeriesModel(model)) && !modelId.includes('chat'))
)
}


@@ -54,7 +54,7 @@ export function isSupportedFlexServiceTier(model: Model): boolean {
export function isSupportVerbosityModel(model: Model): boolean {
const modelId = getLowerBaseModelName(model.id)
return isGPT5SeriesModel(model) && !modelId.includes('chat')
return (isGPT5SeriesModel(model) || isGPT51SeriesModel(model)) && !modelId.includes('chat')
}
export function isOpenAIChatCompletionOnlyModel(model: Model): boolean {
@@ -227,12 +227,32 @@ export const isNotSupportSystemMessageModel = (model: Model): boolean => {
export const isGPT5SeriesModel = (model: Model) => {
const modelId = getLowerBaseModelName(model.id)
return modelId.includes('gpt-5')
return modelId.includes('gpt-5') && !modelId.includes('gpt-5.1')
}
export const isGPT5SeriesReasoningModel = (model: Model) => {
const modelId = getLowerBaseModelName(model.id)
return modelId.includes('gpt-5') && !modelId.includes('chat')
return isGPT5SeriesModel(model) && !modelId.includes('chat')
}
export const isGPT51SeriesModel = (model: Model) => {
const modelId = getLowerBaseModelName(model.id)
return modelId.includes('gpt-5.1')
}
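The series split above exists because a plain `includes('gpt-5')` also matches 'gpt-5.1' ids, so the GPT-5 check must exclude them explicitly. A sketch of the two predicates, with `getLowerBaseModelName` approximated by a simple lowercase:

```typescript
// 'gpt-5.1-*' ids belong to the 5.1 series only.
const isGpt5Series = (id: string): boolean => {
  const m = id.toLowerCase();
  return m.includes('gpt-5') && !m.includes('gpt-5.1');
};

const isGpt51Series = (id: string): boolean => id.toLowerCase().includes('gpt-5.1');
```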
// GPT-5 verbosity configuration
// gpt-5-pro only supports 'high', other GPT-5 models support all levels
export const MODEL_SUPPORTED_VERBOSITY: Record<string, ('low' | 'medium' | 'high')[]> = {
'gpt-5-pro': ['high'],
default: ['low', 'medium', 'high']
}
export const getModelSupportedVerbosity = (model: Model): ('low' | 'medium' | 'high')[] => {
const modelId = getLowerBaseModelName(model.id)
if (modelId.includes('gpt-5-pro')) {
return MODEL_SUPPORTED_VERBOSITY['gpt-5-pro']
}
return MODEL_SUPPORTED_VERBOSITY.default
}
export const isGeminiModel = (model: Model) => {
@@ -251,3 +271,8 @@ export const ZHIPU_RESULT_TOKENS = ['<|begin_of_box|>', '<|end_of_box|>'] as con
export const agentModelFilter = (model: Model): boolean => {
return !isEmbeddingModel(model) && !isRerankModel(model) && !isTextToImageModel(model)
}
export const isGPT5ProModel = (model: Model) => {
const modelId = getLowerBaseModelName(model.id)
return modelId.includes('gpt-5-pro')
}


@@ -12,6 +12,7 @@ const visionAllowedModels = [
'gemini-1\\.5',
'gemini-2\\.0',
'gemini-2\\.5',
'gemini-3-(?:flash|pro)(?:-preview)?',
'gemini-(flash|pro|flash-lite)-latest',
'gemini-exp',
'claude-3',
@@ -64,13 +65,13 @@ const visionExcludedModels = [
'o1-preview',
'AIDC-AI/Marco-o1'
]
export const VISION_REGEX = new RegExp(
const VISION_REGEX = new RegExp(
`\\b(?!(?:${visionExcludedModels.join('|')})\\b)(${visionAllowedModels.join('|')})\\b`,
'i'
)
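The allow/exclude construction behind `VISION_REGEX` uses a negative lookahead to reject excluded ids before the allow-list alternatives are tried. A sketch with small illustrative subsets of the real lists:

```typescript
// Allow-list entries are regex fragments; the lookahead vetoes exact
// excluded ids (e.g. 'o1-preview') while plain 'o1' still matches.
const allowed = ['gemini-1\\.5', 'gemini-3-(?:flash|pro)(?:-preview)?', 'o1'];
const excluded = ['o1-preview'];
const visionRegex = new RegExp(
  `\\b(?!(?:${excluded.join('|')})\\b)(${allowed.join('|')})\\b`,
  'i'
);
```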
// For middleware to identify models that must use the dedicated Image API
export const DEDICATED_IMAGE_MODELS = [
const DEDICATED_IMAGE_MODELS = [
'grok-2-image',
'grok-2-image-1212',
'grok-2-image-latest',
@@ -79,7 +80,7 @@ export const DEDICATED_IMAGE_MODELS = [
'gpt-image-1'
]
export const IMAGE_ENHANCEMENT_MODELS = [
const IMAGE_ENHANCEMENT_MODELS = [
'grok-2-image(?:-[\\w-]+)?',
'qwen-image-edit',
'gpt-image-1',
@@ -90,9 +91,9 @@ export const IMAGE_ENHANCEMENT_MODELS = [
const IMAGE_ENHANCEMENT_MODELS_REGEX = new RegExp(IMAGE_ENHANCEMENT_MODELS.join('|'), 'i')
// Models that should auto-enable image generation button when selected
-export const AUTO_ENABLE_IMAGE_MODELS = ['gemini-2.5-flash-image', ...DEDICATED_IMAGE_MODELS]
+const AUTO_ENABLE_IMAGE_MODELS = ['gemini-2.5-flash-image', ...DEDICATED_IMAGE_MODELS]
-export const OPENAI_TOOL_USE_IMAGE_GENERATION_MODELS = [
+const OPENAI_TOOL_USE_IMAGE_GENERATION_MODELS = [
'o3',
'gpt-4o',
'gpt-4o-mini',
@@ -102,9 +103,9 @@ export const OPENAI_TOOL_USE_IMAGE_GENERATION_MODELS = [
'gpt-5'
]
-export const OPENAI_IMAGE_GENERATION_MODELS = [...OPENAI_TOOL_USE_IMAGE_GENERATION_MODELS, 'gpt-image-1']
+const OPENAI_IMAGE_GENERATION_MODELS = [...OPENAI_TOOL_USE_IMAGE_GENERATION_MODELS, 'gpt-image-1']
-export const GENERATE_IMAGE_MODELS = [
+const GENERATE_IMAGE_MODELS = [
'gemini-2.0-flash-exp',
'gemini-2.0-flash-exp-image-generation',
'gemini-2.0-flash-preview-image-generation',
@@ -169,22 +170,23 @@ export function isPureGenerateImageModel(model: Model): boolean {
}
// Text to image models
-export const TEXT_TO_IMAGE_REGEX = /flux|diffusion|stabilityai|sd-|dall|cogview|janus|midjourney|mj-|image|gpt-image/i
+const TEXT_TO_IMAGE_REGEX = /flux|diffusion|stabilityai|sd-|dall|cogview|janus|midjourney|mj-|image|gpt-image/i
export function isTextToImageModel(model: Model): boolean {
const modelId = getLowerBaseModelName(model.id)
return TEXT_TO_IMAGE_REGEX.test(modelId)
}
-export function isNotSupportedImageSizeModel(model?: Model): boolean {
-if (!model) {
-return false
-}
-const baseName = getLowerBaseModelName(model.id, '/')
-return baseName.includes('grok-2-image')
-}
+// It's not used now
+// export function isNotSupportedImageSizeModel(model?: Model): boolean {
+//   if (!model) {
+//     return false
+//   }
+//   const baseName = getLowerBaseModelName(model.id, '/')
+//   return baseName.includes('grok-2-image')
+// }
/**
* Determines whether the model supports image enhancement (editing, enhancement, restoration, etc.)


@@ -3,7 +3,13 @@ import type { Model } from '@renderer/types'
import { SystemProviderIds } from '@renderer/types'
import { getLowerBaseModelName, isUserSelectedModelType } from '@renderer/utils'
-import { isGeminiProvider, isNewApiProvider, isOpenAICompatibleProvider, isOpenAIProvider } from '../providers'
+import {
+isGeminiProvider,
+isNewApiProvider,
+isOpenAICompatibleProvider,
+isOpenAIProvider,
+isVertexAiProvider
+} from '../providers'
import { isEmbeddingModel, isRerankModel } from './embedding'
import { isAnthropicModel } from './utils'
import { isPureGenerateImageModel, isTextToImageModel } from './vision'
@@ -16,7 +22,7 @@ export const CLAUDE_SUPPORTED_WEBSEARCH_REGEX = new RegExp(
export const GEMINI_FLASH_MODEL_REGEX = new RegExp('gemini.*-flash.*$')
export const GEMINI_SEARCH_REGEX = new RegExp(
-'gemini-(?:2.*(?:-latest)?|flash-latest|pro-latest|flash-lite-latest)(?:-[\\w-]+)*$',
+'gemini-(?:2.*(?:-latest)?|3-(?:flash|pro)(?:-preview)?|flash-latest|pro-latest|flash-lite-latest)(?:-[\\w-]+)*$',
'i'
)
@@ -70,7 +76,7 @@ export function isWebSearchModel(model: Model): boolean {
// Not supported on Bedrock and Vertex
if (
isAnthropicModel(model) &&
-(provider.id === SystemProviderIds['aws-bedrock'] || provider.id === SystemProviderIds.vertexai)
+!(provider.id === SystemProviderIds['aws-bedrock'] || provider.id === SystemProviderIds.vertexai)
) {
return CLAUDE_SUPPORTED_WEBSEARCH_REGEX.test(modelId)
}
@@ -107,7 +113,7 @@ export function isWebSearchModel(model: Model): boolean {
}
}
-if (isGeminiProvider(provider) || provider.id === SystemProviderIds.vertexai) {
+if (isGeminiProvider(provider) || isVertexAiProvider(provider)) {
return GEMINI_SEARCH_REGEX.test(modelId)
}


@@ -67,7 +67,7 @@ import type {
SystemProvider,
SystemProviderId
} from '@renderer/types'
-import { isSystemProvider, OpenAIServiceTiers } from '@renderer/types'
+import { isSystemProvider, OpenAIServiceTiers, SystemProviderIds } from '@renderer/types'
import { TOKENFLUX_HOST } from './constant'
import { glm45FlashModel, qwen38bModel, SYSTEM_MODELS } from './models'
@@ -275,6 +275,7 @@ export const SYSTEM_PROVIDERS_CONFIG: Record<SystemProviderId, SystemProvider> =
type: 'openai',
apiKey: '',
apiHost: 'https://api.qnaigc.com',
anthropicApiHost: 'https://api.qnaigc.com',
models: SYSTEM_MODELS.qiniu,
isSystem: true,
enabled: false
@@ -665,6 +666,7 @@ export const SYSTEM_PROVIDERS_CONFIG: Record<SystemProviderId, SystemProvider> =
type: 'openai',
apiKey: '',
apiHost: 'https://api.longcat.chat/openai',
anthropicApiHost: 'https://api.longcat.chat/anthropic',
models: SYSTEM_MODELS.longcat,
isSystem: true,
enabled: false
@@ -684,7 +686,7 @@ export const SYSTEM_PROVIDERS_CONFIG: Record<SystemProviderId, SystemProvider> =
name: 'AI Gateway',
type: 'ai-gateway',
apiKey: '',
-apiHost: 'https://ai-gateway.vercel.sh/v1',
+apiHost: 'https://ai-gateway.vercel.sh/v1/ai',
models: [],
isSystem: true,
enabled: false
@@ -1519,7 +1521,10 @@ const SUPPORT_URL_CONTEXT_PROVIDER_TYPES = [
] as const satisfies ProviderType[]
export const isSupportUrlContextProvider = (provider: Provider) => {
-return SUPPORT_URL_CONTEXT_PROVIDER_TYPES.some((type) => type === provider.type)
+return (
+SUPPORT_URL_CONTEXT_PROVIDER_TYPES.some((type) => type === provider.type) ||
+provider.id === SystemProviderIds.cherryin
+)
}
const SUPPORT_GEMINI_NATIVE_WEB_SEARCH_PROVIDERS = ['gemini', 'vertexai'] as const satisfies SystemProviderId[]
@@ -1566,10 +1571,18 @@ export function isGeminiProvider(provider: Provider): boolean {
return provider.type === 'gemini'
}
export function isVertexAiProvider(provider: Provider): boolean {
return provider.type === 'vertexai'
}
export function isAIGatewayProvider(provider: Provider): boolean {
return provider.type === 'ai-gateway'
}
export function isAwsBedrockProvider(provider: Provider): boolean {
return provider.type === 'aws-bedrock'
}
const NOT_SUPPORT_API_VERSION_PROVIDERS = ['github', 'copilot', 'perplexity'] as const satisfies SystemProviderId[]
export const isSupportAPIVersionProvider = (provider: Provider) => {


@@ -123,9 +123,9 @@ export function useAssistant(id: string) {
}
updateAssistantSettings({
-reasoning_effort: fallbackOption === 'off' ? undefined : fallbackOption,
-reasoning_effort_cache: fallbackOption === 'off' ? undefined : fallbackOption,
-qwenThinkMode: fallbackOption === 'off' ? undefined : true
+reasoning_effort: fallbackOption === 'none' ? undefined : fallbackOption,
+reasoning_effort_cache: fallbackOption === 'none' ? undefined : fallbackOption,
+qwenThinkMode: fallbackOption === 'none' ? undefined : true
})
} else {
// For supported options, stop updating the cache.


@@ -311,7 +311,7 @@ export const getHttpMessageLabel = (key: string): string => {
}
const reasoningEffortOptionsKeyMap: Record<ThinkingOption, string> = {
-off: 'assistants.settings.reasoning_effort.off',
+none: 'assistants.settings.reasoning_effort.off',
minimal: 'assistants.settings.reasoning_effort.minimal',
high: 'assistants.settings.reasoning_effort.high',
low: 'assistants.settings.reasoning_effort.low',


@@ -27,6 +27,9 @@
"null_id": "Agent ID is null."
}
},
"input": {
"placeholder": "Enter your message here, send with {{key}} - @ select path, / select command"
},
"list": {
"error": {
"failed": "Failed to list agents."


@@ -27,6 +27,9 @@
"null_id": "智能体 ID 为空。"
}
},
"input": {
"placeholder": "在这里输入消息,按 {{key}} 发送 - @ 选择路径, / 选择命令"
},
"list": {
"error": {
"failed": "获取智能体列表失败"
@@ -4478,7 +4481,7 @@
"confirm": "确认",
"forward": "前进",
"multiple": "多选",
-"noResult": "[to be translated]:No results found",
+"noResult": "未找到结果",
"page": "翻页",
"select": "选择",
"title": "快捷菜单"


@@ -27,6 +27,9 @@
"null_id": "代理程式 ID 為空。"
}
},
"input": {
"placeholder": "[to be translated]:Enter your message here, send with {{key}} - @ select path, / select command"
},
"list": {
"error": {
"failed": "無法列出代理程式。"
@@ -4478,7 +4481,7 @@
"confirm": "確認",
"forward": "前進",
"multiple": "多選",
-"noResult": "[to be translated]:No results found",
+"noResult": "未找到結果",
"page": "翻頁",
"select": "選擇",
"title": "快捷選單"


@@ -27,6 +27,9 @@
"null_id": "Agent ID ist leer."
}
},
"input": {
"placeholder": "[to be translated]:Enter your message here, send with {{key}} - @ select path, / select command"
},
"list": {
"error": {
"failed": "Agent-Liste abrufen fehlgeschlagen"
@@ -4478,7 +4481,7 @@
"confirm": "Bestätigen",
"forward": "Vorwärts",
"multiple": "Mehrfachauswahl",
-"noResult": "[to be translated]:No results found",
+"noResult": "Keine Ergebnisse gefunden",
"page": "Seite umblättern",
"select": "Auswählen",
"title": "Schnellmenü"


@@ -27,6 +27,9 @@
"null_id": "Το ID του πράκτορα είναι null."
}
},
"input": {
"placeholder": "[to be translated]:Enter your message here, send with {{key}} - @ select path, / select command"
},
"list": {
"error": {
"failed": "Αποτυχία καταχώρησης πρακτόρων."
@@ -4478,7 +4481,7 @@
"confirm": "Επιβεβαίωση",
"forward": "Μπρος",
"multiple": "Πολλαπλή επιλογή",
-"noResult": "[to be translated]:No results found",
+"noResult": "Δεν βρέθηκαν αποτελέσματα",
"page": "Σελίδα",
"select": "Επιλογή",
"title": "Γρήγορη Πρόσβαση"


@@ -27,6 +27,9 @@
"null_id": "El ID del agente es nulo."
}
},
"input": {
"placeholder": "[to be translated]:Enter your message here, send with {{key}} - @ select path, / select command"
},
"list": {
"error": {
"failed": "Error al listar agentes."
@@ -4478,7 +4481,7 @@
"confirm": "Confirmar",
"forward": "Adelante",
"multiple": "Selección múltiple",
-"noResult": "[to be translated]:No results found",
+"noResult": "No se encontraron resultados",
"page": "Página",
"select": "Seleccionar",
"title": "Menú de acceso rápido"


@@ -27,6 +27,9 @@
"null_id": "L'ID de l'agent est nul."
}
},
"input": {
"placeholder": "[to be translated]:Enter your message here, send with {{key}} - @ select path, / select command"
},
"list": {
"error": {
"failed": "Échec de la liste des agents."
@@ -4478,7 +4481,7 @@
"confirm": "Подтвердить",
"forward": "Вперед",
"multiple": "Множественный выбор",
-"noResult": "[to be translated]:No results found",
+"noResult": "Aucun résultat trouvé",
"page": "Перелистнуть страницу",
"select": "Выбрать",
"title": "Быстрое меню"


@@ -27,6 +27,9 @@
"null_id": "エージェント ID が null です。"
}
},
"input": {
"placeholder": "[to be translated]:Enter your message here, send with {{key}} - @ select path, / select command"
},
"list": {
"error": {
"failed": "エージェントの一覧取得に失敗しました。"
@@ -4478,7 +4481,7 @@
"confirm": "確認",
"forward": "進む",
"multiple": "複数選択",
-"noResult": "[to be translated]:No results found",
+"noResult": "結果が見つかりません",
"page": "ページ",
"select": "選択",
"title": "クイックメニュー"


@@ -27,6 +27,9 @@
"null_id": "O ID do agente é nulo."
}
},
"input": {
"placeholder": "[to be translated]:Enter your message here, send with {{key}} - @ select path, / select command"
},
"list": {
"error": {
"failed": "Falha ao listar agentes."
@@ -4478,7 +4481,7 @@
"confirm": "Confirmar",
"forward": "Avançar",
"multiple": "Múltipla Seleção",
-"noResult": "[to be translated]:No results found",
+"noResult": "Nenhum resultado encontrado",
"page": "Página",
"select": "Selecionar",
"title": "Menu de Atalho"


@@ -27,6 +27,9 @@
"null_id": "ID агента равен null."
}
},
"input": {
"placeholder": "[to be translated]:Enter your message here, send with {{key}} - @ select path, / select command"
},
"list": {
"error": {
"failed": "Не удалось получить список агентов."
@@ -4478,7 +4481,7 @@
"confirm": "Подтвердить",
"forward": "Вперед",
"multiple": "Множественный выбор",
-"noResult": "[to be translated]:No results found",
+"noResult": "Результаты не найдены",
"page": "Страница",
"select": "Выбрать",
"title": "Быстрое меню"


@@ -470,7 +470,7 @@ const AgentSessionInputbarInner: FC<InnerProps> = ({ assistant, agentId, session
)
const placeholderText = useMemo(
() =>
-t('chat.input.placeholder', {
+t('agent.input.placeholder', {
key: getSendMessageShortcutLabel(sendMessageShortcut)
}),
[sendMessageShortcut, t]


@@ -313,7 +313,7 @@ export const InputbarCore: FC<InputbarCoreProps> = ({
const isEnterPressed = event.key === 'Enter' && !event.nativeEvent.isComposing
if (isEnterPressed) {
-if (isSendMessageKeyPressed(event, sendMessageShortcut)) {
+if (isSendMessageKeyPressed(event, sendMessageShortcut) && !cannotSend) {
handleSendMessage()
event.preventDefault()
return
@@ -359,6 +359,7 @@ export const InputbarCore: FC<InputbarCoreProps> = ({
translate,
handleToggleExpanded,
sendMessageShortcut,
cannotSend,
handleSendMessage,
setText,
setTimeoutTimer,


@@ -36,7 +36,7 @@ const ThinkingButton: FC<Props> = ({ quickPanel, model, assistantId }): ReactEle
const { assistant, updateAssistantSettings } = useAssistant(assistantId)
const currentReasoningEffort = useMemo(() => {
-return assistant.settings?.reasoning_effort || 'off'
+return assistant.settings?.reasoning_effort || 'none'
}, [assistant.settings?.reasoning_effort])
// Determine which option types the current model supports
@@ -46,21 +46,21 @@ const ThinkingButton: FC<Props> = ({ quickPanel, model, assistantId }): ReactEle
const supportedOptions: ThinkingOption[] = useMemo(() => {
if (modelType === 'doubao') {
if (isDoubaoThinkingAutoModel(model)) {
-return ['off', 'auto', 'high']
+return ['none', 'auto', 'high']
}
-return ['off', 'high']
+return ['none', 'high']
}
return MODEL_SUPPORTED_OPTIONS[modelType]
}, [model, modelType])
const onThinkingChange = useCallback(
(option?: ThinkingOption) => {
-const isEnabled = option !== undefined && option !== 'off'
+const isEnabled = option !== undefined && option !== 'none'
// Then update the settings
if (!isEnabled) {
updateAssistantSettings({
-reasoning_effort: undefined,
-reasoning_effort_cache: undefined,
+reasoning_effort: option,
+reasoning_effort_cache: option,
qwenThinkMode: false
})
return
@@ -96,10 +96,10 @@ const ThinkingButton: FC<Props> = ({ quickPanel, model, assistantId }): ReactEle
}))
}, [currentReasoningEffort, supportedOptions, onThinkingChange])
-const isThinkingEnabled = currentReasoningEffort !== undefined && currentReasoningEffort !== 'off'
+const isThinkingEnabled = currentReasoningEffort !== undefined && currentReasoningEffort !== 'none'
const disableThinking = useCallback(() => {
-onThinkingChange('off')
+onThinkingChange('none')
}, [onThinkingChange])
const openQuickPanel = useCallback(() => {
@@ -116,7 +116,7 @@ const ThinkingButton: FC<Props> = ({ quickPanel, model, assistantId }): ReactEle
return
}
-if (isThinkingEnabled && supportedOptions.includes('off')) {
+if (isThinkingEnabled && supportedOptions.includes('none')) {
disableThinking()
return
}
@@ -146,13 +146,13 @@ const ThinkingButton: FC<Props> = ({ quickPanel, model, assistantId }): ReactEle
<Tooltip
placement="top"
title={
-isThinkingEnabled && supportedOptions.includes('off')
+isThinkingEnabled && supportedOptions.includes('none')
? t('common.close')
: t('assistants.settings.reasoning_effort.label')
}
mouseLeaveDelay={0}
arrow>
-<ActionIconButton onClick={handleOpenQuickPanel} active={currentReasoningEffort !== 'off'}>
+<ActionIconButton onClick={handleOpenQuickPanel} active={currentReasoningEffort !== 'none'}>
{ThinkingIcon(currentReasoningEffort)}
</ActionIconButton>
</Tooltip>
@@ -178,7 +178,7 @@ const ThinkingIcon = (option?: ThinkingOption) => {
case 'auto':
IconComponent = MdiLightbulbAutoOutline
break
-case 'off':
+case 'none':
IconComponent = MdiLightbulbOffOutline
break
default:


@@ -1,4 +1,4 @@
-import { isGeminiModel } from '@renderer/config/models'
+import { isAnthropicModel, isGeminiModel } from '@renderer/config/models'
import { isSupportUrlContextProvider } from '@renderer/config/providers'
import { defineTool, registerTool, TopicType } from '@renderer/pages/home/Inputbar/types'
import { getProviderByModel } from '@renderer/services/AssistantService'
@@ -10,9 +10,8 @@ const urlContextTool = defineTool({
label: (t) => t('chat.input.url_context'),
visibleInScopes: [TopicType.Chat],
condition: ({ model }) => {
-if (!isGeminiModel(model)) return false
const provider = getProviderByModel(model)
-return !!provider && isSupportUrlContextProvider(provider)
+return !!provider && isSupportUrlContextProvider(provider) && (isGeminiModel(model) || isAnthropicModel(model))
},
render: ({ assistant }) => <UrlContextButton assistantId={assistant.id} />
})


@@ -1,4 +1,3 @@
-import { cn } from '@renderer/utils'
import type { CollapseProps } from 'antd'
import { Card } from 'antd'
import { CheckCircle, Circle, Clock, ListTodo } from 'lucide-react'
@@ -11,23 +10,27 @@ const getStatusConfig = (status: TodoItem['status']) => {
switch (status) {
case 'completed':
return {
-color: 'success' as const,
-icon: <CheckCircle className="h-3 w-3" />
+color: 'var(--color-status-success)',
+opacity: 0.6,
+icon: <CheckCircle className="h-4 w-4" strokeWidth={2.5} />
}
case 'in_progress':
return {
-color: 'primary' as const,
-icon: <Clock className="h-3 w-3" />
+color: 'var(--color-primary)',
+opacity: 0.9,
+icon: <Clock className="h-4 w-4" strokeWidth={2.5} />
}
case 'pending':
return {
-color: 'default' as const,
-icon: <Circle className="h-3 w-3" />
+color: 'var(--color-border)',
+opacity: 0.4,
+icon: <Circle className="h-4 w-4" strokeWidth={2.5} />
}
default:
return {
-color: 'default' as const,
-icon: <Circle className="h-3 w-3" />
+color: 'var(--color-border)',
+opacity: 0.4,
+icon: <Circle className="h-4 w-4" strokeWidth={2.5} />
}
}
}
@@ -64,10 +67,8 @@ export function TodoWriteTool({
<div className="p-2">
<div className="flex items-center justify-center gap-3">
<div
-className={cn(
-'flex items-center justify-center rounded-full border bg-opacity-50 p-2',
-`bg-${statusConfig.color}`
-)}>
+className="flex items-center justify-center rounded-full border p-1"
+style={{ backgroundColor: statusConfig.color, opacity: statusConfig.opacity }}>
{statusConfig.icon}
</div>
<div className="min-w-0 flex-1">


@@ -1,7 +1,7 @@
import type { PermissionUpdate } from '@anthropic-ai/claude-agent-sdk'
import { loggerService } from '@logger'
import { useAppDispatch, useAppSelector } from '@renderer/store'
-import { selectPendingPermissionByToolName, toolPermissionsActions } from '@renderer/store/toolPermissions'
+import { selectPendingPermission, toolPermissionsActions } from '@renderer/store/toolPermissions'
import type { NormalToolResponse } from '@renderer/types'
import { Button } from 'antd'
import { ChevronDown, CirclePlay, CircleX } from 'lucide-react'
@@ -17,9 +17,7 @@ interface Props {
export function ToolPermissionRequestCard({ toolResponse }: Props) {
const { t } = useTranslation()
const dispatch = useAppDispatch()
-const request = useAppSelector((state) =>
-selectPendingPermissionByToolName(state.toolPermissions, toolResponse.tool.name)
-)
+const request = useAppSelector((state) => selectPendingPermission(state.toolPermissions, toolResponse.toolCallId))
const [now, setNow] = useState(() => Date.now())
const [showDetails, setShowDetails] = useState(false)


@@ -1,5 +1,6 @@
import Selector from '@renderer/components/Selector'
import {
getModelSupportedVerbosity,
isSupportedReasoningEffortOpenAIModel,
isSupportFlexServiceTierModel,
isSupportVerbosityModel
@@ -80,7 +81,8 @@ const OpenAISettingsGroup: FC<Props> = ({ model, providerId, SettingGroup, Setti
}
]
-const verbosityOptions = [
+const verbosityOptions = useMemo(() => {
+const allOptions = [
{
value: 'low',
label: t('settings.openai.verbosity.low')
@@ -94,6 +96,9 @@ const OpenAISettingsGroup: FC<Props> = ({ model, providerId, SettingGroup, Setti
label: t('settings.openai.verbosity.high')
}
]
const supportedVerbosityLevels = getModelSupportedVerbosity(model)
return allOptions.filter((option) => supportedVerbosityLevels.includes(option.value as any))
}, [model, t])
const serviceTierOptions = useMemo(() => {
let baseOptions: { value: ServiceTier; label: string }[]
@@ -155,6 +160,15 @@ const OpenAISettingsGroup: FC<Props> = ({ model, providerId, SettingGroup, Setti
}
}, [provider.id, serviceTierMode, serviceTierOptions, setServiceTierMode])
useEffect(() => {
if (verbosity && !verbosityOptions.some((option) => option.value === verbosity)) {
const supportedVerbosityLevels = getModelSupportedVerbosity(model)
// Default to the highest supported verbosity level
const defaultVerbosity = supportedVerbosityLevels[supportedVerbosityLevels.length - 1]
setVerbosity(defaultVerbosity)
}
}, [model, verbosity, verbosityOptions, setVerbosity])
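The effect above resets an unsupported verbosity to the highest level the model allows. A minimal sketch of that fallback rule, with names chosen for illustration (the real code works through `verbosityOptions` and `setVerbosity`):

```typescript
// If the current verbosity is not supported by the model, fall back to the
// highest supported level; levels are assumed ordered low < medium < high.
function resolveVerbosity(supported: string[], current?: string): string | undefined {
  if (current && !supported.includes(current)) {
    return supported[supported.length - 1]
  }
  return current
}
```

For gpt-5-pro, `resolveVerbosity(['high'], 'low')` yields `'high'`.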
if (!isOpenAIReasoning && !isSupportServiceTier && !isSupportVerbosity) {
return null
}


@@ -1,21 +1,13 @@
import { getAgentTypeAvatar } from '@renderer/config/agent'
import type { useUpdateAgent } from '@renderer/hooks/agents/useUpdateAgent'
import type { useUpdateSession } from '@renderer/hooks/agents/useUpdateSession'
import { getAgentTypeLabel } from '@renderer/i18n/label'
import type { GetAgentResponse, GetAgentSessionResponse } from '@renderer/types'
import { isAgentEntity } from '@renderer/types'
import { Avatar } from 'antd'
import type { FC } from 'react'
import { useTranslation } from 'react-i18next'
import { AccessibleDirsSetting } from './AccessibleDirsSetting'
import { AvatarSetting } from './AvatarSetting'
import { DescriptionSetting } from './DescriptionSetting'
import { ModelSetting } from './ModelSetting'
import { NameSetting } from './NameSetting'
-import { SettingsContainer, SettingsItem, SettingsTitle } from './shared'
-// const logger = loggerService.withContext('AgentEssentialSettings')
+import { SettingsContainer } from './shared'
type EssentialSettingsProps =
| {
@@ -30,26 +22,10 @@ type EssentialSettingsProps =
}
const EssentialSettings: FC<EssentialSettingsProps> = ({ agentBase, update, showModelSetting = true }) => {
const { t } = useTranslation()
if (!agentBase) return null
const isAgent = isAgentEntity(agentBase)
return (
<SettingsContainer>
{isAgent && (
<SettingsItem inline>
<SettingsTitle>{t('agent.type.label')}</SettingsTitle>
<div className="flex items-center gap-2">
<Avatar size={24} src={getAgentTypeAvatar(agentBase.type)} className="h-6 w-6 text-lg" />
<span>{(agentBase?.name ?? agentBase?.type) ? getAgentTypeLabel(agentBase.type) : ''}</span>
</div>
</SettingsItem>
)}
{isAgent && (
<AvatarSetting agent={agentBase} update={update as ReturnType<typeof useUpdateAgent>['updateAgent']} />
)}
<NameSetting base={agentBase} update={update} />
{showModelSetting && <ModelSetting base={agentBase} update={update} />}
<AccessibleDirsSetting base={agentBase} update={update} />


@@ -1,6 +1,8 @@
import { EmojiAvatarWithPicker } from '@renderer/components/Avatar/EmojiAvatarWithPicker'
import type { AgentBaseWithId, UpdateAgentBaseForm, UpdateAgentFunctionUnion } from '@renderer/types'
import { AgentConfigurationSchema, isAgentEntity, isAgentType } from '@renderer/types'
import { Input } from 'antd'
-import { useState } from 'react'
+import { useCallback, useState } from 'react'
import { useTranslation } from 'react-i18next'
import { SettingsItem, SettingsTitle } from './shared'
@@ -13,15 +15,49 @@ export interface NameSettingsProps {
export const NameSetting = ({ base, update }: NameSettingsProps) => {
const { t } = useTranslation()
const [name, setName] = useState<string | undefined>(base?.name?.trim())
const updateName = async (name: UpdateAgentBaseForm['name']) => {
if (!base) return
return update({ id: base.id, name: name?.trim() })
}
// Avatar logic
const isAgent = isAgentEntity(base)
const isDefault = isAgent ? isAgentType(base.configuration?.avatar) : false
const [emoji, setEmoji] = useState(isAgent && !isDefault ? (base.configuration?.avatar ?? '⭐️') : '⭐️')
const updateAvatar = useCallback(
(avatar: string) => {
if (!isAgent || !base) return
const parsedConfiguration = AgentConfigurationSchema.parse(base.configuration ?? {})
const payload = {
id: base.id,
configuration: {
...parsedConfiguration,
avatar
}
}
update(payload)
},
[base, update, isAgent]
)
if (!base) return null
return (
<SettingsItem inline>
<SettingsTitle>{t('common.name')}</SettingsTitle>
<div className="flex max-w-70 flex-1 items-center gap-1">
{isAgent && (
<EmojiAvatarWithPicker
emoji={emoji}
onPick={(emoji: string) => {
setEmoji(emoji)
if (isAgent && emoji === base?.configuration?.avatar) return
updateAvatar(emoji)
}}
/>
)}
<Input
placeholder={t('common.agent_one') + t('common.name')}
value={name}
@@ -31,8 +67,9 @@ export const NameSetting = ({ base, update }: NameSettingsProps) => {
updateName(name)
}
}}
-className="max-w-70 flex-1"
+className="flex-1"
/>
</div>
</SettingsItem>
)
}


@@ -9,6 +9,7 @@ import { DEFAULT_CONTEXTCOUNT, DEFAULT_TEMPERATURE, MAX_CONTEXT_COUNT } from '@r
import { isEmbeddingModel, isRerankModel } from '@renderer/config/models'
import { useTimer } from '@renderer/hooks/useTimer'
import { SettingRow } from '@renderer/pages/settings'
import { DEFAULT_ASSISTANT_SETTINGS } from '@renderer/services/AssistantService'
import type { Assistant, AssistantSettingCustomParameters, AssistantSettings, Model } from '@renderer/types'
import { modalConfirm } from '@renderer/utils'
import { Button, Col, Divider, Input, InputNumber, Row, Select, Slider, Switch, Tooltip } from 'antd'
@@ -31,7 +32,9 @@ const AssistantModelSettings: FC<Props> = ({ assistant, updateAssistant, updateA
const [enableMaxTokens, setEnableMaxTokens] = useState(assistant?.settings?.enableMaxTokens ?? false)
const [maxTokens, setMaxTokens] = useState(assistant?.settings?.maxTokens ?? 0)
const [streamOutput, setStreamOutput] = useState(assistant?.settings?.streamOutput)
-const [toolUseMode, setToolUseMode] = useState(assistant?.settings?.toolUseMode ?? 'prompt')
+const [toolUseMode, setToolUseMode] = useState<AssistantSettings['toolUseMode']>(
+assistant?.settings?.toolUseMode ?? 'function'
+)
const [defaultModel, setDefaultModel] = useState(assistant?.defaultModel)
const [topP, setTopP] = useState(assistant?.settings?.topP ?? 1)
const [enableTopP, setEnableTopP] = useState(assistant?.settings?.enableTopP ?? false)
@@ -158,28 +161,17 @@ const AssistantModelSettings: FC<Props> = ({ assistant, updateAssistant, updateA
}
const onReset = () => {
-setTemperature(DEFAULT_TEMPERATURE)
-setEnableTemperature(true)
-setContextCount(DEFAULT_CONTEXTCOUNT)
-setEnableMaxTokens(false)
-setMaxTokens(0)
-setStreamOutput(true)
-setTopP(1)
-setEnableTopP(false)
-setCustomParameters([])
-setToolUseMode('prompt')
-updateAssistantSettings({
-temperature: DEFAULT_TEMPERATURE,
-enableTemperature: true,
-contextCount: DEFAULT_CONTEXTCOUNT,
-enableMaxTokens: false,
-maxTokens: 0,
-streamOutput: true,
-topP: 1,
-enableTopP: false,
-customParameters: [],
-toolUseMode: 'prompt'
-})
+setTemperature(DEFAULT_ASSISTANT_SETTINGS.temperature)
+setEnableTemperature(DEFAULT_ASSISTANT_SETTINGS.enableTemperature ?? true)
+setContextCount(DEFAULT_ASSISTANT_SETTINGS.contextCount)
+setEnableMaxTokens(DEFAULT_ASSISTANT_SETTINGS.enableMaxTokens ?? false)
+setMaxTokens(DEFAULT_ASSISTANT_SETTINGS.maxTokens ?? 0)
+setStreamOutput(DEFAULT_ASSISTANT_SETTINGS.streamOutput)
+setTopP(DEFAULT_ASSISTANT_SETTINGS.topP)
+setEnableTopP(DEFAULT_ASSISTANT_SETTINGS.enableTopP ?? false)
+setCustomParameters(DEFAULT_ASSISTANT_SETTINGS.customParameters ?? [])
+setToolUseMode(DEFAULT_ASSISTANT_SETTINGS.toolUseMode)
+updateAssistantSettings(DEFAULT_ASSISTANT_SETTINGS)
}
const modelFilter = (model: Model) => !isEmbeddingModel(model) && !isRerankModel(model)


@@ -109,7 +109,6 @@ const InstallNpxUv: FC<Props> = ({ mini = false }) => {
<Container>
<Alert
type={isUvInstalled ? 'success' : 'warning'}
-banner
style={{ borderRadius: 'var(--list-item-border-radius)' }}
description={
<VStack>
@@ -140,7 +139,6 @@ const InstallNpxUv: FC<Props> = ({ mini = false }) => {
/>
<Alert
type={isBunInstalled ? 'success' : 'warning'}
-banner
style={{ borderRadius: 'var(--list-item-border-radius)' }}
description={
<VStack>


@@ -140,7 +140,7 @@ const MCPSettings: FC = () => {
<Route
path="mcp-install"
element={
-<SettingContainer theme={theme}>
+<SettingContainer style={{ backgroundColor: 'inherit' }}>
<InstallNpxUv />
</SettingContainer>
}


@@ -6,7 +6,7 @@ import AiProvider from '@renderer/aiCore'
import type { CompletionsParams } from '@renderer/aiCore/legacy/middleware/schemas'
import type { AiSdkMiddlewareConfig } from '@renderer/aiCore/middleware/AiSdkMiddlewareBuilder'
import { buildStreamTextParams } from '@renderer/aiCore/prepareParams'
-import { isDedicatedImageGenerationModel, isEmbeddingModel } from '@renderer/config/models'
+import { isDedicatedImageGenerationModel, isEmbeddingModel, isFunctionCallingModel } from '@renderer/config/models'
import { getStoreSetting } from '@renderer/hooks/useSettings'
import i18n from '@renderer/i18n'
import store from '@renderer/store'
@@ -18,6 +18,7 @@ import type { Message } from '@renderer/types/newMessage'
import type { SdkModel } from '@renderer/types/sdk'
import { removeSpecialCharactersForTopicName, uuid } from '@renderer/utils'
import { abortCompletion, readyToAbort } from '@renderer/utils/abortController'
import { isToolUseModeFunction } from '@renderer/utils/assistant'
import { isAbortError } from '@renderer/utils/error'
import { purifyMarkdownImages } from '@renderer/utils/markdown'
import { isPromptToolUse, isSupportedToolUse } from '@renderer/utils/mcp-tools'
@@ -126,12 +127,16 @@ export async function fetchChatCompletion({
requestOptions: options
})
// Safely fallback to prompt tool use when function calling is not supported by model.
const usePromptToolUse =
isPromptToolUse(assistant) || (isToolUseModeFunction(assistant) && !isFunctionCallingModel(assistant.model))
const middlewareConfig: AiSdkMiddlewareConfig = {
streamOutput: assistant.settings?.streamOutput ?? true,
onChunk: onChunkReceived,
model: assistant.model,
enableReasoning: capabilities.enableReasoning,
-isPromptToolUse: isPromptToolUse(assistant),
+isPromptToolUse: usePromptToolUse,
isSupportedToolUse: isSupportedToolUse(assistant),
isImageGenerationEndpoint: isDedicatedImageGenerationModel(assistant.model || getDefaultModel()),
webSearchPluginConfig: webSearchPluginConfig,
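The `usePromptToolUse` fallback above reduces to a single boolean rule: use prompt-based tool calls either when the assistant is configured for them, or when it asked for function calling but the model cannot do it. A sketch with illustrative parameter names (not the app's real API):

```typescript
// Prefer function calling only when the assistant asks for it AND the
// model actually supports it; otherwise fall back to prompt tool use.
function shouldUsePromptToolUse(
  assistantWantsPrompt: boolean,
  assistantWantsFunction: boolean,
  modelSupportsFunctionCalling: boolean
): boolean {
  return assistantWantsPrompt || (assistantWantsFunction && !modelSupportsFunctionCalling)
}
```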


@@ -36,9 +36,10 @@ export const DEFAULT_ASSISTANT_SETTINGS: AssistantSettings = {
streamOutput: true,
topP: 1,
enableTopP: false,
-toolUseMode: 'prompt',
+// It would gracefully fallback to prompt if not supported by model.
+toolUseMode: 'function',
customParameters: []
-}
+} as const
export function getDefaultAssistant(): Assistant {
return {
@@ -176,7 +177,7 @@ export const getAssistantSettings = (assistant: Assistant): AssistantSettings =>
enableMaxTokens: assistant?.settings?.enableMaxTokens ?? false,
maxTokens: getAssistantMaxTokens(),
streamOutput: assistant?.settings?.streamOutput ?? true,
-toolUseMode: assistant?.settings?.toolUseMode ?? 'prompt',
+toolUseMode: assistant?.settings?.toolUseMode ?? 'function',
defaultModel: assistant?.defaultModel ?? undefined,
reasoning_effort: assistant?.settings?.reasoning_effort ?? undefined,
customParameters: assistant?.settings?.customParameters ?? []


@@ -67,7 +67,7 @@ const persistedReducer = persistReducer(
{
key: 'cherry-studio',
storage,
-version: 174,
+version: 176,
blacklist: ['runtime', 'messages', 'messageBlocks', 'tabs', 'toolPermissions'],
migrate
},


@@ -2819,6 +2819,43 @@ const migrateConfig = {
logger.error('migrate 174 error', error as Error)
return state
}
},
'175': (state: RootState) => {
try {
state.assistants.assistants.forEach((assistant) => {
// @ts-ignore
if (assistant.settings?.reasoning_effort === 'off') {
// @ts-ignore
assistant.settings.reasoning_effort = 'none'
}
// @ts-ignore
if (assistant.settings?.reasoning_effort_cache === 'off') {
// @ts-ignore
assistant.settings.reasoning_effort_cache = 'none'
}
})
logger.info('migrate 175 success')
return state
} catch (error) {
logger.error('migrate 175 error', error as Error)
return state
}
},
'176': (state: RootState) => {
try {
state.llm.providers.forEach((provider) => {
if (provider.id === SystemProviderIds.qiniu) {
provider.anthropicApiHost = 'https://api.qnaigc.com'
}
if (provider.id === SystemProviderIds.longcat) {
provider.anthropicApiHost = 'https://api.longcat.chat/anthropic'
}
})
return state
} catch (error) {
logger.error('migrate 176 error', error as Error)
return state
}
}
}
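Migration '175' above is a straightforward value rename on persisted state. The sketch below reduces it to plain data; the field shape is assumed for illustration (the real code walks `state.assistants.assistants` in the persisted Redux store).

```typescript
// Rename the legacy 'off' reasoning-effort value to 'none', leaving all
// other values untouched.
type ReasoningSettings = { reasoning_effort?: string; reasoning_effort_cache?: string }

function migrateOffToNone(settings: ReasoningSettings): ReasoningSettings {
  const fix = (v?: string) => (v === 'off' ? 'none' : v)
  return {
    ...settings,
    reasoning_effort: fix(settings.reasoning_effort),
    reasoning_effort_cache: fix(settings.reasoning_effort_cache)
  }
}
```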


@@ -6,6 +6,7 @@ export type ToolPermissionRequestPayload = {
requestId: string
toolName: string
toolId: string
toolCallId: string
description?: string
requiresPermissions: boolean
input: Record<string, unknown>
@@ -82,12 +83,12 @@ export const selectActiveToolPermission = (state: ToolPermissionsState): ToolPer
return activeEntries[0]
}
- export const selectPendingPermissionByToolName = (
+ export const selectPendingPermission = (
state: ToolPermissionsState,
- toolName: string
+ toolCallId: string
): ToolPermissionEntry | undefined => {
const activeEntries = Object.values(state.requests)
- .filter((entry) => entry.toolName === toolName)
+ .filter((entry) => entry.toolCallId === toolCallId)
.filter(
(entry) => entry.status === 'pending' || entry.status === 'submitting-allow' || entry.status === 'submitting-deny'
)
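Per the commit message, tool call ids are now namespaced with the session id so that two sessions invoking the same tool cannot collide, and the selector matches on the unique `toolCallId` instead of `toolName`. A sketch of the idea (the `sessionId:callId` format and helper names are assumptions, not the app's actual code):

```typescript
// Assumed composition: prefixing with the session id makes the id globally unique.
function namespacedToolCallId(sessionId: string, toolCallId: string): string {
  return `${sessionId}:${toolCallId}`
}

type Entry = {
  toolCallId: string
  status: 'pending' | 'submitting-allow' | 'submitting-deny' | 'done'
}

// Mirrors selectPendingPermission: filter on toolCallId, then keep in-flight entries.
function selectPending(entries: Entry[], toolCallId: string): Entry | undefined {
  return entries
    .filter((e) => e.toolCallId === toolCallId)
    .find(
      (e) => e.status === 'pending' || e.status === 'submitting-allow' || e.status === 'submitting-deny'
    )
}
```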


@@ -83,7 +83,10 @@ const ThinkModelTypes = [
'o',
'openai_deep_research',
'gpt5',
'gpt5_1',
'gpt5_codex',
'gpt5_1_codex',
'gpt5pro',
'grok',
'grok4_fast',
'gemini',
@@ -100,7 +103,7 @@ const ThinkModelTypes = [
] as const
export type ReasoningEffortOption = NonNullable<OpenAI.ReasoningEffort> | 'auto'
- export type ThinkingOption = ReasoningEffortOption | 'off'
+ export type ThinkingOption = ReasoningEffortOption
export type ThinkingModelType = (typeof ThinkModelTypes)[number]
export type ThinkingOptionConfig = Record<ThinkingModelType, ThinkingOption[]>
export type ReasoningEffortConfig = Record<ThinkingModelType, ReasoningEffortOption[]>
@@ -111,6 +114,7 @@ export function isThinkModelType(type: string): type is ThinkingModelType {
}
export const EFFORT_RATIO: EffortRatio = {
none: 0.01,
minimal: 0.05,
low: 0.05,
medium: 0.5,
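An `EFFORT_RATIO` table like this is typically used to scale a model's maximum token budget into a thinking-token budget. A sketch under that assumption (the `thinkingBudget` formula and the `high` entry are illustrative; only the quoted ratios come from the source):

```typescript
// Ratios for none/minimal/low/medium are from the quoted hunk;
// 'high' is an assumed value since the hunk is cut off before it.
const EFFORT_RATIO: Record<string, number> = {
  none: 0.01,
  minimal: 0.05,
  low: 0.05,
  medium: 0.5,
  high: 0.8
}

// Hypothetical helper: derive a thinking-token budget from the effort level.
function thinkingBudget(maxTokens: number, effort: string): number {
  return Math.floor(maxTokens * (EFFORT_RATIO[effort] ?? 0))
}
```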


@@ -126,6 +126,10 @@ export type OpenAIExtraBody = {
source_lang: 'auto'
target_lang: string
}
// for gpt-5 series models verbosity control
text?: {
verbosity?: 'low' | 'medium' | 'high'
}
}
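The new optional `text.verbosity` field rides along in the extra request body for gpt-5 series models. A sketch of merging it into an outgoing request (the field shape is from the source; the wrapper function is hypothetical):

```typescript
// Field shape from the quoted hunk; everything else here is illustrative.
type OpenAIExtraBody = {
  text?: { verbosity?: 'low' | 'medium' | 'high' }
}

// Hypothetical helper: attach verbosity without disturbing existing body fields.
function withVerbosity(
  body: Record<string, unknown>,
  verbosity: 'low' | 'medium' | 'high'
): Record<string, unknown> & OpenAIExtraBody {
  return { ...body, text: { verbosity } }
}
```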
// image is for openrouter. audio is ignored for now
export type OpenAIModality = OpenAI.ChatCompletionModality | 'image'

yarn.lock

@@ -102,7 +102,19 @@ __metadata:
languageName: node
linkType: hard
"@ai-sdk/anthropic@npm:2.0.44":
"@ai-sdk/anthropic@npm:2.0.45":
version: 2.0.45
resolution: "@ai-sdk/anthropic@npm:2.0.45"
dependencies:
"@ai-sdk/provider": "npm:2.0.0"
"@ai-sdk/provider-utils": "npm:3.0.17"
peerDependencies:
zod: ^3.25.76 || ^4.1.8
checksum: 10c0/ef0e54f032e3b8324c278f3b25d9b388308204d753404c49fd880709a796c2343aee36d335c99f50e683edd39d5b8b6f42b2e9034e1725d8e0db514e2233d104
languageName: node
linkType: hard
"@ai-sdk/anthropic@npm:^2.0.44":
version: 2.0.44
resolution: "@ai-sdk/anthropic@npm:2.0.44"
dependencies:
@@ -179,42 +191,42 @@ __metadata:
languageName: node
linkType: hard
"@ai-sdk/google-vertex@npm:^3.0.62":
version: 3.0.62
resolution: "@ai-sdk/google-vertex@npm:3.0.62"
"@ai-sdk/google-vertex@npm:^3.0.68":
version: 3.0.68
resolution: "@ai-sdk/google-vertex@npm:3.0.68"
dependencies:
"@ai-sdk/anthropic": "npm:2.0.44"
"@ai-sdk/google": "npm:2.0.31"
"@ai-sdk/anthropic": "npm:2.0.45"
"@ai-sdk/google": "npm:2.0.36"
"@ai-sdk/provider": "npm:2.0.0"
"@ai-sdk/provider-utils": "npm:3.0.17"
google-auth-library: "npm:^9.15.0"
peerDependencies:
zod: ^3.25.76 || ^4.1.8
checksum: 10c0/673bb51e3e0cbe5235ad5e65379b1cb8f099dbc690ab8552e208553a9f1cc6026d2588e956e73468bc6d267066be276e7a9aba98e32e905809dfbeab4ac0e352
checksum: 10c0/6a3f4cb1e649313b46a0c349c717757071f8b012b0a28e59ab7a55fd35d9600f0043f0a4f57417c4cc49e0d3734e89a1e4fb248fc88795b5286c83395d3f617a
languageName: node
linkType: hard
"@ai-sdk/google@npm:2.0.31":
version: 2.0.31
resolution: "@ai-sdk/google@npm:2.0.31"
"@ai-sdk/google@npm:2.0.36":
version: 2.0.36
resolution: "@ai-sdk/google@npm:2.0.36"
dependencies:
"@ai-sdk/provider": "npm:2.0.0"
"@ai-sdk/provider-utils": "npm:3.0.17"
peerDependencies:
zod: ^3.25.76 || ^4.1.8
checksum: 10c0/d8f143f058fb62e6e67e30564ec92530d7389c22ad91b1e4bbe781c8570bf718cd417e44dcd4855e347e85c4174538a9a884eac666109e17f20d21467ab3e749
checksum: 10c0/2c6de5e1cf0703b6b932a3f313bf4bc9439897af39c805169ab04bba397185d99b2b1306f3b817f991ca41fdced0365b072ee39e76382c045930256bce47e0e4
languageName: node
linkType: hard
"@ai-sdk/google@patch:@ai-sdk/google@npm%3A2.0.31#~/.yarn/patches/@ai-sdk-google-npm-2.0.31-b0de047210.patch":
version: 2.0.31
resolution: "@ai-sdk/google@patch:@ai-sdk/google@npm%3A2.0.31#~/.yarn/patches/@ai-sdk-google-npm-2.0.31-b0de047210.patch::version=2.0.31&hash=9f3835"
"@ai-sdk/google@patch:@ai-sdk/google@npm%3A2.0.36#~/.yarn/patches/@ai-sdk-google-npm-2.0.36-6f3cc06026.patch":
version: 2.0.36
resolution: "@ai-sdk/google@patch:@ai-sdk/google@npm%3A2.0.36#~/.yarn/patches/@ai-sdk-google-npm-2.0.36-6f3cc06026.patch::version=2.0.36&hash=2da8c3"
dependencies:
"@ai-sdk/provider": "npm:2.0.0"
"@ai-sdk/provider-utils": "npm:3.0.17"
peerDependencies:
zod: ^3.25.76 || ^4.1.8
checksum: 10c0/dd37dfb7abf402caaae3edb2f1a8dab018fddad6ba3190376723e03a2a0c352329c8e41e60df3fb8436b717d9c2ee4b82dff091848f50d026f62565cbdb158f8
checksum: 10c0/ce99a497360377d2917cf3a48278eb6f4337623ce3738ba743cf048c8c2a7731ec4fc27605a50e461e716ed49b3690206ca8e4078f27cb7be162b684bfc2fc22
languageName: node
linkType: hard
@@ -1879,30 +1891,30 @@ __metadata:
languageName: node
linkType: hard
"@cherrystudio/ai-core@workspace:^1.0.0-alpha.18, @cherrystudio/ai-core@workspace:packages/aiCore":
"@cherrystudio/ai-core@workspace:^1.0.9, @cherrystudio/ai-core@workspace:packages/aiCore":
version: 0.0.0-use.local
resolution: "@cherrystudio/ai-core@workspace:packages/aiCore"
dependencies:
"@ai-sdk/anthropic": "npm:^2.0.43"
"@ai-sdk/azure": "npm:^2.0.66"
"@ai-sdk/deepseek": "npm:^1.0.27"
"@ai-sdk/google": "patch:@ai-sdk/google@npm%3A2.0.31#~/.yarn/patches/@ai-sdk-google-npm-2.0.31-b0de047210.patch"
"@ai-sdk/openai": "patch:@ai-sdk/openai@npm%3A2.0.64#~/.yarn/patches/@ai-sdk-openai-npm-2.0.64-48f99f5bf3.patch"
"@ai-sdk/openai-compatible": "npm:^1.0.26"
"@ai-sdk/provider": "npm:^2.0.0"
"@ai-sdk/provider-utils": "npm:^3.0.16"
"@ai-sdk/xai": "npm:^2.0.31"
"@cherrystudio/ai-sdk-provider": "workspace:*"
tsdown: "npm:^0.12.9"
typescript: "npm:^5.0.0"
vitest: "npm:^3.2.4"
zod: "npm:^4.1.5"
peerDependencies:
"@ai-sdk/google": ^2.0.36
"@ai-sdk/openai": ^2.0.64
"@cherrystudio/ai-sdk-provider": ^0.1.2
ai: ^5.0.26
languageName: unknown
linkType: soft
"@cherrystudio/ai-sdk-provider@workspace:*, @cherrystudio/ai-sdk-provider@workspace:packages/ai-sdk-provider":
"@cherrystudio/ai-sdk-provider@workspace:packages/ai-sdk-provider":
version: 0.0.0-use.local
resolution: "@cherrystudio/ai-sdk-provider@workspace:packages/ai-sdk-provider"
dependencies:
@@ -2140,9 +2152,9 @@ __metadata:
languageName: unknown
linkType: soft
"@cherrystudio/openai@npm:^6.5.0":
version: 6.5.0
resolution: "@cherrystudio/openai@npm:6.5.0"
"@cherrystudio/openai@npm:^6.9.0":
version: 6.9.0
resolution: "@cherrystudio/openai@npm:6.9.0"
peerDependencies:
ws: ^8.18.0
zod: ^3.25 || ^4.0
@@ -2153,7 +2165,7 @@ __metadata:
optional: true
bin:
openai: bin/cli
checksum: 10c0/0f6cafb97aec17037d5ddcccc88e4b4a9c8de77a989a35bab2394b682a1a69e8a9343e8ee5eb8107d5c495970dbf3567642f154c033f7afc3bf078078666a92e
checksum: 10c0/9c51ef33c5b9d08041a115e3d6a8158412a379998a0eae186923d5bdcc808b634c1fef4471a1d499bb8c624b04c075167bc90a1a60a805005c0657ecebbb58d0
languageName: node
linkType: hard
@@ -4546,38 +4558,38 @@ __metadata:
languageName: node
linkType: hard
"@libsql/client@npm:0.14.0, @libsql/client@npm:^0.14.0":
version: 0.14.0
resolution: "@libsql/client@npm:0.14.0"
"@libsql/client@npm:0.15.15":
version: 0.15.15
resolution: "@libsql/client@npm:0.15.15"
dependencies:
"@libsql/core": "npm:^0.14.0"
"@libsql/core": "npm:^0.15.14"
"@libsql/hrana-client": "npm:^0.7.0"
js-base64: "npm:^3.7.5"
libsql: "npm:^0.4.4"
libsql: "npm:^0.5.22"
promise-limit: "npm:^2.7.0"
checksum: 10c0/9c6bab468453df765f647422c772af3578f1e108b663a80b99063f47ed3542db26ae0fcdba2e153d72e6d5089c5caeba947a167a6c065b0191a0832621539335
checksum: 10c0/1ae67280ebe27903ff142b07e2a256c22ef5ada65185286a72823e8eae8d9d2602e0d72e423d3bd64ae57494791bfffff946aa0fc7c2378b55a227ff63f8df69
languageName: node
linkType: hard
"@libsql/core@npm:^0.14.0":
version: 0.14.0
resolution: "@libsql/core@npm:0.14.0"
"@libsql/core@npm:^0.15.14":
version: 0.15.15
resolution: "@libsql/core@npm:0.15.15"
dependencies:
js-base64: "npm:^3.7.5"
checksum: 10c0/327bb991cf191d5a9a9fc0cc1a17123f7ca88f222187a3bde845fbad8ceaeaa1f139882080e4b2969da57b83e576c52702572e2838d1743c6bff75f95e6f774a
checksum: 10c0/0a619689c9504f4239d9745882a128b81e2f6c0547352bbb0d36932261c053bbcbea4435a17f91abe61556bb791f2f1203b36c36b2d4b4f369953d7949bdc40e
languageName: node
linkType: hard
"@libsql/darwin-arm64@npm:0.4.7":
version: 0.4.7
resolution: "@libsql/darwin-arm64@npm:0.4.7"
"@libsql/darwin-arm64@npm:0.5.22":
version: 0.5.22
resolution: "@libsql/darwin-arm64@npm:0.5.22"
conditions: os=darwin & cpu=arm64
languageName: node
linkType: hard
"@libsql/darwin-x64@npm:0.4.7":
version: 0.4.7
resolution: "@libsql/darwin-x64@npm:0.4.7"
"@libsql/darwin-x64@npm:0.5.22":
version: 0.5.22
resolution: "@libsql/darwin-x64@npm:0.5.22"
conditions: os=darwin & cpu=x64
languageName: node
linkType: hard
@@ -4611,38 +4623,52 @@ __metadata:
languageName: node
linkType: hard
"@libsql/linux-arm64-gnu@npm:0.4.7":
version: 0.4.7
resolution: "@libsql/linux-arm64-gnu@npm:0.4.7"
"@libsql/linux-arm-gnueabihf@npm:0.5.22":
version: 0.5.22
resolution: "@libsql/linux-arm-gnueabihf@npm:0.5.22"
conditions: os=linux & cpu=arm
languageName: node
linkType: hard
"@libsql/linux-arm-musleabihf@npm:0.5.22":
version: 0.5.22
resolution: "@libsql/linux-arm-musleabihf@npm:0.5.22"
conditions: os=linux & cpu=arm
languageName: node
linkType: hard
"@libsql/linux-arm64-gnu@npm:0.5.22":
version: 0.5.22
resolution: "@libsql/linux-arm64-gnu@npm:0.5.22"
conditions: os=linux & cpu=arm64
languageName: node
linkType: hard
"@libsql/linux-arm64-musl@npm:0.4.7":
version: 0.4.7
resolution: "@libsql/linux-arm64-musl@npm:0.4.7"
"@libsql/linux-arm64-musl@npm:0.5.22":
version: 0.5.22
resolution: "@libsql/linux-arm64-musl@npm:0.5.22"
conditions: os=linux & cpu=arm64
languageName: node
linkType: hard
"@libsql/linux-x64-gnu@npm:0.4.7":
version: 0.4.7
resolution: "@libsql/linux-x64-gnu@npm:0.4.7"
"@libsql/linux-x64-gnu@npm:0.5.22":
version: 0.5.22
resolution: "@libsql/linux-x64-gnu@npm:0.5.22"
conditions: os=linux & cpu=x64
languageName: node
linkType: hard
"@libsql/linux-x64-musl@npm:0.4.7":
version: 0.4.7
resolution: "@libsql/linux-x64-musl@npm:0.4.7"
"@libsql/linux-x64-musl@npm:0.5.22":
version: 0.5.22
resolution: "@libsql/linux-x64-musl@npm:0.5.22"
conditions: os=linux & cpu=x64
languageName: node
linkType: hard
"@libsql/win32-x64-msvc@npm:0.4.7, @libsql/win32-x64-msvc@npm:^0.4.7":
version: 0.4.7
resolution: "@libsql/win32-x64-msvc@npm:0.4.7"
checksum: 10c0/2fcb8715b6f0571dec145eaaf3fd53c7c5aa5bf408fe1be9d84b10adc8a909bb6ee60b45e0d7052b0c1722c30ac212356a3f1adcdf7f57d5a59b48f36ca5bdf5
"@libsql/win32-x64-msvc@npm:0.5.22, @libsql/win32-x64-msvc@npm:^0.5.22":
version: 0.5.22
resolution: "@libsql/win32-x64-msvc@npm:0.5.22"
checksum: 10c0/1bb2730563c603c03a229faa352897685648659d85ba0872dda60cc02abc469fbd55539ffd8b86c81d00230d76292e5a4d2a763fe44c05694612ce6db6e929aa
conditions: os=win32 & cpu=x64
languageName: node
linkType: hard
@@ -5169,15 +5195,15 @@ __metadata:
languageName: node
linkType: hard
"@opeoginni/github-copilot-openai-compatible@npm:0.1.19":
version: 0.1.19
resolution: "@opeoginni/github-copilot-openai-compatible@npm:0.1.19"
"@opeoginni/github-copilot-openai-compatible@npm:0.1.21":
version: 0.1.21
resolution: "@opeoginni/github-copilot-openai-compatible@npm:0.1.21"
dependencies:
"@ai-sdk/openai": "npm:^2.0.42"
"@ai-sdk/openai-compatible": "npm:^1.0.19"
"@ai-sdk/provider": "npm:^2.1.0-beta.4"
"@ai-sdk/provider-utils": "npm:^3.0.10"
checksum: 10c0/dfb01832d7c704b2eb080fc09d31b07fc26e5ac4e648ce219dc0d80cf044ef3cae504427781ec2ce3c5a2459c9c81d043046a255642108d5b3de0f83f4a9f20a
checksum: 10c0/05b73d935dc7f24123330ade919698b486ac2a25a7d607c1d3789471f782ead4c803ce6ffd3d97b9ca3f1aadaf6b5c1ea52363c9d24b36894fcfc403fda9cef3
languageName: node
linkType: hard
@@ -9891,11 +9917,14 @@ __metadata:
"@agentic/searxng": "npm:^7.3.3"
"@agentic/tavily": "npm:^7.3.3"
"@ai-sdk/amazon-bedrock": "npm:^3.0.53"
"@ai-sdk/anthropic": "npm:^2.0.44"
"@ai-sdk/cerebras": "npm:^1.0.31"
"@ai-sdk/gateway": "npm:^2.0.9"
"@ai-sdk/google-vertex": "npm:^3.0.62"
"@ai-sdk/google": "patch:@ai-sdk/google@npm%3A2.0.36#~/.yarn/patches/@ai-sdk-google-npm-2.0.36-6f3cc06026.patch"
"@ai-sdk/google-vertex": "npm:^3.0.68"
"@ai-sdk/huggingface": "patch:@ai-sdk/huggingface@npm%3A0.0.8#~/.yarn/patches/@ai-sdk-huggingface-npm-0.0.8-d4d0aaac93.patch"
"@ai-sdk/mistral": "npm:^2.0.23"
"@ai-sdk/openai": "patch:@ai-sdk/openai@npm%3A2.0.64#~/.yarn/patches/@ai-sdk-openai-npm-2.0.64-48f99f5bf3.patch"
"@ai-sdk/perplexity": "npm:^2.0.17"
"@ant-design/v5-patch-for-react-19": "npm:^1.0.3"
"@anthropic-ai/claude-agent-sdk": "patch:@anthropic-ai/claude-agent-sdk@npm%3A0.1.30#~/.yarn/patches/@anthropic-ai-claude-agent-sdk-npm-0.1.30-b50a299674.patch"
@@ -9905,7 +9934,7 @@ __metadata:
"@aws-sdk/client-bedrock-runtime": "npm:^3.910.0"
"@aws-sdk/client-s3": "npm:^3.910.0"
"@biomejs/biome": "npm:2.2.4"
"@cherrystudio/ai-core": "workspace:^1.0.0-alpha.18"
"@cherrystudio/ai-core": "workspace:^1.0.9"
"@cherrystudio/embedjs": "npm:^0.1.31"
"@cherrystudio/embedjs-libsql": "npm:^0.1.31"
"@cherrystudio/embedjs-loader-csv": "npm:^0.1.31"
@@ -9919,7 +9948,7 @@ __metadata:
"@cherrystudio/embedjs-ollama": "npm:^0.1.31"
"@cherrystudio/embedjs-openai": "npm:^0.1.31"
"@cherrystudio/extension-table-plus": "workspace:^"
"@cherrystudio/openai": "npm:^6.5.0"
"@cherrystudio/openai": "npm:^6.9.0"
"@dnd-kit/core": "npm:^6.3.1"
"@dnd-kit/modifiers": "npm:^9.0.0"
"@dnd-kit/sortable": "npm:^10.0.0"
@@ -9938,8 +9967,8 @@ __metadata:
"@langchain/community": "npm:^1.0.0"
"@langchain/core": "patch:@langchain/core@npm%3A1.0.2#~/.yarn/patches/@langchain-core-npm-1.0.2-183ef83fe4.patch"
"@langchain/openai": "patch:@langchain/openai@npm%3A1.0.0#~/.yarn/patches/@langchain-openai-npm-1.0.0-474d0ad9d4.patch"
"@libsql/client": "npm:0.14.0"
"@libsql/win32-x64-msvc": "npm:^0.4.7"
"@libsql/client": "npm:0.15.15"
"@libsql/win32-x64-msvc": "npm:^0.5.22"
"@mistralai/mistralai": "npm:^1.7.5"
"@modelcontextprotocol/sdk": "npm:^1.17.5"
"@mozilla/readability": "npm:^0.6.0"
@@ -9952,7 +9981,7 @@ __metadata:
"@opentelemetry/sdk-trace-base": "npm:^2.0.0"
"@opentelemetry/sdk-trace-node": "npm:^2.0.0"
"@opentelemetry/sdk-trace-web": "npm:^2.0.0"
"@opeoginni/github-copilot-openai-compatible": "npm:0.1.19"
"@opeoginni/github-copilot-openai-compatible": "npm:0.1.21"
"@paymoapp/electron-shutdown-handler": "npm:^1.1.2"
"@playwright/test": "npm:^1.52.0"
"@radix-ui/react-context-menu": "npm:^2.2.16"
@@ -17252,17 +17281,19 @@ __metadata:
languageName: node
linkType: hard
"libsql@npm:0.4.7":
version: 0.4.7
resolution: "libsql@npm:0.4.7"
"libsql@npm:^0.5.22":
version: 0.5.22
resolution: "libsql@npm:0.5.22"
dependencies:
"@libsql/darwin-arm64": "npm:0.4.7"
"@libsql/darwin-x64": "npm:0.4.7"
"@libsql/linux-arm64-gnu": "npm:0.4.7"
"@libsql/linux-arm64-musl": "npm:0.4.7"
"@libsql/linux-x64-gnu": "npm:0.4.7"
"@libsql/linux-x64-musl": "npm:0.4.7"
"@libsql/win32-x64-msvc": "npm:0.4.7"
"@libsql/darwin-arm64": "npm:0.5.22"
"@libsql/darwin-x64": "npm:0.5.22"
"@libsql/linux-arm-gnueabihf": "npm:0.5.22"
"@libsql/linux-arm-musleabihf": "npm:0.5.22"
"@libsql/linux-arm64-gnu": "npm:0.5.22"
"@libsql/linux-arm64-musl": "npm:0.5.22"
"@libsql/linux-x64-gnu": "npm:0.5.22"
"@libsql/linux-x64-musl": "npm:0.5.22"
"@libsql/win32-x64-msvc": "npm:0.5.22"
"@neon-rs/load": "npm:^0.0.4"
detect-libc: "npm:2.0.2"
dependenciesMeta:
@@ -17270,6 +17301,10 @@ __metadata:
optional: true
"@libsql/darwin-x64":
optional: true
"@libsql/linux-arm-gnueabihf":
optional: true
"@libsql/linux-arm-musleabihf":
optional: true
"@libsql/linux-arm64-gnu":
optional: true
"@libsql/linux-arm64-musl":
@@ -17280,41 +17315,8 @@ __metadata:
optional: true
"@libsql/win32-x64-msvc":
optional: true
checksum: 10c0/351952440e6bad3477e5f1bb1b9d6570d16e403b894f4a13c5c7e183a1307b2fb04a2fa902728cb8594a259e1726c51c61b822d545bbc88319b126ad15468a87
conditions: (os=darwin | os=linux | os=win32) & (cpu=x64 | cpu=arm64 | cpu=wasm32)
languageName: node
linkType: hard
"libsql@patch:libsql@npm%3A0.4.7#~/.yarn/patches/libsql-npm-0.4.7-444e260fb1.patch":
version: 0.4.7
resolution: "libsql@patch:libsql@npm%3A0.4.7#~/.yarn/patches/libsql-npm-0.4.7-444e260fb1.patch::version=0.4.7&hash=972e11"
dependencies:
"@libsql/darwin-arm64": "npm:0.4.7"
"@libsql/darwin-x64": "npm:0.4.7"
"@libsql/linux-arm64-gnu": "npm:0.4.7"
"@libsql/linux-arm64-musl": "npm:0.4.7"
"@libsql/linux-x64-gnu": "npm:0.4.7"
"@libsql/linux-x64-musl": "npm:0.4.7"
"@libsql/win32-x64-msvc": "npm:0.4.7"
"@neon-rs/load": "npm:^0.0.4"
detect-libc: "npm:2.0.2"
dependenciesMeta:
"@libsql/darwin-arm64":
optional: true
"@libsql/darwin-x64":
optional: true
"@libsql/linux-arm64-gnu":
optional: true
"@libsql/linux-arm64-musl":
optional: true
"@libsql/linux-x64-gnu":
optional: true
"@libsql/linux-x64-musl":
optional: true
"@libsql/win32-x64-msvc":
optional: true
checksum: 10c0/6098770dc6c31ae0dbfe0821719d184d9bb353ac92553923096f6e3420d3786f240f0b3858f519af0aeada93beb4aa83cb9a9a1a6aa18d625511b484dcb53d07
conditions: (os=darwin | os=linux | os=win32) & (cpu=x64 | cpu=arm64 | cpu=wasm32)
checksum: 10c0/6c34f08fc7408ebee16708ba12e5def9d1b2a4fa166070c956a120133ba9be68ec532e2d0b76bdc7005ef9ef69bf70d2ba7208ed824c4288c2a3d881edd5eaf6
conditions: (os=darwin | os=linux | os=win32) & (cpu=x64 | cpu=arm64 | cpu=wasm32 | cpu=arm)
languageName: node
linkType: hard