Compare commits

...

30 Commits

Author SHA1 Message Date
beyondkmp
00754f3644 Merge branch 'main' into betterSqlite 2025-11-21 10:27:20 +08:00
SuYao
dcdd1bf852 refactor: replace renderToolContent function with ToolContent component for improved readability (#11300)
* refactor: replace renderToolContent function with ToolContent component for improved readability

* fix

* fix test
2025-11-21 09:55:46 +08:00
beyondkmp
36a9af3e6b Merge branch 'main' into betterSqlite 2025-11-21 09:16:36 +08:00
beyondkmp
a12b6bfeca feat: enable native language emoji search with CLDR data format (#11381)
* feat: add i18n support and local data to emoji picker

- Add emoji-picker-element-data package for offline-first emoji data
- Implement i18n translations for emoji picker UI (de, en, es, fr, ja, pt, ru, zh)
- Switch from CDN to local emoji data to improve performance and reliability
- Add locale mapping to match app language with emoji picker data
- Move emoji-picker-element import to EmojiPicker component for better encapsulation
- Use proper TypeScript types instead of 'any' for type safety

This improves the user experience by providing a localized emoji picker interface
and eliminating the dependency on an external CDN, ensuring the picker works offline.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* feat: enable native language emoji search with CLDR data format

Switch from emojibase to CLDR format for emoji-picker-element data to support full multi-language search functionality. Users can now search for emojis in their native language (e.g., German users can search "Herz" for ❤️, Spanish users can search "corazón"). Also improves type safety by using the LanguageVarious type for locale mappings.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-20 19:23:27 +08:00
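
A minimal sketch of the wiring this commit describes, assuming emoji-picker-element's documented `Picker` options; the locale map and data path here are illustrative, not the PR's exact code:

```ts
// Illustrative sketch: wire emoji-picker-element to local CLDR-format data
// so search works in the user's language. Paths and mapping are assumptions.
import { Picker } from 'emoji-picker-element'

// Assumed mapping from app languages to emoji-picker-element-data locales.
const localeMap: Record<string, string> = {
  'de-DE': 'de',
  'en-US': 'en',
  'es-ES': 'es',
  'fr-FR': 'fr',
  'ja-JP': 'ja',
  'pt-PT': 'pt',
  'ru-RU': 'ru',
  'zh-CN': 'zh'
}

export function createEmojiPicker(appLanguage: string): Picker {
  const locale = localeMap[appLanguage] ?? 'en'
  return new Picker({
    // CLDR data (rather than emojibase) carries localized annotations,
    // so a German user can find ❤️ by searching "Herz".
    dataSource: `/emoji-data/${locale}/cldr/data.json`, // bundled locally, no CDN
    locale
  })
}
```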
beyondkmp
67e032344b Merge branch 'main' into betterSqlite 2025-11-20 15:17:57 +08:00
beyondkmp
59a8f3c47d add rebuild 2025-11-20 15:15:02 +08:00
亢奋猫
0f1a487bb0 refactor: simplify agent creation form (#11369)
* refactor(AgentModal): simplify agent type handling and update default values

- Removed unused agent type options and related logic.
- Updated default agent name from 'Claude Code' to 'Agent'.
- Adjusted padding in button styles and textarea rows for better UI consistency.
- Cleaned up unnecessary imports and code comments for improved readability.

* refactor(AgentSettings): clean up and enhance name setting component

- Removed unused imports and commented-out code in AgentModal and EssentialSettings.
- Updated NameSetting to include an emoji avatar picker for enhanced user experience.
- Simplified the logic for updating the agent's name and avatar.
- Improved overall readability and maintainability of the code.
2025-11-20 10:42:49 +08:00
亢奋猫
2df8bb58df fix: remove light background from MCP NpxUv install alerts (#11372)
- Remove 'banner' prop from Alert components in InstallNpxUv
- Set SettingContainer background to 'inherit' in MCP settings
- Fixes the light background color issue in the NpxUv interface

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-20 10:41:41 +08:00
defi-failure
62976f6fe0 refactor: namespace tool call ids with session id to prevent conflicts (#11319) 2025-11-20 10:35:11 +08:00
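
The namespacing itself is a one-liner (it appears verbatim in the ClaudeStreamState diff further down); a condensed sketch, with the inverse helper added here purely for illustration:

```ts
// Namespace a raw SDK tool call id with the session id so two sessions that
// both emit e.g. "WebFetch_0" no longer collide.
export function buildNamespacedToolCallId(sessionId: string, rawToolCallId: string): string {
  return `${sessionId}:${rawToolCallId}`
}

// Hypothetical inverse, assuming session ids contain no ':'.
export function splitNamespacedToolCallId(id: string): { sessionId: string; rawToolCallId: string } {
  const sep = id.indexOf(':')
  return { sessionId: id.slice(0, sep), rawToolCallId: id.slice(sep + 1) }
}
```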
MyPrototypeWhat
77529b3cd3 chore: update ai-core release scripts and bump version to 1.0.7 (#11370)
* chore: update ai-core release scripts and bump version to 1.0.7

* chore: update ai-sdk-provider release script to include build step and enhance type exports in webSearchPlugin and providers

* chore: bump @cherrystudio/ai-core version to 1.0.8 and update dependencies in package.json and yarn.lock

* chore: bump @cherrystudio/ai-core version to 1.0.9 and @cherrystudio/ai-sdk-provider version to 0.1.2 in package.json and yarn.lock

---------

Co-authored-by: suyao <sy20010504@gmail.com>
2025-11-19 20:44:22 +08:00
SuYao
c8e9a10190 bump ai core version (#11363)
* bump ai core version

* chore

* chore: add patch for @ai-sdk/openai and update peer dependencies in aiCore

* chore: update installation instructions in README to include @ai-sdk/google and @ai-sdk/openai

* chore: bump @cherrystudio/ai-core version to 1.0.6 in package.json and yarn.lock

---------

Co-authored-by: MyPrototypeWhat <daoquqiexing@gmail.com>
2025-11-19 18:13:33 +08:00
scientia
0e011ff35f fix: fix api-host for vercel ai-gateway provider (#11321)
Co-authored-by: scientia <wangdenghui@xiaomi.com>
2025-11-19 17:11:17 +08:00
MyPrototypeWhat
40a64a7c92 feat(options): enhance provider key handling for cherryin in buildPro… (#11361)
feat(options): enhance provider key handling for cherryin in buildProviderOptions function
2025-11-19 16:25:29 +08:00
beyondkmp
fadb436c7d feat: integrate better-sqlite3 for database management
- Replaced @libsql/client with better-sqlite3 for improved database handling.
- Updated database interactions in BaseService, MigrationService, and various repository classes to utilize better-sqlite3 methods.
- Added better-sqlite3 and its types to package.json and yarn.lock.
- Adjusted database connection logic and query execution to align with better-sqlite3 API.
2025-11-19 15:40:26 +08:00
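
A condensed before/after of the driver swap, mirroring the BaseService and AgentService diffs below; the `schema` imports are stand-ins for the real module:

```ts
import Database from 'better-sqlite3'
import { eq } from 'drizzle-orm'
import { drizzle } from 'drizzle-orm/better-sqlite3'
import * as schema from './schema' // stand-in for the real schema module
import { agentsTable } from './schema'

export function deleteAgent(dbPath: string, id: string): boolean {
  // Before (libsql, async):
  //   const client = createClient({ url: `file:${dbPath}` })
  //   const result = await db.delete(agentsTable).where(eq(agentsTable.id, id))
  //   return result.rowsAffected > 0
  // After: better-sqlite3 is synchronous; run() returns { changes }.
  const client = new Database(dbPath)
  const db = drizzle(client, { schema })
  const result = db.delete(agentsTable).where(eq(agentsTable.id, id)).run()
  return result.changes > 0
}
```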
Phantom
dc9503ef8b feat: support gemini 3 (#11356)
* feat(reasoning): add support for gemini-3-pro-preview model

Update regex pattern to include gemini-3-pro-preview as a supported thinking model
Add tests for new gemini-3 model support and edge cases

* fix(reasoning): update gemini model regex to include stable versions

Add support for stable versions of gemini-3-flash and gemini-3-pro in the model regex pattern. Update tests to verify both preview and stable versions are correctly identified.

* feat(providers): add vertexai provider check function

Add isVertexAiProvider function to consistently check for vertexai provider type and use it in websearch model detection

* feat(websearch): update gemini search regex to include v3 models

Add support for gemini 3.x models in the search regex pattern, including preview versions

* feat(vision): add support for gemini-3 models and add tests

Add regex pattern for gemini-3 models in visionAllowedModels
Create comprehensive test suite for isVisionModel function

* refactor(vision): make vision-related model constants private

Remove unused isNotSupportedImageSizeModel function and change exports to const declarations for internal use only

* chore(deps): update @ai-sdk/google to v2.0.36 and related dependencies

update @ai-sdk/google dependency from v2.0.31 to v2.0.36 to include fixes for model path handling and tool support for newer Gemini models

* chore: remove outdated @ai-sdk-google patch file

* chore: remove outdated @ai-sdk/google patch dependency
2025-11-19 14:05:14 +08:00
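
An illustrative pattern in the spirit of this change; the real regexes live in the reasoning and vision model configs and may differ:

```ts
// Illustrative only: matches gemini-3 pro/flash in preview and stable forms.
const GEMINI3_THINKING_REGEX = /gemini-3-(?:pro|flash)(?:-preview)?/i

console.log(GEMINI3_THINKING_REGEX.test('gemini-3-pro-preview')) // true (preview)
console.log(GEMINI3_THINKING_REGEX.test('gemini-3-flash'))       // true (stable)
```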
beyondkmp
f2c8484c48 feat: enable local crash mini dump file (#11348)
* feat: enable local crash mini dump file

* update version
2025-11-18 18:27:57 +08:00
kangfenmao
a9c9224835 fix(migrate): update anthropicApiHost for qiniu and longcat providers in migration to version 176
- Added anthropicApiHost configuration for qiniu and longcat providers during state migration.
- Incremented version number in persistedReducer to 176.
- Ensured proper handling of reasoning_effort settings during migration.
2025-11-18 11:05:46 +08:00
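
A hypothetical shape for the version-176 migration, assuming redux-persist's `createMigrate` convention; only the provider ids and the `anthropicApiHost` field come from the commit, while the state shape and default hosts are placeholders:

```ts
// Placeholder hosts, NOT the real endpoints.
const DEFAULT_ANTHROPIC_API_HOSTS: Record<string, string> = {
  qiniu: 'https://example-qiniu-host/',
  longcat: 'https://example-longcat-host/'
}

const migrations = {
  176: (state: any) => ({
    ...state,
    llm: {
      ...state.llm,
      providers: state.llm.providers.map((p: any) =>
        p.id in DEFAULT_ANTHROPIC_API_HOSTS
          ? { ...p, anthropicApiHost: p.anthropicApiHost ?? DEFAULT_ANTHROPIC_API_HOSTS[p.id] }
          : p
      )
    }
  })
}

export default migrations
```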
caoli5288
43223fd1f5 feat(config): add anthropicApiHost for qiniu and longcat providers (#11335) 2025-11-18 10:10:59 +08:00
Phantom
4bac843b37 fix(InputbarCore): prevent message send when cannotSend is true (#11337)
Add cannotSend check to prevent message sending when conditions aren't met
2025-11-18 10:08:54 +08:00
Phantom
34723934f4 fix: use function as default tool use mode (#11338)
* refactor(assistant): change default tool use mode to function and use default settings

Simplify reset logic by using DEFAULT_ASSISTANT_SETTINGS object instead of hardcoded values

* fix(ApiService): safely fallback to prompt tool use for unsupported models

Add check for function calling model support before using tool use mode to prevent errors with unsupported models.
2025-11-17 23:28:43 +08:00
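
A sketch of the fallback described in the second bullet; `isFunctionCallingModel` is assumed to be the repo's capability check:

```ts
type ToolUseMode = 'function' | 'prompt'
interface Model { id: string; provider: string }

// Assumed capability check; the real one lives in the model config.
declare function isFunctionCallingModel(model: Model): boolean

export function resolveToolUseMode(requested: ToolUseMode, model: Model): ToolUseMode {
  // Fall back to prompt-based tool use for models without native
  // function calling, instead of erroring.
  return requested === 'function' && !isFunctionCallingModel(model) ? 'prompt' : requested
}
```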
defi-failure
096c36caf8 fix: improve todo tool status icon visibility and colors (#11323) 2025-11-17 14:01:27 +08:00
beyondkmp
139950e193 fix(i18n): add input placeholder translations for multiple languages (#11320)
feat(i18n): add input placeholder translations for multiple languages

- Introduced a new placeholder for the input field in various language files, providing guidance on message entry and command selection.
- Updated English, Chinese (Simplified and Traditional), German, Greek, Spanish, French, Japanese, Portuguese, and Russian translations to include the new input placeholder text.
- Adjusted the reference in the AgentSessionInputbar component to use the new translation key for consistency.
2025-11-17 11:51:04 +08:00
SuYao
31eec403f7 fix: url context and web search capability (#11306)
* fix: enhance support for interleaved thinking and model compatibility

* fix: type
2025-11-17 10:53:47 +08:00
槑囿脑袋
7fd4837a47 fix: mineru validate pdf error and 403 error (#11312)
* fix: validate pdf error

* fix: net fetch error

* fix: mineru 403 error

* chore: change comment to english

* fix: format
2025-11-16 16:02:15 +00:00
Carlton
90b0c8b4a6 fix: resolve "no such file" error when processing non-English filenames in open-mineru (#11315) 2025-11-16 22:10:43 +08:00
github-actions[bot]
556353e910 docs: Weekly Automated Update: Nov 16, 2025 (#11308)
feat(bot): Weekly automated script run

Co-authored-by: EurFelux <59059173+EurFelux@users.noreply.github.com>
2025-11-16 10:57:32 +08:00
Copilot
11fb730b4d fix: add verbosity parameter support for GPT-5 models across legacy and modern AI SDK (#11281)
* Initial plan

* feat: add verbosity parameter support for GPT-5 models in OpenAIAPIClient

Co-authored-by: DeJeune <67425183+DeJeune@users.noreply.github.com>

* fix: ensure gpt-5-pro always uses 'high' verbosity

Co-authored-by: DeJeune <67425183+DeJeune@users.noreply.github.com>

* refactor: move verbosity configuration to config/models as suggested

Co-authored-by: DeJeune <67425183+DeJeune@users.noreply.github.com>

* refactor: encapsulate verbosity logic in getVerbosity method

Co-authored-by: DeJeune <67425183+DeJeune@users.noreply.github.com>

* feat: add support for verbosity and reasoning options for GPT-5 Pro and GPT-5.1 models

* fix comment

* build: add @ai-sdk/google dependency

Add the @ai-sdk/google package to support Google AI SDK integration

* build: add @ai-sdk/anthropic dependency

* refactor(aiCore): update reasoning params handling for AI providers

- Add type imports for provider options
- Handle 'none' reasoning effort consistently across providers
- Improve type safety by using Pick with provider options
- Standardize disabled reasoning config for all providers

* fix: adjust none effort ratio from 0 to 0.01

Prevent potential division by zero errors by ensuring none effort ratio has a small positive value

* feat(reasoning): add support for GPT-5.1 series models

Handle 'none' reasoning effort for GPT-5.1 models and add model type check

* Update src/renderer/src/aiCore/utils/reasoning.ts

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: DeJeune <67425183+DeJeune@users.noreply.github.com>
Co-authored-by: suyao <sy20010504@gmail.com>
Co-authored-by: icarus <eurfelux@gmail.com>
2025-11-16 10:22:14 +08:00
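
A sketch of the verbosity selection described above; the 'high' pin for gpt-5-pro comes from the commit messages, while the `getVerbosity` name and the `textVerbosity` provider option are assumptions about the repo and @ai-sdk/openai:

```ts
type Verbosity = 'low' | 'medium' | 'high'

export function getVerbosity(modelId: string, requested?: Verbosity): Verbosity | undefined {
  if (modelId.includes('gpt-5-pro')) return 'high' // gpt-5-pro always uses 'high'
  return requested
}

// Usage (sketch, names assumed):
//   streamText({ model, providerOptions: { openai: { textVerbosity: getVerbosity(id) } } })
```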
Phantom
2511113b62 feat: support gpt-5.1 (#11294)
* build: update @cherrystudio/openai dependency from v6.5.0 to v6.9.0

* refactor(reasoning): replace 'off' with 'none' for reasoning effort option

Update reasoning effort option from 'off' to 'none' across multiple files for consistency
Add support for gpt5_1 model with reasoning effort options

* fix(openai): handle apply_patch_call and apply_patch_call_output in response conversion

Filter and properly handle apply_patch_call and apply_patch_call_output types in OpenAI response conversion. Ensure undefined/null values are handled appropriately and log warnings for missing required fields.

* feat(models): add gpt-5.1 model logo and configuration

* fix(providers): include cherryin in url context provider check

Add SystemProviderIds.cherryin to the list of providers that support URL context to ensure proper functionality

* feat(models): add logo images for gpt-5.1 model variants

* feat(model): add support for GPT-5.1 series models

- Add new model type check for GPT-5.1 series
- Update reasoning effort and verbosity checks to include GPT-5.1
- Add logging to provider options builder

* feat(models): add gpt5_1_codex model support

Add new model type 'gpt5_1_codex' to ThinkModelTypes and configure its reasoning effort levels
Update model type detection logic to handle gpt5_1_codex variant
2025-11-15 19:09:43 +08:00
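
Illustrative option tables in the spirit of the 'off' to 'none' rename and the new gpt5_1 model types; the real tables live in the model config and may differ:

```ts
type ReasoningEffort = 'none' | 'low' | 'medium' | 'high'

const REASONING_EFFORT_OPTIONS: Record<string, ReasoningEffort[]> = {
  gpt5_1: ['none', 'low', 'medium', 'high'], // 'none' replaces the old 'off'
  gpt5_1_codex: ['low', 'medium', 'high']    // codex variant, per the final bullet above
}

export const supportsEffort = (modelType: string, effort: ReasoningEffort): boolean =>
  REASONING_EFFORT_OPTIONS[modelType]?.includes(effort) ?? false
```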
beyondkmp
a29b2bb3d6 chore: update @opeoginni/github-copilot-openai-compatible to support gpt5.1 (#11299)
* chore: update @opeoginni/github-copilot-openai-compatible to version 0.1.21

- Updated package version in package.json and yarn.lock.
- Refactored OpenAIBaseClient to enhance getBaseURL method and improve header management for SDK instances.

* format
2025-11-15 19:07:16 +08:00
beyondkmp
d2be450906 fix: update gitcode update config url (#11298)
* fix: update gitcode update config url

* update version

---------

Co-authored-by: Payne Fu <payne@Paynes-MacBook-Pro.local>
2025-11-15 10:01:33 +08:00
92 changed files with 2094 additions and 724 deletions

View File

@@ -1,26 +0,0 @@
diff --git a/dist/index.js b/dist/index.js
index ff305b112779b718f21a636a27b1196125a332d9..cf32ff5086d4d9e56f8fe90c98724559083bafc3 100644
--- a/dist/index.js
+++ b/dist/index.js
@@ -471,7 +471,7 @@ function convertToGoogleGenerativeAIMessages(prompt, options) {
// src/get-model-path.ts
function getModelPath(modelId) {
- return modelId.includes("/") ? modelId : `models/${modelId}`;
+ return modelId.includes("models/") ? modelId : `models/${modelId}`;
}
// src/google-generative-ai-options.ts
diff --git a/dist/index.mjs b/dist/index.mjs
index 57659290f1cec74878a385626ad75b2a4d5cd3fc..d04e5927ec3725b6ffdb80868bfa1b5a48849537 100644
--- a/dist/index.mjs
+++ b/dist/index.mjs
@@ -477,7 +477,7 @@ function convertToGoogleGenerativeAIMessages(prompt, options) {
// src/get-model-path.ts
function getModelPath(modelId) {
- return modelId.includes("/") ? modelId : `models/${modelId}`;
+ return modelId.includes("models/") ? modelId : `models/${modelId}`;
}
// src/google-generative-ai-options.ts

View File

@@ -0,0 +1,152 @@
diff --git a/dist/index.js b/dist/index.js
index c2ef089c42e13a8ee4a833899a415564130e5d79..75efa7baafb0f019fb44dd50dec1641eee8879e7 100644
--- a/dist/index.js
+++ b/dist/index.js
@@ -471,7 +471,7 @@ function convertToGoogleGenerativeAIMessages(prompt, options) {
// src/get-model-path.ts
function getModelPath(modelId) {
- return modelId.includes("/") ? modelId : `models/${modelId}`;
+ return modelId.includes("models/") ? modelId : `models/${modelId}`;
}
// src/google-generative-ai-options.ts
diff --git a/dist/index.mjs b/dist/index.mjs
index d75c0cc13c41192408c1f3f2d29d76a7bffa6268..ada730b8cb97d9b7d4cb32883a1d1ff416404d9b 100644
--- a/dist/index.mjs
+++ b/dist/index.mjs
@@ -477,7 +477,7 @@ function convertToGoogleGenerativeAIMessages(prompt, options) {
// src/get-model-path.ts
function getModelPath(modelId) {
- return modelId.includes("/") ? modelId : `models/${modelId}`;
+ return modelId.includes("models/") ? modelId : `models/${modelId}`;
}
// src/google-generative-ai-options.ts
diff --git a/dist/internal/index.js b/dist/internal/index.js
index 277cac8dc734bea2fb4f3e9a225986b402b24f48..bb704cd79e602eb8b0cee1889e42497d59ccdb7a 100644
--- a/dist/internal/index.js
+++ b/dist/internal/index.js
@@ -432,7 +432,15 @@ function prepareTools({
var _a;
tools = (tools == null ? void 0 : tools.length) ? tools : void 0;
const toolWarnings = [];
- const isGemini2 = modelId.includes("gemini-2");
+ // These changes could be safely removed when @ai-sdk/google v3 released.
+ const isLatest = (
+ [
+ 'gemini-flash-latest',
+ 'gemini-flash-lite-latest',
+ 'gemini-pro-latest',
+ ]
+ ).some(id => id === modelId);
+ const isGemini2OrNewer = modelId.includes("gemini-2") || modelId.includes("gemini-3") || isLatest;
const supportsDynamicRetrieval = modelId.includes("gemini-1.5-flash") && !modelId.includes("-8b");
const supportsFileSearch = modelId.includes("gemini-2.5");
if (tools == null) {
@@ -458,7 +466,7 @@ function prepareTools({
providerDefinedTools.forEach((tool) => {
switch (tool.id) {
case "google.google_search":
- if (isGemini2) {
+ if (isGemini2OrNewer) {
googleTools2.push({ googleSearch: {} });
} else if (supportsDynamicRetrieval) {
googleTools2.push({
@@ -474,7 +482,7 @@ function prepareTools({
}
break;
case "google.url_context":
- if (isGemini2) {
+ if (isGemini2OrNewer) {
googleTools2.push({ urlContext: {} });
} else {
toolWarnings.push({
@@ -485,7 +493,7 @@ function prepareTools({
}
break;
case "google.code_execution":
- if (isGemini2) {
+ if (isGemini2OrNewer) {
googleTools2.push({ codeExecution: {} });
} else {
toolWarnings.push({
@@ -507,7 +515,7 @@ function prepareTools({
}
break;
case "google.vertex_rag_store":
- if (isGemini2) {
+ if (isGemini2OrNewer) {
googleTools2.push({
retrieval: {
vertex_rag_store: {
diff --git a/dist/internal/index.mjs b/dist/internal/index.mjs
index 03b7cc591be9b58bcc2e775a96740d9f98862a10..347d2c12e1cee79f0f8bb258f3844fb0522a6485 100644
--- a/dist/internal/index.mjs
+++ b/dist/internal/index.mjs
@@ -424,7 +424,15 @@ function prepareTools({
var _a;
tools = (tools == null ? void 0 : tools.length) ? tools : void 0;
const toolWarnings = [];
- const isGemini2 = modelId.includes("gemini-2");
+ // These changes could be safely removed when @ai-sdk/google v3 released.
+ const isLatest = (
+ [
+ 'gemini-flash-latest',
+ 'gemini-flash-lite-latest',
+ 'gemini-pro-latest',
+ ]
+ ).some(id => id === modelId);
+ const isGemini2OrNewer = modelId.includes("gemini-2") || modelId.includes("gemini-3") || isLatest;
const supportsDynamicRetrieval = modelId.includes("gemini-1.5-flash") && !modelId.includes("-8b");
const supportsFileSearch = modelId.includes("gemini-2.5");
if (tools == null) {
@@ -450,7 +458,7 @@ function prepareTools({
providerDefinedTools.forEach((tool) => {
switch (tool.id) {
case "google.google_search":
- if (isGemini2) {
+ if (isGemini2OrNewer) {
googleTools2.push({ googleSearch: {} });
} else if (supportsDynamicRetrieval) {
googleTools2.push({
@@ -466,7 +474,7 @@ function prepareTools({
}
break;
case "google.url_context":
- if (isGemini2) {
+ if (isGemini2OrNewer) {
googleTools2.push({ urlContext: {} });
} else {
toolWarnings.push({
@@ -477,7 +485,7 @@ function prepareTools({
}
break;
case "google.code_execution":
- if (isGemini2) {
+ if (isGemini2OrNewer) {
googleTools2.push({ codeExecution: {} });
} else {
toolWarnings.push({
@@ -499,7 +507,7 @@ function prepareTools({
}
break;
case "google.vertex_rag_store":
- if (isGemini2) {
+ if (isGemini2OrNewer) {
googleTools2.push({
retrieval: {
vertex_rag_store: {
@@ -1434,9 +1442,7 @@ var googleTools = {
vertexRagStore
};
export {
- GoogleGenerativeAILanguageModel,
getGroundingMetadataSchema,
- getUrlContextMetadataSchema,
- googleTools
+ getUrlContextMetadataSchema, GoogleGenerativeAILanguageModel, googleTools
};
//# sourceMappingURL=index.mjs.map
\ No newline at end of file

View File

@@ -1,6 +1,6 @@
{
"name": "CherryStudio",
"version": "1.6.5",
"version": "1.7.0-rc.1",
"private": true,
"description": "A powerful AI assistant for producer.",
"main": "./out/main/index.js",
@@ -74,9 +74,12 @@
"format:check": "biome format && biome lint",
"prepare": "git config blame.ignoreRevsFile .git-blame-ignore-revs && husky",
"claude": "dotenv -e .env -- claude",
"release:aicore:alpha": "yarn workspace @cherrystudio/ai-core version prerelease --immediate && yarn workspace @cherrystudio/ai-core npm publish --tag alpha --access public",
"release:aicore:beta": "yarn workspace @cherrystudio/ai-core version prerelease --immediate && yarn workspace @cherrystudio/ai-core npm publish --tag beta --access public",
"release:aicore": "yarn workspace @cherrystudio/ai-core version patch --immediate && yarn workspace @cherrystudio/ai-core npm publish --access public"
"release:aicore:alpha": "yarn workspace @cherrystudio/ai-core version prerelease --preid alpha --immediate && yarn workspace @cherrystudio/ai-core build && yarn workspace @cherrystudio/ai-core npm publish --tag alpha --access public",
"release:aicore:beta": "yarn workspace @cherrystudio/ai-core version prerelease --preid beta --immediate && yarn workspace @cherrystudio/ai-core build && yarn workspace @cherrystudio/ai-core npm publish --tag beta --access public",
"release:aicore": "yarn workspace @cherrystudio/ai-core version patch --immediate && yarn workspace @cherrystudio/ai-core build && yarn workspace @cherrystudio/ai-core npm publish --access public",
"release:ai-sdk-provider": "yarn workspace @cherrystudio/ai-sdk-provider version patch --immediate && yarn workspace @cherrystudio/ai-sdk-provider build && yarn workspace @cherrystudio/ai-sdk-provider npm publish --access public",
"rebuild": "electron-rebuild -f -w better-sqlite3",
"postinstall": "electron-builder install-app-deps"
},
"dependencies": {
"@anthropic-ai/claude-agent-sdk": "patch:@anthropic-ai/claude-agent-sdk@npm%3A0.1.30#~/.yarn/patches/@anthropic-ai-claude-agent-sdk-npm-0.1.30-b50a299674.patch",
@@ -85,6 +88,8 @@
"@napi-rs/system-ocr": "patch:@napi-rs/system-ocr@npm%3A1.0.2#~/.yarn/patches/@napi-rs-system-ocr-npm-1.0.2-59e7a78e8b.patch",
"@paymoapp/electron-shutdown-handler": "^1.1.2",
"@strongtz/win32-arm64-msvc": "^0.4.7",
"better-sqlite3": "12.4.1",
"emoji-picker-element-data": "^1",
"express": "^5.1.0",
"font-list": "^2.0.0",
"graceful-fs": "^4.2.11",
@@ -108,11 +113,14 @@
"@agentic/searxng": "^7.3.3",
"@agentic/tavily": "^7.3.3",
"@ai-sdk/amazon-bedrock": "^3.0.53",
"@ai-sdk/anthropic": "^2.0.44",
"@ai-sdk/cerebras": "^1.0.31",
"@ai-sdk/gateway": "^2.0.9",
"@ai-sdk/google-vertex": "^3.0.62",
"@ai-sdk/google": "patch:@ai-sdk/google@npm%3A2.0.36#~/.yarn/patches/@ai-sdk-google-npm-2.0.36-6f3cc06026.patch",
"@ai-sdk/google-vertex": "^3.0.68",
"@ai-sdk/huggingface": "patch:@ai-sdk/huggingface@npm%3A0.0.8#~/.yarn/patches/@ai-sdk-huggingface-npm-0.0.8-d4d0aaac93.patch",
"@ai-sdk/mistral": "^2.0.23",
"@ai-sdk/openai": "patch:@ai-sdk/openai@npm%3A2.0.64#~/.yarn/patches/@ai-sdk-openai-npm-2.0.64-48f99f5bf3.patch",
"@ai-sdk/perplexity": "^2.0.17",
"@ant-design/v5-patch-for-react-19": "^1.0.3",
"@anthropic-ai/sdk": "^0.41.0",
@@ -121,7 +129,7 @@
"@aws-sdk/client-bedrock-runtime": "^3.910.0",
"@aws-sdk/client-s3": "^3.910.0",
"@biomejs/biome": "2.2.4",
"@cherrystudio/ai-core": "workspace:^1.0.0-alpha.18",
"@cherrystudio/ai-core": "workspace:^1.0.9",
"@cherrystudio/embedjs": "^0.1.31",
"@cherrystudio/embedjs-libsql": "^0.1.31",
"@cherrystudio/embedjs-loader-csv": "^0.1.31",
@@ -135,7 +143,7 @@
"@cherrystudio/embedjs-ollama": "^0.1.31",
"@cherrystudio/embedjs-openai": "^0.1.31",
"@cherrystudio/extension-table-plus": "workspace:^",
"@cherrystudio/openai": "^6.5.0",
"@cherrystudio/openai": "^6.9.0",
"@dnd-kit/core": "^6.3.1",
"@dnd-kit/modifiers": "^9.0.0",
"@dnd-kit/sortable": "^10.0.0",
@@ -165,7 +173,7 @@
"@opentelemetry/sdk-trace-base": "^2.0.0",
"@opentelemetry/sdk-trace-node": "^2.0.0",
"@opentelemetry/sdk-trace-web": "^2.0.0",
"@opeoginni/github-copilot-openai-compatible": "0.1.19",
"@opeoginni/github-copilot-openai-compatible": "0.1.21",
"@playwright/test": "^1.52.0",
"@radix-ui/react-context-menu": "^2.2.16",
"@reduxjs/toolkit": "^2.2.5",
@@ -196,6 +204,7 @@
"@tiptap/y-tiptap": "^3.0.0",
"@truto/turndown-plugin-gfm": "^1.0.2",
"@tryfabric/martian": "^1.2.4",
"@types/better-sqlite3": "^7.6.12",
"@types/cli-progress": "^3",
"@types/content-type": "^1.1.9",
"@types/cors": "^2.8.19",
@@ -408,7 +417,7 @@
"@langchain/openai@npm:>=0.2.0 <0.7.0": "patch:@langchain/openai@npm%3A1.0.0#~/.yarn/patches/@langchain-openai-npm-1.0.0-474d0ad9d4.patch",
"@ai-sdk/openai@npm:2.0.64": "patch:@ai-sdk/openai@npm%3A2.0.64#~/.yarn/patches/@ai-sdk-openai-npm-2.0.64-48f99f5bf3.patch",
"@ai-sdk/openai@npm:^2.0.42": "patch:@ai-sdk/openai@npm%3A2.0.64#~/.yarn/patches/@ai-sdk-openai-npm-2.0.64-48f99f5bf3.patch",
"@ai-sdk/google@npm:2.0.31": "patch:@ai-sdk/google@npm%3A2.0.31#~/.yarn/patches/@ai-sdk-google-npm-2.0.31-b0de047210.patch"
"@ai-sdk/google@npm:2.0.36": "patch:@ai-sdk/google@npm%3A2.0.36#~/.yarn/patches/@ai-sdk-google-npm-2.0.36-6f3cc06026.patch"
},
"packageManager": "yarn@4.9.1",
"lint-staged": {

View File

@@ -1,6 +1,6 @@
{
"name": "@cherrystudio/ai-sdk-provider",
"version": "0.1.0",
"version": "0.1.2",
"description": "Cherry Studio AI SDK provider bundle with CherryIN routing.",
"keywords": [
"ai-sdk",

View File

@@ -71,7 +71,7 @@ Cherry Studio AI Core is a unified AI Provider interface based on the Vercel AI SDK
## Installation
```bash
npm install @cherrystudio/ai-core ai
npm install @cherrystudio/ai-core ai @ai-sdk/google @ai-sdk/openai
```
### React Native

View File

@@ -1,6 +1,6 @@
{
"name": "@cherrystudio/ai-core",
"version": "1.0.1",
"version": "1.0.9",
"description": "Cherry Studio AI Core - Unified AI Provider Interface Based on Vercel AI SDK",
"main": "dist/index.js",
"module": "dist/index.mjs",
@@ -33,19 +33,19 @@
},
"homepage": "https://github.com/CherryHQ/cherry-studio#readme",
"peerDependencies": {
"@ai-sdk/google": "^2.0.36",
"@ai-sdk/openai": "^2.0.64",
"@cherrystudio/ai-sdk-provider": "^0.1.2",
"ai": "^5.0.26"
},
"dependencies": {
"@ai-sdk/anthropic": "^2.0.43",
"@ai-sdk/azure": "^2.0.66",
"@ai-sdk/deepseek": "^1.0.27",
"@ai-sdk/google": "patch:@ai-sdk/google@npm%3A2.0.31#~/.yarn/patches/@ai-sdk-google-npm-2.0.31-b0de047210.patch",
"@ai-sdk/openai": "patch:@ai-sdk/openai@npm%3A2.0.64#~/.yarn/patches/@ai-sdk-openai-npm-2.0.64-48f99f5bf3.patch",
"@ai-sdk/openai-compatible": "^1.0.26",
"@ai-sdk/provider": "^2.0.0",
"@ai-sdk/provider-utils": "^3.0.16",
"@ai-sdk/xai": "^2.0.31",
"@cherrystudio/ai-sdk-provider": "workspace:*",
"zod": "^4.1.5"
},
"devDependencies": {

View File

@@ -4,12 +4,7 @@
*/
export const BUILT_IN_PLUGIN_PREFIX = 'built-in:'
export { googleToolsPlugin } from './googleToolsPlugin'
export { createLoggingPlugin } from './logging'
export { createPromptToolUsePlugin } from './toolUsePlugin/promptToolUsePlugin'
export type {
PromptToolUseConfig,
ToolUseRequestContext,
ToolUseResult
} from './toolUsePlugin/type'
export { webSearchPlugin, type WebSearchPluginConfig } from './webSearchPlugin'
export * from './googleToolsPlugin'
export * from './toolUsePlugin/promptToolUsePlugin'
export * from './toolUsePlugin/type'
export * from './webSearchPlugin'

View File

@@ -32,7 +32,7 @@ export const webSearchPlugin = (config: WebSearchPluginConfig = DEFAULT_WEB_SEAR
})
// Export type definitions for developer use
export type { WebSearchPluginConfig, WebSearchToolOutputSchema } from './helper'
export * from './helper'
// Default export
export default webSearchPlugin

View File

@@ -44,7 +44,7 @@ export {
// ==================== Base data and types ====================
// Base provider data source
export { baseProviderIds, baseProviders } from './schemas'
export { baseProviderIds, baseProviders, isBaseProvider } from './schemas'
// Type definitions and schemas
export type {

View File

@@ -7,7 +7,6 @@ import { createAzure } from '@ai-sdk/azure'
import { type AzureOpenAIProviderSettings } from '@ai-sdk/azure'
import { createDeepSeek } from '@ai-sdk/deepseek'
import { createGoogleGenerativeAI } from '@ai-sdk/google'
import { createHuggingFace } from '@ai-sdk/huggingface'
import { createOpenAI, type OpenAIProviderSettings } from '@ai-sdk/openai'
import { createOpenAICompatible } from '@ai-sdk/openai-compatible'
import type { LanguageModelV2 } from '@ai-sdk/provider'
@@ -33,8 +32,7 @@ export const baseProviderIds = [
'deepseek',
'openrouter',
'cherryin',
'cherryin-chat',
'huggingface'
'cherryin-chat'
] as const
/**
@@ -158,12 +156,6 @@ export const baseProviders = [
})
},
supportsImageGeneration: true
},
{
id: 'huggingface',
name: 'HuggingFace',
creator: createHuggingFace,
supportsImageGeneration: true
}
] as const satisfies BaseProvider[]

View File

@@ -41,6 +41,7 @@ export enum IpcChannel {
App_SetFullScreen = 'app:set-full-screen',
App_IsFullScreen = 'app:is-full-screen',
App_GetSystemFonts = 'app:get-system-fonts',
APP_CrashRenderProcess = 'app:crash-render-process',
App_MacIsProcessTrusted = 'app:mac-is-process-trusted',
App_MacRequestProcessTrust = 'app:mac-request-process-trust',

View File

@@ -199,7 +199,7 @@ export enum FeedUrl {
export enum UpdateConfigUrl {
GITHUB = 'https://raw.githubusercontent.com/CherryHQ/cherry-studio/refs/heads/x-files/app-upgrade-config/app-upgrade-config.json',
GITCODE = 'https://raw.gitcode.com/CherryHQ/cherry-studio/raw/x-files/app-upgrade-config/app-upgrade-config.json'
GITCODE = 'https://raw.gitcode.com/CherryHQ/cherry-studio/raw/x-files%2Fapp-upgrade-config/app-upgrade-config.json'
}
export enum UpgradeChannel {

View File

@@ -104,12 +104,6 @@ const router = express
logger.warn('No models available from providers', { filter })
}
logger.info('Models response ready', {
filter,
total: response.total,
modelIds: response.data.map((m) => m.id)
})
return res.json(response satisfies ApiModelsResponse)
} catch (error: any) {
logger.error('Error fetching models', { error })

View File

@@ -32,7 +32,7 @@ export class ModelsService {
for (const model of models) {
const provider = providers.find((p) => p.id === model.provider)
logger.debug(`Processing model ${model.id}`)
// logger.debug(`Processing model ${model.id}`)
if (!provider) {
logger.debug(`Skipping model ${model.id} . Reason: Provider not found.`)
continue

View File

@@ -8,7 +8,7 @@ import '@main/config'
import { loggerService } from '@logger'
import { electronApp, optimizer } from '@electron-toolkit/utils'
import { replaceDevtoolsFont } from '@main/utils/windowUtil'
import { app } from 'electron'
import { app, crashReporter } from 'electron'
import installExtension, { REACT_DEVELOPER_TOOLS, REDUX_DEVTOOLS } from 'electron-devtools-installer'
import { isDev, isLinux, isWin } from './constant'
@@ -37,6 +37,14 @@ import { initWebviewHotkeys } from './services/WebviewService'
const logger = loggerService.withContext('MainEntry')
// enable local crash reports
crashReporter.start({
companyName: 'CherryHQ',
productName: 'CherryStudio',
submitURL: '',
uploadToServer: false
})
/**
* Disable hardware acceleration if setting is enabled
*/

View File

@@ -1038,4 +1038,8 @@ export function registerIpc(mainWindow: BrowserWindow, app: Electron.App) {
ipcMain.handle(IpcChannel.WebSocket_Status, WebSocketService.getStatus)
ipcMain.handle(IpcChannel.WebSocket_SendFile, WebSocketService.sendFile)
ipcMain.handle(IpcChannel.WebSocket_GetAllCandidates, WebSocketService.getAllCandidates)
ipcMain.handle(IpcChannel.APP_CrashRenderProcess, () => {
mainWindow.webContents.forcefullyCrashRenderer()
})
}

View File

@@ -21,6 +21,7 @@ type ApiResponse<T> = {
type BatchUploadResponse = {
batch_id: string
file_urls: string[]
headers?: Record<string, string>[]
}
type ExtractProgress = {
@@ -55,7 +56,7 @@ type QuotaResponse = {
export default class MineruPreprocessProvider extends BasePreprocessProvider {
constructor(provider: PreprocessProvider, userId?: string) {
super(provider, userId)
// todo免费期结束后删除
// TODO: remove after free period ends
this.provider.apiKey = this.provider.apiKey || import.meta.env.MAIN_VITE_MINERU_API_KEY
}
@@ -68,21 +69,21 @@ export default class MineruPreprocessProvider extends BasePreprocessProvider {
logger.info(`MinerU preprocess processing started: ${filePath}`)
await this.validateFile(filePath)
// 1. 获取上传URL并上传文件
// 1. Get upload URL and upload file
const batchId = await this.uploadFile(file)
logger.info(`MinerU file upload completed: batch_id=${batchId}`)
// 2. 等待处理完成并获取结果
// 2. Wait for completion and fetch results
const extractResult = await this.waitForCompletion(sourceId, batchId, file.origin_name)
logger.info(`MinerU processing completed for batch: ${batchId}`)
// 3. 下载并解压文件
// 3. Download and extract output
const { path: outputPath } = await this.downloadAndExtractFile(extractResult.full_zip_url!, file)
// 4. check quota
const quota = await this.checkQuota()
// 5. 创建处理后的文件信息
// 5. Create processed file metadata
return {
processedFile: this.createProcessedFileInfo(file, outputPath),
quota
@@ -115,23 +116,48 @@ export default class MineruPreprocessProvider extends BasePreprocessProvider {
}
private async validateFile(filePath: string): Promise<void> {
// Phase 1: check file size (without loading into memory)
logger.info(`Validating PDF file: ${filePath}`)
const stats = await fs.promises.stat(filePath)
const fileSizeBytes = stats.size
// Ensure file size is under 200MB
if (fileSizeBytes >= 200 * 1024 * 1024) {
const fileSizeMB = Math.round(fileSizeBytes / (1024 * 1024))
throw new Error(`PDF file size (${fileSizeMB}MB) exceeds the limit of 200MB`)
}
// Phase 2: check page count (requires reading file with error handling)
const pdfBuffer = await fs.promises.readFile(filePath)
const doc = await this.readPdf(pdfBuffer)
try {
const doc = await this.readPdf(pdfBuffer)
// 文件页数小于600页
if (doc.numPages >= 600) {
throw new Error(`PDF page count (${doc.numPages}) exceeds the limit of 600 pages`)
}
// 文件大小小于200MB
if (pdfBuffer.length >= 200 * 1024 * 1024) {
const fileSizeMB = Math.round(pdfBuffer.length / (1024 * 1024))
throw new Error(`PDF file size (${fileSizeMB}MB) exceeds the limit of 200MB`)
// Ensure page count is under 600 pages
if (doc.numPages >= 600) {
throw new Error(`PDF page count (${doc.numPages}) exceeds the limit of 600 pages`)
}
logger.info(`PDF validation passed: ${doc.numPages} pages, ${Math.round(fileSizeBytes / (1024 * 1024))}MB`)
} catch (error: any) {
// If the page limit is exceeded, rethrow immediately
if (error.message.includes('exceeds the limit')) {
throw error
}
// If PDF parsing fails, log a detailed warning but continue processing
logger.warn(
`Failed to parse PDF structure (file may be corrupted or use non-standard format). ` +
`Skipping page count validation. Will attempt to process with MinerU API. ` +
`Error details: ${error.message}. ` +
`Suggestion: If processing fails, try repairing the PDF using tools like Adobe Acrobat or online PDF repair services.`
)
// Do not throw; continue processing
}
}
private createProcessedFileInfo(file: FileMetadata, outputPath: string): FileMetadata {
// 查找解压后的主要文件
// Locate the main extracted file
let finalPath = ''
let finalName = file.origin_name.replace('.pdf', '.md')
@@ -143,14 +169,14 @@ export default class MineruPreprocessProvider extends BasePreprocessProvider {
const originalMdPath = path.join(outputPath, mdFile)
const newMdPath = path.join(outputPath, finalName)
// 重命名文件为原始文件名
// Rename the file to match the original name
try {
fs.renameSync(originalMdPath, newMdPath)
finalPath = newMdPath
logger.info(`Renamed markdown file from ${mdFile} to ${finalName}`)
} catch (renameError) {
logger.warn(`Failed to rename file ${mdFile} to ${finalName}: ${renameError}`)
// 如果重命名失败,使用原文件
// If renaming fails, fall back to the original file
finalPath = originalMdPath
finalName = mdFile
}
@@ -178,7 +204,7 @@ export default class MineruPreprocessProvider extends BasePreprocessProvider {
logger.info(`Downloading MinerU result to: ${zipPath}`)
try {
// 下载ZIP文件
// Download the ZIP file
const response = await net.fetch(zipUrl, { method: 'GET' })
if (!response.ok) {
throw new Error(`HTTP ${response.status}: ${response.statusText}`)
@@ -187,17 +213,17 @@ export default class MineruPreprocessProvider extends BasePreprocessProvider {
fs.writeFileSync(zipPath, Buffer.from(arrayBuffer))
logger.info(`Downloaded ZIP file: ${zipPath}`)
// 确保提取目录存在
// Ensure the extraction directory exists
if (!fs.existsSync(extractPath)) {
fs.mkdirSync(extractPath, { recursive: true })
}
// 解压文件
// Extract the ZIP contents
const zip = new AdmZip(zipPath)
zip.extractAllTo(extractPath, true)
logger.info(`Extracted files to: ${extractPath}`)
// 删除临时ZIP文件
// Remove the temporary ZIP file
fs.unlinkSync(zipPath)
return { path: extractPath }
@@ -209,11 +235,11 @@ export default class MineruPreprocessProvider extends BasePreprocessProvider {
private async uploadFile(file: FileMetadata): Promise<string> {
try {
// 步骤1: 获取上传URL
const { batchId, fileUrls } = await this.getBatchUploadUrls(file)
// 步骤2: 上传文件到获取的URL
// Step 1: obtain the upload URL
const { batchId, fileUrls, uploadHeaders } = await this.getBatchUploadUrls(file)
// Step 2: upload the file to the obtained URL
const filePath = fileStorage.getFilePathById(file)
await this.putFileToUrl(filePath, fileUrls[0])
await this.putFileToUrl(filePath, fileUrls[0], file.origin_name, uploadHeaders?.[0])
logger.info(`File uploaded successfully: ${filePath}`, { batchId, fileUrls })
return batchId
@@ -223,7 +249,9 @@ export default class MineruPreprocessProvider extends BasePreprocessProvider {
}
}
private async getBatchUploadUrls(file: FileMetadata): Promise<{ batchId: string; fileUrls: string[] }> {
private async getBatchUploadUrls(
file: FileMetadata
): Promise<{ batchId: string; fileUrls: string[]; uploadHeaders?: Record<string, string>[] }> {
const endpoint = `${this.provider.apiHost}/api/v4/file-urls/batch`
const payload = {
@@ -254,10 +282,11 @@ export default class MineruPreprocessProvider extends BasePreprocessProvider {
if (response.ok) {
const data: ApiResponse<BatchUploadResponse> = await response.json()
if (data.code === 0 && data.data) {
const { batch_id, file_urls } = data.data
const { batch_id, file_urls, headers: uploadHeaders } = data.data
return {
batchId: batch_id,
fileUrls: file_urls
fileUrls: file_urls,
uploadHeaders
}
} else {
throw new Error(`API returned error: ${data.msg || JSON.stringify(data)}`)
@@ -271,18 +300,28 @@ export default class MineruPreprocessProvider extends BasePreprocessProvider {
}
}
private async putFileToUrl(filePath: string, uploadUrl: string): Promise<void> {
private async putFileToUrl(
filePath: string,
uploadUrl: string,
fileName?: string,
headers?: Record<string, string>
): Promise<void> {
try {
const fileBuffer = await fs.promises.readFile(filePath)
const fileSize = fileBuffer.byteLength
const displayName = fileName ?? path.basename(filePath)
logger.info(`Uploading file to MinerU OSS: ${displayName} (${fileSize} bytes)`)
// https://mineru.net/apiManage/docs
const response = await net.fetch(uploadUrl, {
method: 'PUT',
body: fileBuffer
headers,
body: new Uint8Array(fileBuffer)
})
if (!response.ok) {
// 克隆 response 以避免消费 body stream
// Clone the response to avoid consuming the body stream
const responseClone = response.clone()
try {
@@ -353,20 +392,20 @@ export default class MineruPreprocessProvider extends BasePreprocessProvider {
try {
const result = await this.getExtractResults(batchId)
// 查找对应文件的处理结果
// Find the corresponding file result
const fileResult = result.extract_result.find((item) => item.file_name === fileName)
if (!fileResult) {
throw new Error(`File ${fileName} not found in batch results`)
}
// 检查处理状态
// Check the processing state
if (fileResult.state === 'done' && fileResult.full_zip_url) {
logger.info(`Processing completed for file: ${fileName}`)
return fileResult
} else if (fileResult.state === 'failed') {
throw new Error(`Processing failed for file: ${fileName}, error: ${fileResult.err_msg}`)
} else if (fileResult.state === 'running') {
// 发送进度更新
// Send progress updates
if (fileResult.extract_progress) {
const progress = Math.round(
(fileResult.extract_progress.extracted_pages / fileResult.extract_progress.total_pages) * 100
@@ -374,7 +413,7 @@ export default class MineruPreprocessProvider extends BasePreprocessProvider {
await this.sendPreprocessProgress(sourceId, progress)
logger.info(`File ${fileName} processing progress: ${progress}%`)
} else {
// 如果没有具体进度信息,发送一个通用进度
// If no detailed progress information is available, send a generic update
await this.sendPreprocessProgress(sourceId, 50)
logger.info(`File ${fileName} is still processing...`)
}

View File

@@ -53,18 +53,43 @@ export default class OpenMineruPreprocessProvider extends BasePreprocessProvider
}
private async validateFile(filePath: string): Promise<void> {
// Phase 1: check file size (without loading the file into memory)
logger.info(`Validating PDF file: ${filePath}`)
const stats = await fs.promises.stat(filePath)
const fileSizeBytes = stats.size
// File size must be less than 200MB
if (fileSizeBytes >= 200 * 1024 * 1024) {
const fileSizeMB = Math.round(fileSizeBytes / (1024 * 1024))
throw new Error(`PDF file size (${fileSizeMB}MB) exceeds the limit of 200MB`)
}
// Phase 2: check page count (requires reading the file, with error handling)
const pdfBuffer = await fs.promises.readFile(filePath)
const doc = await this.readPdf(pdfBuffer)
try {
const doc = await this.readPdf(pdfBuffer)
// File page count must be less than 600 pages
if (doc.numPages >= 600) {
throw new Error(`PDF page count (${doc.numPages}) exceeds the limit of 600 pages`)
}
// File size must be less than 200MB
if (pdfBuffer.length >= 200 * 1024 * 1024) {
const fileSizeMB = Math.round(pdfBuffer.length / (1024 * 1024))
throw new Error(`PDF file size (${fileSizeMB}MB) exceeds the limit of 200MB`)
// File page count must be less than 600 pages
if (doc.numPages >= 600) {
throw new Error(`PDF page count (${doc.numPages}) exceeds the limit of 600 pages`)
}
logger.info(`PDF validation passed: ${doc.numPages} pages, ${Math.round(fileSizeBytes / (1024 * 1024))}MB`)
} catch (error: any) {
// If the page limit is exceeded, rethrow immediately
if (error.message.includes('exceeds the limit')) {
throw error
}
// If PDF parsing fails, log a detailed warning but allow processing to continue
logger.warn(
`Failed to parse PDF structure (file may be corrupted or use non-standard format). ` +
`Skipping page count validation. Will attempt to process with MinerU API. ` +
`Error details: ${error.message}. ` +
`Suggestion: If processing fails, try repairing the PDF using tools like Adobe Acrobat or online PDF repair services.`
)
// Do not throw; allow processing to continue
}
}
@@ -72,8 +97,8 @@ export default class OpenMineruPreprocessProvider extends BasePreprocessProvider
// Find the main file after extraction
let finalPath = ''
let finalName = file.origin_name.replace('.pdf', '.md')
// Find the corresponding folder by file name
outputPath = path.join(outputPath, `${file.origin_name.replace('.pdf', '')}`)
// Find the corresponding folder by file id
outputPath = path.join(outputPath, file.id)
try {
const files = fs.readdirSync(outputPath)
@@ -125,7 +150,7 @@ export default class OpenMineruPreprocessProvider extends BasePreprocessProvider
formData.append('return_md', 'true')
formData.append('response_format_zip', 'true')
formData.append('files', fileBuffer, {
filename: file.origin_name
filename: file.name
})
while (retries < maxRetries) {
@@ -139,7 +164,7 @@ export default class OpenMineruPreprocessProvider extends BasePreprocessProvider
...(this.provider.apiKey ? { Authorization: `Bearer ${this.provider.apiKey}` } : {}),
...formData.getHeaders()
},
body: formData.getBuffer()
body: new Uint8Array(formData.getBuffer())
})
if (!response.ok) {

View File

@@ -1,11 +1,11 @@
import { type Client, createClient } from '@libsql/client'
import { loggerService } from '@logger'
import { mcpApiService } from '@main/apiServer/services/mcp'
import type { ModelValidationError } from '@main/apiServer/utils'
import { validateModelId } from '@main/apiServer/utils'
import type { AgentType, MCPTool, SlashCommand, Tool } from '@types'
import { objectKeys } from '@types'
import { drizzle, type LibSQLDatabase } from 'drizzle-orm/libsql'
import Database from 'better-sqlite3'
import { type BetterSQLite3Database, drizzle } from 'drizzle-orm/better-sqlite3'
import fs from 'fs'
import path from 'path'
@@ -32,8 +32,8 @@ const logger = loggerService.withContext('BaseService')
* - Connection retry logic with exponential backoff
*/
export abstract class BaseService {
protected static client: Client | null = null
protected static db: LibSQLDatabase<typeof schema> | null = null
protected static client: Database.Database | null = null
protected static db: BetterSQLite3Database<typeof schema> | null = null
protected static isInitialized = false
protected static initializationPromise: Promise<void> | null = null
protected jsonFields: string[] = [
@@ -116,9 +116,7 @@ export abstract class BaseService {
fs.mkdirSync(dbDir, { recursive: true })
}
BaseService.client = createClient({
url: `file:${dbPath}`
})
BaseService.client = new Database(dbPath)
BaseService.db = drizzle(BaseService.client, { schema })
@@ -165,12 +163,12 @@ export abstract class BaseService {
}
}
protected get database(): LibSQLDatabase<typeof schema> {
protected get database(): BetterSQLite3Database<typeof schema> {
this.ensureInitialized()
return BaseService.db!
}
protected get rawClient(): Client {
protected get rawClient(): Database.Database {
this.ensureInitialized()
return BaseService.client!
}

View File

@@ -1,7 +1,7 @@
import { type Client } from '@libsql/client'
import { loggerService } from '@logger'
import { getResourcePath } from '@main/utils'
import { type LibSQLDatabase } from 'drizzle-orm/libsql'
import type Database from 'better-sqlite3'
import { type BetterSQLite3Database } from 'drizzle-orm/better-sqlite3'
import fs from 'fs'
import path from 'path'
@@ -23,11 +23,11 @@ interface MigrationJournal {
}
export class MigrationService {
private db: LibSQLDatabase<typeof schema>
private client: Client
private db: BetterSQLite3Database<typeof schema>
private client: Database.Database
private migrationDir: string
constructor(db: LibSQLDatabase<typeof schema>, client: Client) {
constructor(db: BetterSQLite3Database<typeof schema>, client: Database.Database) {
this.db = db
this.client = client
this.migrationDir = path.join(getResourcePath(), 'database', 'drizzle')
@@ -88,8 +88,8 @@ export class MigrationService {
private async migrationsTableExists(): Promise<boolean> {
try {
const table = await this.client.execute(`SELECT name FROM sqlite_master WHERE type='table' AND name='migrations'`)
return table.rows.length > 0
const rows = this.client.prepare(`SELECT name FROM sqlite_master WHERE type='table' AND name='migrations'`).all()
return rows.length > 0
} catch (error) {
logger.error('Failed to check migrations table status:', { error })
throw error
@@ -136,7 +136,7 @@ export class MigrationService {
// Read and execute SQL
const sqlContent = fs.readFileSync(sqlFilePath, 'utf-8')
await this.client.executeMultiple(sqlContent)
this.client.exec(sqlContent)
// Record migration as applied (store journal idx as version for tracking)
const newMigration: NewMigration = {

View File

@@ -91,17 +91,18 @@ class AgentMessageRepository extends BaseService {
return tx ?? this.database
}
private async findExistingMessageRow(
private findExistingMessageRow(
writer: TxClient,
sessionId: string,
role: string,
messageId: string
): Promise<SessionMessageRow | null> {
const candidateRows: SessionMessageRow[] = await writer
): SessionMessageRow | null {
const candidateRows: SessionMessageRow[] = writer
.select()
.from(sessionMessagesTable)
.where(and(eq(sessionMessagesTable.session_id, sessionId), eq(sessionMessagesTable.role, role)))
.orderBy(asc(sessionMessagesTable.created_at))
.all()
for (const row of candidateRows) {
if (!row?.content) continue
@@ -119,12 +120,9 @@ class AgentMessageRepository extends BaseService {
return null
}
private async upsertMessage(
private upsertMessageSync(
params: PersistUserMessageParams | PersistAssistantMessageParams
): Promise<AgentSessionMessageEntity> {
await AgentMessageRepository.initialize()
this.ensureInitialized()
): AgentSessionMessageEntity {
const { sessionId, agentSessionId = '', payload, metadata, createdAt, tx } = params
if (!payload?.message?.role) {
@@ -140,13 +138,13 @@ class AgentMessageRepository extends BaseService {
const serializedPayload = this.serializeMessage(payload)
const serializedMetadata = this.serializeMetadata(metadata)
const existingRow = await this.findExistingMessageRow(writer, sessionId, payload.message.role, payload.message.id)
const existingRow = this.findExistingMessageRow(writer, sessionId, payload.message.role, payload.message.id)
if (existingRow) {
const metadataToPersist = serializedMetadata ?? existingRow.metadata ?? undefined
const agentSessionToPersist = agentSessionId || existingRow.agent_session_id || ''
await writer
writer
.update(sessionMessagesTable)
.set({
content: serializedPayload,
@@ -155,6 +153,7 @@ class AgentMessageRepository extends BaseService {
updated_at: now
})
.where(eq(sessionMessagesTable.id, existingRow.id))
.run()
return this.deserialize({
...existingRow,
@@ -175,11 +174,19 @@ class AgentMessageRepository extends BaseService {
updated_at: now
}
const [saved] = await writer.insert(sessionMessagesTable).values(insertData).returning()
const [saved] = writer.insert(sessionMessagesTable).values(insertData).returning().all()
return this.deserialize(saved)
}
private async upsertMessage(
params: PersistUserMessageParams | PersistAssistantMessageParams
): Promise<AgentSessionMessageEntity> {
await AgentMessageRepository.initialize()
this.ensureInitialized()
return this.upsertMessageSync(params)
}
async persistUserMessage(params: PersistUserMessageParams): Promise<AgentSessionMessageEntity> {
return this.upsertMessage({ ...params, agentSessionId: params.agentSessionId ?? '' })
}
@@ -194,11 +201,11 @@ class AgentMessageRepository extends BaseService {
const { sessionId, agentSessionId, user, assistant } = params
const result = await this.database.transaction(async (tx) => {
const result = this.database.transaction((tx) => {
const exchangeResult: PersistExchangeResult = {}
if (user?.payload) {
exchangeResult.userMessage = await this.persistUserMessage({
exchangeResult.userMessage = this.upsertMessageSync({
sessionId,
agentSessionId,
payload: user.payload,
@@ -209,7 +216,7 @@ class AgentMessageRepository extends BaseService {
}
if (assistant?.payload) {
exchangeResult.assistantMessage = await this.persistAssistantMessage({
exchangeResult.assistantMessage = this.upsertMessageSync({
sessionId,
agentSessionId,
payload: assistant.payload,

View File

@@ -24,7 +24,7 @@ export default defineConfig({
schema: './src/main/services/agents/database/schema/index.ts',
out: './resources/database/drizzle',
dbCredentials: {
url: `file:${resolvedDbPath}`
url: resolvedDbPath
},
verbose: true,
strict: true

View File

@@ -202,9 +202,9 @@ export class AgentService extends BaseService {
async deleteAgent(id: string): Promise<boolean> {
this.ensureInitialized()
const result = await this.database.delete(agentsTable).where(eq(agentsTable.id, id))
const result = this.database.delete(agentsTable).where(eq(agentsTable.id, id)).run()
return result.rowsAffected > 0
return result.changes > 0
}
async agentExists(id: string): Promise<boolean> {

View File

@@ -148,11 +148,12 @@ export class SessionMessageService extends BaseService {
async deleteSessionMessage(sessionId: string, messageId: number): Promise<boolean> {
this.ensureInitialized()
const result = await this.database
const result = this.database
.delete(sessionMessagesTable)
.where(and(eq(sessionMessagesTable.id, messageId), eq(sessionMessagesTable.session_id, sessionId)))
.run()
return result.rowsAffected > 0
return result.changes > 0
}
async createSessionMessage(

View File

@@ -270,11 +270,12 @@ export class SessionService extends BaseService {
async deleteSession(agentId: string, id: string): Promise<boolean> {
this.ensureInitialized()
const result = await this.database
const result = this.database
.delete(sessionsTable)
.where(and(eq(sessionsTable.id, id), eq(sessionsTable.agent_id, agentId)))
.run()
return result.rowsAffected > 0
return result.changes > 0
}
async sessionExists(agentId: string, id: string): Promise<boolean> {

View File

@@ -21,11 +21,16 @@ describe('stripLocalCommandTags', () => {
'<local-command-stdout>line1</local-command-stdout>\nkeep\n<local-command-stderr>Error</local-command-stderr>'
expect(stripLocalCommandTags(input)).toBe('line1\nkeep\nError')
})
it('if no tags present, returns original string', () => {
const input = 'just some normal text'
expect(stripLocalCommandTags(input)).toBe(input)
})
})
describe('Claude → AiSDK transform', () => {
it('handles tool call streaming lifecycle', () => {
const state = new ClaudeStreamState()
const state = new ClaudeStreamState({ agentSessionId: baseStreamMetadata.session_id })
const parts: ReturnType<typeof transformSDKMessageToStreamParts>[number][] = []
const messages: SDKMessage[] = [
@@ -182,14 +187,119 @@ describe('Claude → AiSDK transform', () => {
(typeof parts)[number],
{ type: 'tool-result' }
>
expect(toolResult.toolCallId).toBe('tool-1')
expect(toolResult.toolCallId).toBe('session-123:tool-1')
expect(toolResult.toolName).toBe('Bash')
expect(toolResult.input).toEqual({ command: 'ls' })
expect(toolResult.output).toBe('ok')
})
it('handles tool calls without streaming events (no content_block_start/stop)', () => {
const state = new ClaudeStreamState({ agentSessionId: '12344' })
const parts: ReturnType<typeof transformSDKMessageToStreamParts>[number][] = []
const messages: SDKMessage[] = [
{
...baseStreamMetadata,
type: 'assistant',
uuid: uuid(20),
message: {
id: 'msg-tool-no-stream',
type: 'message',
role: 'assistant',
model: 'claude-test',
content: [
{
type: 'tool_use',
id: 'tool-read',
name: 'Read',
input: { file_path: '/test.txt' }
},
{
type: 'tool_use',
id: 'tool-bash',
name: 'Bash',
input: { command: 'ls -la' }
}
],
stop_reason: 'tool_use',
stop_sequence: null,
usage: {
input_tokens: 10,
output_tokens: 20
}
}
} as unknown as SDKMessage,
{
...baseStreamMetadata,
type: 'user',
uuid: uuid(21),
message: {
role: 'user',
content: [
{
type: 'tool_result',
tool_use_id: 'tool-read',
content: 'file contents',
is_error: false
}
]
}
} as SDKMessage,
{
...baseStreamMetadata,
type: 'user',
uuid: uuid(22),
message: {
role: 'user',
content: [
{
type: 'tool_result',
tool_use_id: 'tool-bash',
content: 'total 42\n...',
is_error: false
}
]
}
} as SDKMessage
]
for (const message of messages) {
const transformed = transformSDKMessageToStreamParts(message, state)
parts.push(...transformed)
}
const types = parts.map((part) => part.type)
expect(types).toEqual(['tool-call', 'tool-call', 'tool-result', 'tool-result'])
const toolCalls = parts.filter((part) => part.type === 'tool-call') as Extract<
(typeof parts)[number],
{ type: 'tool-call' }
>[]
expect(toolCalls).toHaveLength(2)
expect(toolCalls[0].toolName).toBe('Read')
expect(toolCalls[0].toolCallId).toBe('12344:tool-read')
expect(toolCalls[1].toolName).toBe('Bash')
expect(toolCalls[1].toolCallId).toBe('12344:tool-bash')
const toolResults = parts.filter((part) => part.type === 'tool-result') as Extract<
(typeof parts)[number],
{ type: 'tool-result' }
>[]
expect(toolResults).toHaveLength(2)
// This is the key assertion - toolName should NOT be 'unknown'
expect(toolResults[0].toolName).toBe('Read')
expect(toolResults[0].toolCallId).toBe('12344:tool-read')
expect(toolResults[0].input).toEqual({ file_path: '/test.txt' })
expect(toolResults[0].output).toBe('file contents')
expect(toolResults[1].toolName).toBe('Bash')
expect(toolResults[1].toolCallId).toBe('12344:tool-bash')
expect(toolResults[1].input).toEqual({ command: 'ls -la' })
expect(toolResults[1].output).toBe('total 42\n...')
})
it('handles streaming text completion', () => {
const state = new ClaudeStreamState()
const state = new ClaudeStreamState({ agentSessionId: baseStreamMetadata.session_id })
const parts: ReturnType<typeof transformSDKMessageToStreamParts>[number][] = []
const messages: SDKMessage[] = [
@@ -300,4 +410,87 @@ describe('Claude → AiSDK transform', () => {
expect(finishStep.finishReason).toBe('stop')
expect(finishStep.usage).toEqual({ inputTokens: 2, outputTokens: 4, totalTokens: 6 })
})
it('emits fallback text when Claude sends a snapshot instead of deltas', () => {
const state = new ClaudeStreamState({ agentSessionId: '12344' })
const parts: ReturnType<typeof transformSDKMessageToStreamParts>[number][] = []
const messages: SDKMessage[] = [
{
...baseStreamMetadata,
type: 'stream_event',
uuid: uuid(30),
event: {
type: 'message_start',
message: {
id: 'msg-fallback',
type: 'message',
role: 'assistant',
model: 'claude-test',
content: [],
stop_reason: null,
stop_sequence: null,
usage: {}
}
}
} as unknown as SDKMessage,
{
...baseStreamMetadata,
type: 'stream_event',
uuid: uuid(31),
event: {
type: 'content_block_start',
index: 0,
content_block: {
type: 'text',
text: ''
}
}
} as unknown as SDKMessage,
{
...baseStreamMetadata,
type: 'assistant',
uuid: uuid(32),
message: {
id: 'msg-fallback-content',
type: 'message',
role: 'assistant',
model: 'claude-test',
content: [
{
type: 'text',
text: 'Final answer without streaming deltas.'
}
],
stop_reason: 'end_turn',
stop_sequence: null,
usage: {
input_tokens: 3,
output_tokens: 7
}
}
} as unknown as SDKMessage
]
for (const message of messages) {
const transformed = transformSDKMessageToStreamParts(message, state)
parts.push(...transformed)
}
const types = parts.map((part) => part.type)
expect(types).toEqual(['start-step', 'text-start', 'text-delta', 'text-end', 'finish-step'])
const delta = parts.find((part) => part.type === 'text-delta') as Extract<
(typeof parts)[number],
{ type: 'text-delta' }
>
expect(delta.text).toBe('Final answer without streaming deltas.')
const finish = parts.find((part) => part.type === 'finish-step') as Extract<
(typeof parts)[number],
{ type: 'finish-step' }
>
expect(finish.usage).toEqual({ inputTokens: 3, outputTokens: 7, totalTokens: 10 })
expect(finish.finishReason).toBe('stop')
})
})

View File

@@ -10,8 +10,21 @@
* Every Claude turn gets its own instance. `resetStep` should be invoked once the finish event has
* been emitted to avoid leaking state into the next turn.
*/
import { loggerService } from '@logger'
import type { FinishReason, LanguageModelUsage, ProviderMetadata } from 'ai'
/**
* Builds a namespaced tool call ID by combining session ID with raw tool call ID.
* This ensures tool calls from different sessions don't conflict even if they have
* the same raw ID from the SDK.
*
* @param sessionId - The agent session ID
* @param rawToolCallId - The raw tool call ID from the SDK (e.g., "WebFetch_0")
*/
export function buildNamespacedToolCallId(sessionId: string, rawToolCallId: string): string {
return `${sessionId}:${rawToolCallId}`
}
/**
* Shared fields for every block that Claude can stream (text, reasoning, tool).
*/
@@ -34,6 +47,7 @@ type ReasoningBlockState = BaseBlockState & {
type ToolBlockState = BaseBlockState & {
kind: 'tool'
toolCallId: string
rawToolCallId: string
toolName: string
inputBuffer: string
providerMetadata?: ProviderMetadata
@@ -48,12 +62,17 @@ type PendingUsageState = {
}
type PendingToolCall = {
rawToolCallId: string
toolCallId: string
toolName: string
input: unknown
providerMetadata?: ProviderMetadata
}
type ClaudeStreamStateOptions = {
agentSessionId: string
}
/**
* Tracks the lifecycle of Claude streaming blocks (text, thinking, tool calls)
* across individual websocket events. The transformer relies on this class to
@@ -61,12 +80,20 @@ type PendingToolCall = {
* usage/finish metadata once Anthropic closes a message.
*/
export class ClaudeStreamState {
private logger
private readonly agentSessionId: string
private blocksByIndex = new Map<number, BlockState>()
private toolIndexById = new Map<string, number>()
private toolIndexByNamespacedId = new Map<string, number>()
private pendingUsage: PendingUsageState = {}
private pendingToolCalls = new Map<string, PendingToolCall>()
private stepActive = false
constructor(options: ClaudeStreamStateOptions) {
this.logger = loggerService.withContext('ClaudeStreamState')
this.agentSessionId = options.agentSessionId
this.logger.silly('ClaudeStreamState', options)
}
/** Marks the beginning of a new AiSDK step. */
beginStep(): void {
this.stepActive = true
@@ -104,19 +131,21 @@ export class ClaudeStreamState {
/** Caches tool metadata so subsequent input deltas and results can find it. */
openToolBlock(
index: number,
params: { toolCallId: string; toolName: string; providerMetadata?: ProviderMetadata }
params: { rawToolCallId: string; toolName: string; providerMetadata?: ProviderMetadata }
): ToolBlockState {
const toolCallId = buildNamespacedToolCallId(this.agentSessionId, params.rawToolCallId)
const block: ToolBlockState = {
kind: 'tool',
id: params.toolCallId,
id: toolCallId,
index,
toolCallId: params.toolCallId,
toolCallId,
rawToolCallId: params.rawToolCallId,
toolName: params.toolName,
inputBuffer: '',
providerMetadata: params.providerMetadata
}
this.blocksByIndex.set(index, block)
this.toolIndexById.set(params.toolCallId, index)
this.toolIndexByNamespacedId.set(toolCallId, index)
return block
}
@@ -124,14 +153,32 @@ export class ClaudeStreamState {
return this.blocksByIndex.get(index)
}
getFirstOpenTextBlock(): TextBlockState | undefined {
const candidates: TextBlockState[] = []
for (const block of this.blocksByIndex.values()) {
if (block.kind === 'text') {
candidates.push(block)
}
}
if (candidates.length === 0) {
return undefined
}
candidates.sort((a, b) => a.index - b.index)
return candidates[0]
}
getToolBlockById(toolCallId: string): ToolBlockState | undefined {
const index = this.toolIndexById.get(toolCallId)
const index = this.toolIndexByNamespacedId.get(toolCallId)
if (index === undefined) return undefined
const block = this.blocksByIndex.get(index)
if (!block || block.kind !== 'tool') return undefined
return block
}
getToolBlockByRawId(rawToolCallId: string): ToolBlockState | undefined {
return this.getToolBlockById(buildNamespacedToolCallId(this.agentSessionId, rawToolCallId))
}
/** Appends streamed text to a text block, returning the updated state when present. */
appendTextDelta(index: number, text: string): TextBlockState | undefined {
const block = this.blocksByIndex.get(index)
@@ -158,10 +205,12 @@ export class ClaudeStreamState {
/** Records a tool call to be consumed once its result arrives from the user. */
registerToolCall(
toolCallId: string,
rawToolCallId: string,
payload: { toolName: string; input: unknown; providerMetadata?: ProviderMetadata }
): void {
this.pendingToolCalls.set(toolCallId, {
const toolCallId = buildNamespacedToolCallId(this.agentSessionId, rawToolCallId)
this.pendingToolCalls.set(rawToolCallId, {
rawToolCallId,
toolCallId,
toolName: payload.toolName,
input: payload.input,
@@ -170,10 +219,10 @@ export class ClaudeStreamState {
}
/** Retrieves and clears the buffered tool call metadata for the given id. */
consumePendingToolCall(toolCallId: string): PendingToolCall | undefined {
const entry = this.pendingToolCalls.get(toolCallId)
consumePendingToolCall(rawToolCallId: string): PendingToolCall | undefined {
const entry = this.pendingToolCalls.get(rawToolCallId)
if (entry) {
this.pendingToolCalls.delete(toolCallId)
this.pendingToolCalls.delete(rawToolCallId)
}
return entry
}
@@ -182,13 +231,13 @@ export class ClaudeStreamState {
* Persists the final input payload for a tool block once the provider signals
* completion so that downstream tool results can reference the original call.
*/
completeToolBlock(toolCallId: string, input: unknown, providerMetadata?: ProviderMetadata): void {
completeToolBlock(toolCallId: string, toolName: string, input: unknown, providerMetadata?: ProviderMetadata): void {
const block = this.getToolBlockByRawId(toolCallId)
this.registerToolCall(toolCallId, {
toolName: this.getToolBlockById(toolCallId)?.toolName ?? 'unknown',
toolName,
input,
providerMetadata
})
const block = this.getToolBlockById(toolCallId)
if (block) {
block.resolvedInput = input
}
@@ -200,7 +249,7 @@ export class ClaudeStreamState {
if (!block) return undefined
this.blocksByIndex.delete(index)
if (block.kind === 'tool') {
this.toolIndexById.delete(block.toolCallId)
this.toolIndexByNamespacedId.delete(block.toolCallId)
}
return block
}
@@ -227,7 +276,7 @@ export class ClaudeStreamState {
/** Drops cached block metadata for the currently active message. */
resetBlocks(): void {
this.blocksByIndex.clear()
this.toolIndexById.clear()
this.toolIndexByNamespacedId.clear()
}
/** Resets the entire step lifecycle after emitting a terminal frame. */
@@ -236,6 +285,10 @@ export class ClaudeStreamState {
this.resetPendingUsage()
this.stepActive = false
}
getNamespacedToolCallId(rawToolCallId: string): string {
return buildNamespacedToolCallId(this.agentSessionId, rawToolCallId)
}
}
export type { PendingToolCall }
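
Editor's note: a minimal sketch of the namespaced tool-call lifecycle this file now implements. The session and tool IDs are illustrative, the import path assumes this module's location, and only the signatures shown in the hunks above are relied on.

import { buildNamespacedToolCallId, ClaudeStreamState } from './claude-stream-state'

const state = new ClaudeStreamState({ agentSessionId: 'session_123' })

// Blocks are opened with the raw SDK id but indexed under the namespaced id.
const block = state.openToolBlock(0, { rawToolCallId: 'WebFetch_0', toolName: 'WebFetch' })
// block.toolCallId === 'session_123:WebFetch_0'; lookups work by either form:
state.getToolBlockByRawId('WebFetch_0')
state.getToolBlockById(buildNamespacedToolCallId('session_123', 'WebFetch_0'))

// Completing the call registers a pending entry keyed by the raw id while
// carrying the namespaced id for the eventual tool-result part.
state.completeToolBlock('WebFetch_0', 'WebFetch', { url: 'https://example.com' })
const pending = state.consumePendingToolCall('WebFetch_0')
// pending?.toolCallId === 'session_123:WebFetch_0'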

View File

@@ -13,6 +13,7 @@ import { app } from 'electron'
import type { GetAgentSessionResponse } from '../..'
import type { AgentServiceInterface, AgentStream, AgentStreamEvent } from '../../interfaces/AgentStreamInterface'
import { sessionService } from '../SessionService'
import { buildNamespacedToolCallId } from './claude-stream-state'
import { promptForToolApproval } from './tool-permissions'
import { ClaudeStreamState, transformSDKMessageToStreamParts } from './transform'
@@ -150,7 +151,10 @@ class ClaudeCodeService implements AgentServiceInterface {
return { behavior: 'allow', updatedInput: input }
}
return promptForToolApproval(toolName, input, options)
return promptForToolApproval(toolName, input, {
...options,
toolCallId: buildNamespacedToolCallId(session.id, options.toolUseID)
})
}
// Build SDK options from parameters
@@ -346,7 +350,7 @@ class ClaudeCodeService implements AgentServiceInterface {
const jsonOutput: SDKMessage[] = []
let hasCompleted = false
const startTime = Date.now()
const streamState = new ClaudeStreamState()
const streamState = new ClaudeStreamState({ agentSessionId: sessionId })
try {
for await (const message of query({ prompt: promptStream, options })) {
@@ -410,23 +414,6 @@ class ClaudeCodeService implements AgentServiceInterface {
}
}
if (message.type === 'assistant' || message.type === 'user') {
logger.silly('claude response', {
message,
content: JSON.stringify(message.message.content)
})
} else if (message.type === 'stream_event') {
// logger.silly('Claude stream event', {
// message,
// event: JSON.stringify(message.event)
// })
} else {
logger.silly('Claude response', {
message,
event: JSON.stringify(message)
})
}
const chunks = transformSDKMessageToStreamParts(message, streamState)
for (const chunk of chunks) {
stream.emit('data', {
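
Editor's note: the hunk above is truncated by the diff view; the loop it belongs to reduces to this sketch, where `query`, `promptStream`, `options`, and `stream` are the surrounding service's values.

const streamState = new ClaudeStreamState({ agentSessionId: sessionId })
for await (const message of query({ prompt: promptStream, options })) {
  // Each SDK message fans out into zero or more AiSDK stream parts.
  for (const chunk of transformSDKMessageToStreamParts(message, streamState)) {
    stream.emit('data', chunk)
  }
}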

View File

@@ -37,6 +37,7 @@ type RendererPermissionRequestPayload = {
requestId: string
toolName: string
toolId: string
toolCallId: string
description?: string
requiresPermissions: boolean
input: Record<string, unknown>
@@ -206,10 +207,19 @@ const ensureIpcHandlersRegistered = () => {
})
}
type PromptForToolApprovalOptions = {
signal: AbortSignal
suggestions?: PermissionUpdate[]
// NOTICE: This ID is namespaced with session ID, not the raw SDK tool call ID.
// Format: `${sessionId}:${rawToolCallId}`, e.g., `session_123:WebFetch_0`
toolCallId: string
}
export async function promptForToolApproval(
toolName: string,
input: Record<string, unknown>,
options?: { signal: AbortSignal; suggestions?: PermissionUpdate[] }
options: PromptForToolApprovalOptions
): Promise<PermissionResult> {
if (shouldAutoApproveTools) {
logger.debug('promptForToolApproval auto-approving tool for test', {
@@ -245,6 +255,7 @@ export async function promptForToolApproval(
logger.info('Requesting user approval for tool usage', {
requestId,
toolName,
toolCallId: options.toolCallId,
description: toolMetadata?.description
})
@@ -252,6 +263,7 @@ export async function promptForToolApproval(
requestId,
toolName,
toolId: toolMetadata?.id ?? toolName,
toolCallId: options.toolCallId,
description: toolMetadata?.description,
requiresPermissions: toolMetadata?.requirePermissions ?? false,
input: sanitizedInput,
@@ -266,6 +278,7 @@ export async function promptForToolApproval(
logger.debug('Registering tool permission request', {
requestId,
toolName,
toolCallId: options.toolCallId,
requiresPermissions: requestPayload.requiresPermissions,
timeoutMs: TOOL_APPROVAL_TIMEOUT_MS,
suggestionCount: sanitizedSuggestions.length
@@ -273,7 +286,11 @@ export async function promptForToolApproval(
return new Promise<PermissionResult>((resolve) => {
const timeout = setTimeout(() => {
logger.info('User tool permission request timed out', { requestId, toolName })
logger.info('User tool permission request timed out', {
requestId,
toolName,
toolCallId: options.toolCallId
})
finalizeRequest(requestId, { behavior: 'deny', message: 'Timed out waiting for approval' }, 'timeout')
}, TOOL_APPROVAL_TIMEOUT_MS)
@@ -287,7 +304,11 @@ export async function promptForToolApproval(
if (options?.signal) {
const abortListener = () => {
logger.info('Tool permission request aborted before user responded', { requestId, toolName })
logger.info('Tool permission request aborted before user responded', {
requestId,
toolName,
toolCallId: options.toolCallId
})
finalizeRequest(requestId, defaultDenyUpdate, 'aborted')
}
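
Editor's note: because the request payload now carries the namespaced toolCallId, a renderer can correlate an approval prompt with the streamed tool-call part it refers to. A hypothetical sketch; `AgentStreamPart` is the union emitted by the transformer, and the function name is illustrative.

function findRequestedToolCall(payload: RendererPermissionRequestPayload, parts: AgentStreamPart[]) {
  // Both sides use the same `${sessionId}:${rawToolCallId}` format.
  return parts.find((part) => part.type === 'tool-call' && part.toolCallId === payload.toolCallId)
}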

View File

@@ -110,7 +110,7 @@ const sdkMessageToProviderMetadata = (message: SDKMessage): ProviderMetadata =>
* blocks across calls so that incremental deltas can be correlated correctly.
*/
export function transformSDKMessageToStreamParts(sdkMessage: SDKMessage, state: ClaudeStreamState): AgentStreamPart[] {
logger.silly('Transforming SDKMessage', { message: sdkMessage })
logger.silly('Transforming SDKMessage', { message: JSON.stringify(sdkMessage) })
switch (sdkMessage.type) {
case 'assistant':
return handleAssistantMessage(sdkMessage, state)
@@ -186,14 +186,13 @@ function handleAssistantMessage(
for (const block of content) {
switch (block.type) {
case 'text':
if (!isStreamingActive) {
const sanitizedText = stripLocalCommandTags(block.text)
if (sanitizedText) {
textBlocks.push(sanitizedText)
}
case 'text': {
const sanitizedText = stripLocalCommandTags(block.text)
if (sanitizedText) {
textBlocks.push(sanitizedText)
}
break
}
case 'tool_use':
handleAssistantToolUse(block as ToolUseContent, providerMetadata, state, chunks)
break
@@ -203,7 +202,16 @@ function handleAssistantMessage(
}
}
if (!isStreamingActive && textBlocks.length > 0) {
if (textBlocks.length === 0) {
return chunks
}
const combinedText = textBlocks.join('')
if (!combinedText) {
return chunks
}
if (!isStreamingActive) {
const id = message.uuid?.toString() || generateMessageId()
state.beginStep()
chunks.push({
@@ -219,7 +227,7 @@ function handleAssistantMessage(
chunks.push({
type: 'text-delta',
id,
text: textBlocks.join(''),
text: combinedText,
providerMetadata
})
chunks.push({
@@ -230,7 +238,27 @@ function handleAssistantMessage(
return finalizeNonStreamingStep(message, state, chunks)
}
return chunks
const existingTextBlock = state.getFirstOpenTextBlock()
const fallbackId = existingTextBlock?.id || message.uuid?.toString() || generateMessageId()
if (!existingTextBlock) {
chunks.push({
type: 'text-start',
id: fallbackId,
providerMetadata
})
}
chunks.push({
type: 'text-delta',
id: fallbackId,
text: combinedText,
providerMetadata
})
chunks.push({
type: 'text-end',
id: fallbackId,
providerMetadata
})
return finalizeNonStreamingStep(message, state, chunks)
}
/**
@@ -243,15 +271,16 @@ function handleAssistantToolUse(
state: ClaudeStreamState,
chunks: AgentStreamPart[]
): void {
const toolCallId = state.getNamespacedToolCallId(block.id)
chunks.push({
type: 'tool-call',
toolCallId: block.id,
toolCallId,
toolName: block.name,
input: block.input,
providerExecuted: true,
providerMetadata
})
state.completeToolBlock(block.id, block.input, providerMetadata)
state.completeToolBlock(block.id, block.name, block.input, providerMetadata)
}
/**
@@ -331,10 +360,11 @@ function handleUserMessage(
if (block.type === 'tool_result') {
const toolResult = block as ToolResultContent
const pendingCall = state.consumePendingToolCall(toolResult.tool_use_id)
const toolCallId = pendingCall?.toolCallId ?? state.getNamespacedToolCallId(toolResult.tool_use_id)
if (toolResult.is_error) {
chunks.push({
type: 'tool-error',
toolCallId: toolResult.tool_use_id,
toolCallId,
toolName: pendingCall?.toolName ?? 'unknown',
input: pendingCall?.input,
error: toolResult.content,
@@ -343,7 +373,7 @@ function handleUserMessage(
} else {
chunks.push({
type: 'tool-result',
toolCallId: toolResult.tool_use_id,
toolCallId,
toolName: pendingCall?.toolName ?? 'unknown',
input: pendingCall?.input,
output: toolResult.content,
@@ -457,6 +487,9 @@ function handleStreamEvent(
}
case 'message_stop': {
if (!state.hasActiveStep()) {
break
}
const pending = state.getPendingUsage()
chunks.push({
type: 'finish-step',
@@ -514,7 +547,7 @@ function handleContentBlockStart(
}
case 'tool_use': {
const block = state.openToolBlock(index, {
toolCallId: contentBlock.id,
rawToolCallId: contentBlock.id,
toolName: contentBlock.name,
providerMetadata
})

View File

@@ -111,6 +111,7 @@ const api = {
setFullScreen: (value: boolean): Promise<void> => ipcRenderer.invoke(IpcChannel.App_SetFullScreen, value),
isFullScreen: (): Promise<boolean> => ipcRenderer.invoke(IpcChannel.App_IsFullScreen),
getSystemFonts: (): Promise<string[]> => ipcRenderer.invoke(IpcChannel.App_GetSystemFonts),
mockCrashRenderProcess: () => ipcRenderer.invoke(IpcChannel.APP_CrashRenderProcess),
mac: {
isProcessTrusted: (): Promise<boolean> => ipcRenderer.invoke(IpcChannel.App_MacIsProcessTrusted),
requestProcessTrust: (): Promise<boolean> => ipcRenderer.invoke(IpcChannel.App_MacRequestProcessTrust)

View File

@@ -1,5 +1,6 @@
import { loggerService } from '@logger'
import {
getModelSupportedVerbosity,
isFunctionCallingModel,
isNotSupportTemperatureAndTopP,
isOpenAIModel,
@@ -242,12 +243,18 @@ export abstract class BaseApiClient<
return serviceTierSetting
}
protected getVerbosity(): OpenAIVerbosity {
protected getVerbosity(model?: Model): OpenAIVerbosity {
try {
const state = window.store?.getState()
const verbosity = state?.settings?.openAI?.verbosity
if (verbosity && ['low', 'medium', 'high'].includes(verbosity)) {
// If model is provided, check if the verbosity is supported by the model
if (model) {
const supportedVerbosity = getModelSupportedVerbosity(model)
// Use user's verbosity if supported, otherwise use the first supported option
return supportedVerbosity.includes(verbosity) ? verbosity : supportedVerbosity[0]
}
return verbosity
}
} catch (error) {

View File

@@ -35,6 +35,7 @@ import {
isSupportedThinkingTokenModel,
isSupportedThinkingTokenQwenModel,
isSupportedThinkingTokenZhipuModel,
isSupportVerbosityModel,
isVisionModel,
MODEL_SUPPORTED_REASONING_EFFORT,
ZHIPU_RESULT_TOKENS
@@ -733,6 +734,13 @@ export class OpenAIAPIClient extends OpenAIBaseClient<
...modalities,
// groq has a different service tier configuration that does not match the OpenAI interface types
service_tier: this.getServiceTier(model) as OpenAIServiceTier,
...(isSupportVerbosityModel(model)
? {
text: {
verbosity: this.getVerbosity(model)
}
}
: {}),
...this.getProviderSpecificParameters(assistant, model),
...reasoningEffort,
...getOpenAIWebSearchParams(model, enableWebSearch),

View File

@@ -48,9 +48,8 @@ export abstract class OpenAIBaseClient<
}
// Applies to OpenAI only
override getBaseURL(): string {
const host = this.provider.apiHost
return formatApiHost(host)
override getBaseURL(isSupportedAPIVerion: boolean = true): string {
return formatApiHost(this.provider.apiHost, isSupportedAPIVerion)
}
override async generateImage({
@@ -144,6 +143,11 @@ export abstract class OpenAIBaseClient<
}
let apiKeyForSdkInstance = this.apiKey
let baseURLForSdkInstance = this.getBaseURL()
let headersForSdkInstance = {
...this.defaultHeaders(),
...this.provider.extra_headers
}
if (this.provider.id === 'copilot') {
const defaultHeaders = store.getState().copilot.defaultHeaders
@@ -151,6 +155,11 @@ export abstract class OpenAIBaseClient<
// this.provider.apiKey must not be modified
// this.provider.apiKey = token
apiKeyForSdkInstance = token
baseURLForSdkInstance = this.getBaseURL(false)
headersForSdkInstance = {
...headersForSdkInstance,
...COPILOT_DEFAULT_HEADERS
}
}
if (this.provider.id === 'azure-openai' || this.provider.type === 'azure-openai') {
@@ -164,12 +173,8 @@ export abstract class OpenAIBaseClient<
this.sdkInstance = new OpenAI({
dangerouslyAllowBrowser: true,
apiKey: apiKeyForSdkInstance,
baseURL: this.getBaseURL(),
defaultHeaders: {
...this.defaultHeaders(),
...this.provider.extra_headers,
...(this.provider.id === 'copilot' ? COPILOT_DEFAULT_HEADERS : {})
}
baseURL: baseURLForSdkInstance,
defaultHeaders: headersForSdkInstance
}) as TSdkInstance
}
return this.sdkInstance

View File

@@ -297,7 +297,31 @@ export class OpenAIResponseAPIClient extends OpenAIBaseClient<
private convertResponseToMessageContent(response: OpenAI.Responses.Response): ResponseInput {
const content: OpenAI.Responses.ResponseInput = []
content.push(...response.output)
response.output.forEach((item) => {
if (item.type !== 'apply_patch_call' && item.type !== 'apply_patch_call_output') {
content.push(item)
} else if (item.type === 'apply_patch_call') {
if (item.operation !== undefined) {
const applyPatchToolCall: OpenAI.Responses.ResponseInputItem.ApplyPatchCall = {
...item,
operation: item.operation
}
content.push(applyPatchToolCall)
} else {
logger.warn('Undefined tool call operation for ApplyPatchToolCall.')
}
} else if (item.type === 'apply_patch_call_output') {
if (item.output !== undefined) {
const applyPatchToolCallOutput: OpenAI.Responses.ResponseInputItem.ApplyPatchCallOutput = {
...item,
output: item.output === null ? undefined : item.output
}
content.push(applyPatchToolCallOutput)
} else {
logger.warn('Undefined tool call output for ApplyPatchToolCallOutput.')
}
}
})
return content
}
@@ -496,7 +520,7 @@ export class OpenAIResponseAPIClient extends OpenAIBaseClient<
...(isSupportVerbosityModel(model)
? {
text: {
verbosity: this.getVerbosity()
verbosity: this.getVerbosity(model)
}
}
: {}),

View File

@@ -0,0 +1,13 @@
import { isClaude45ReasoningModel } from '@renderer/config/models'
import type { Assistant, Model } from '@renderer/types'
import { isToolUseModeFunction } from '@renderer/utils/assistant'
const INTERLEAVED_THINKING_HEADER = 'interleaved-thinking-2025-05-14'
export function addAnthropicHeaders(assistant: Assistant, model: Model): string[] {
const anthropicHeaders: string[] = []
if (isClaude45ReasoningModel(model) && isToolUseModeFunction(assistant)) {
anthropicHeaders.push(INTERLEAVED_THINKING_HEADER)
}
return anthropicHeaders
}
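
Editor's note: a worked example of the helper above, mirroring how parameterBuilder consumes it below; `assistant` and `model` are illustrative values.

// For a Claude 4.5 reasoning model with tool use in function mode:
const headers = { 'anthropic-beta': addAnthropicHeaders(assistant, model).join(',') }
// => { 'anthropic-beta': 'interleaved-thinking-2025-05-14' }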

View File

@@ -7,10 +7,12 @@ import { anthropic } from '@ai-sdk/anthropic'
import { google } from '@ai-sdk/google'
import { vertexAnthropic } from '@ai-sdk/google-vertex/anthropic/edge'
import { vertex } from '@ai-sdk/google-vertex/edge'
import { combineHeaders } from '@ai-sdk/provider-utils'
import type { WebSearchPluginConfig } from '@cherrystudio/ai-core/built-in/plugins'
import { isBaseProvider } from '@cherrystudio/ai-core/core/providers/schemas'
import { loggerService } from '@logger'
import {
isAnthropicModel,
isGenerateImageModel,
isOpenRouterBuiltInWebSearchModel,
isReasoningModel,
@@ -19,6 +21,8 @@ import {
isSupportedThinkingTokenModel,
isWebSearchModel
} from '@renderer/config/models'
import { isAwsBedrockProvider } from '@renderer/config/providers'
import { isVertexProvider } from '@renderer/hooks/useVertexAI'
import { getAssistantSettings, getDefaultModel } from '@renderer/services/AssistantService'
import store from '@renderer/store'
import type { CherryWebSearchConfig } from '@renderer/store/websearch'
@@ -34,6 +38,7 @@ import { setupToolsConfig } from '../utils/mcp'
import { buildProviderOptions } from '../utils/options'
import { getAnthropicThinkingBudget } from '../utils/reasoning'
import { buildProviderBuiltinWebSearchConfig } from '../utils/websearch'
import { addAnthropicHeaders } from './header'
import { supportsTopP } from './modelCapabilities'
import { getTemperature, getTopP } from './modelParameters'
@@ -172,13 +177,21 @@ export async function buildStreamTextParams(
}
}
let headers: Record<string, string | undefined> = options.requestOptions?.headers ?? {}
// https://docs.claude.com/en/docs/build-with-claude/extended-thinking#interleaved-thinking
if (!isVertexProvider(provider) && !isAwsBedrockProvider(provider) && isAnthropicModel(model)) {
const newBetaHeaders = { 'anthropic-beta': addAnthropicHeaders(assistant, model).join(',') }
headers = combineHeaders(headers, newBetaHeaders)
}
// Build the base parameters
const params: StreamTextParams = {
messages: sdkMessages,
maxOutputTokens: maxTokens,
temperature: getTemperature(assistant, model),
abortSignal: options.requestOptions?.signal,
headers: options.requestOptions?.headers,
headers,
providerOptions,
stopWhen: stepCountIs(20),
maxRetries: 0

View File

@@ -1,5 +1,12 @@
import { baseProviderIdSchema, customProviderIdSchema } from '@cherrystudio/ai-core/provider'
import { isOpenAIModel, isQwenMTModel, isSupportFlexServiceTierModel } from '@renderer/config/models'
import { loggerService } from '@logger'
import {
getModelSupportedVerbosity,
isOpenAIModel,
isQwenMTModel,
isSupportFlexServiceTierModel,
isSupportVerbosityModel
} from '@renderer/config/models'
import { isSupportServiceTierProvider } from '@renderer/config/providers'
import { mapLanguageToQwenMTModel } from '@renderer/config/translate'
import type { Assistant, Model, Provider } from '@renderer/types'
@@ -26,6 +33,8 @@ import {
} from './reasoning'
import { getWebSearchParams } from './websearch'
const logger = loggerService.withContext('aiCore.utils.options')
// copy from BaseApiClient.ts
const getServiceTier = (model: Model, provider: Provider) => {
const serviceTierSetting = provider.serviceTier
@@ -70,6 +79,7 @@ export function buildProviderOptions(
enableGenerateImage: boolean
}
): Record<string, any> {
logger.debug('buildProviderOptions', { assistant, model, actualProvider, capabilities })
const rawProviderId = getAiSdkProviderId(actualProvider)
// Build provider-specific options
let providerSpecificOptions: Record<string, any> = {}
@@ -89,9 +99,6 @@ export function buildProviderOptions(
serviceTier: serviceTierSetting
}
break
case 'huggingface':
providerSpecificOptions = buildOpenAIProviderOptions(assistant, model, capabilities)
break
case 'anthropic':
providerSpecificOptions = buildAnthropicProviderOptions(assistant, model, capabilities)
break
@@ -134,6 +141,9 @@ export function buildProviderOptions(
case 'bedrock':
providerSpecificOptions = buildBedrockProviderOptions(assistant, model, capabilities)
break
case 'huggingface':
providerSpecificOptions = buildOpenAIProviderOptions(assistant, model, capabilities)
break
default:
// For other providers, fall back to the generic build logic
providerSpecificOptions = {
@@ -152,13 +162,17 @@ export function buildProviderOptions(
...getCustomParameters(assistant)
}
const rawProviderKey =
let rawProviderKey =
{
'google-vertex': 'google',
'google-vertex-anthropic': 'anthropic',
'ai-gateway': 'gateway'
}[rawProviderId] || rawProviderId
if (rawProviderKey === 'cherryin') {
rawProviderKey = { gemini: 'google' }[actualProvider.type] || actualProvider.type
}
// Return the shape the AI Core SDK expects: { 'providerId': providerOptions }
return {
[rawProviderKey]: providerSpecificOptions
@@ -187,6 +201,23 @@ function buildOpenAIProviderOptions(
...reasoningParams
}
}
if (isSupportVerbosityModel(model)) {
const state = window.store?.getState()
const userVerbosity = state?.settings?.openAI?.verbosity
if (userVerbosity && ['low', 'medium', 'high'].includes(userVerbosity)) {
const supportedVerbosity = getModelSupportedVerbosity(model)
// Use user's verbosity if supported, otherwise use the first supported option
const verbosity = supportedVerbosity.includes(userVerbosity) ? userVerbosity : supportedVerbosity[0]
providerOptions = {
...providerOptions,
textVerbosity: verbosity
}
}
}
return providerOptions
}
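
Editor's note: a self-contained sketch of the provider-key resolution above, assuming plain string inputs; the real code derives `rawProviderId` from getAiSdkProviderId and reads `actualProvider.type` for the cherryin branch.

function resolveProviderKey(rawProviderId: string, providerType: string): string {
  const remapped: Record<string, string> = {
    'google-vertex': 'google',
    'google-vertex-anthropic': 'anthropic',
    'ai-gateway': 'gateway'
  }
  let key = remapped[rawProviderId] || rawProviderId
  if (key === 'cherryin') {
    // cherryin resolves by the actual provider type, e.g. gemini -> google.
    key = ({ gemini: 'google' } as Record<string, string>)[providerType] || providerType
  }
  return key
}
// resolveProviderKey('google-vertex', 'vertex') === 'google'
// resolveProviderKey('cherryin', 'gemini') === 'google'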

View File

@@ -1,3 +1,7 @@
import type { BedrockProviderOptions } from '@ai-sdk/amazon-bedrock'
import type { AnthropicProviderOptions } from '@ai-sdk/anthropic'
import type { GoogleGenerativeAIProviderOptions } from '@ai-sdk/google'
import type { XaiProviderOptions } from '@ai-sdk/xai'
import { loggerService } from '@logger'
import { DEFAULT_MAX_TOKENS } from '@renderer/config/constant'
import {
@@ -7,6 +11,7 @@ import {
isDeepSeekHybridInferenceModel,
isDoubaoSeedAfter251015,
isDoubaoThinkingAutoModel,
isGPT51SeriesModel,
isGrok4FastReasoningModel,
isGrokReasoningModel,
isOpenAIDeepResearchModel,
@@ -56,13 +61,20 @@ export function getReasoningEffort(assistant: Assistant, model: Model): Reasonin
}
const reasoningEffort = assistant?.settings?.reasoning_effort
if (!reasoningEffort) {
// Handle undefined and 'none' reasoningEffort.
// TODO: They should be separated.
if (!reasoningEffort || reasoningEffort === 'none') {
// openrouter: use reasoning
if (model.provider === SystemProviderIds.openrouter) {
// Don't disable reasoning for Gemini models that support thinking tokens
if (isSupportedThinkingTokenGeminiModel(model) && !GEMINI_FLASH_MODEL_REGEX.test(model.id)) {
return {}
}
// 'none' is not an accepted effort value on OpenRouter yet; assuming it
// will be supported soon, we pass it through for GPT-5.1 in the meantime.
if (isGPT51SeriesModel(model) && reasoningEffort === 'none') {
return { reasoning: { effort: 'none' } }
}
// Don't disable reasoning for models that require it
if (
isGrokReasoningModel(model) ||
@@ -117,6 +129,13 @@ export function getReasoningEffort(assistant: Assistant, model: Model): Reasonin
return { thinking: { type: 'disabled' } }
}
// Specially for GPT-5.1. Suppose this is a OpenAI Compatible provider
if (isGPT51SeriesModel(model) && reasoningEffort === 'none') {
return {
reasoningEffort: 'none'
}
}
return {}
}
@@ -371,7 +390,7 @@ export function getOpenAIReasoningParams(assistant: Assistant, model: Model): Re
export function getAnthropicThinkingBudget(assistant: Assistant, model: Model): number {
const { maxTokens, reasoning_effort: reasoningEffort } = getAssistantSettings(assistant)
if (reasoningEffort === undefined) {
if (reasoningEffort === undefined || reasoningEffort === 'none') {
return 0
}
const effortRatio = EFFORT_RATIO[reasoningEffort]
@@ -393,14 +412,17 @@ export function getAnthropicThinkingBudget(assistant: Assistant, model: Model):
* Get the Anthropic reasoning parameters
* Logic extracted from AnthropicAPIClient
*/
export function getAnthropicReasoningParams(assistant: Assistant, model: Model): Record<string, any> {
export function getAnthropicReasoningParams(
assistant: Assistant,
model: Model
): Pick<AnthropicProviderOptions, 'thinking'> {
if (!isReasoningModel(model)) {
return {}
}
const reasoningEffort = assistant?.settings?.reasoning_effort
if (reasoningEffort === undefined) {
if (reasoningEffort === undefined || reasoningEffort === 'none') {
return {
thinking: {
type: 'disabled'
@@ -429,7 +451,10 @@ export function getAnthropicReasoningParams(assistant: Assistant, model: Model):
* Note: parameters such as thinkingBudget should be passed in camelCase to the Gemini/GCP endpoints,
* while Google's official OpenAI-compatible endpoint uses snake_case (thinking_budget)
*/
export function getGeminiReasoningParams(assistant: Assistant, model: Model): Record<string, any> {
export function getGeminiReasoningParams(
assistant: Assistant,
model: Model
): Pick<GoogleGenerativeAIProviderOptions, 'thinkingConfig'> {
if (!isReasoningModel(model)) {
return {}
}
@@ -438,7 +463,7 @@ export function getGeminiReasoningParams(assistant: Assistant, model: Model): Re
// Gemini reasoning parameters
if (isSupportedThinkingTokenGeminiModel(model)) {
if (reasoningEffort === undefined) {
if (reasoningEffort === undefined || reasoningEffort === 'none') {
return {
thinkingConfig: {
includeThoughts: false,
@@ -478,27 +503,35 @@ export function getGeminiReasoningParams(assistant: Assistant, model: Model): Re
* @param model - The model being used
* @returns XAI-specific reasoning parameters
*/
export function getXAIReasoningParams(assistant: Assistant, model: Model): Record<string, any> {
export function getXAIReasoningParams(assistant: Assistant, model: Model): Pick<XaiProviderOptions, 'reasoningEffort'> {
if (!isSupportedReasoningEffortGrokModel(model)) {
return {}
}
const { reasoning_effort: reasoningEffort } = getAssistantSettings(assistant)
if (!reasoningEffort) {
if (!reasoningEffort || reasoningEffort === 'none') {
return {}
}
// For XAI provider Grok models, use reasoningEffort parameter directly
return {
reasoningEffort
switch (reasoningEffort) {
case 'auto':
case 'minimal':
case 'medium':
return { reasoningEffort: 'low' }
case 'low':
case 'high':
return { reasoningEffort }
}
}
/**
* Get Bedrock reasoning parameters
*/
export function getBedrockReasoningParams(assistant: Assistant, model: Model): Record<string, any> {
export function getBedrockReasoningParams(
assistant: Assistant,
model: Model
): Pick<BedrockProviderOptions, 'reasoningConfig'> {
if (!isReasoningModel(model)) {
return {}
}
@@ -509,6 +542,14 @@ export function getBedrockReasoningParams(assistant: Assistant, model: Model): R
return {}
}
if (reasoningEffort === 'none') {
return {
reasoningConfig: {
type: 'disabled'
}
}
}
// Only apply thinking budget for Claude reasoning models
if (!isSupportedThinkingTokenClaudeModel(model)) {
return {}
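
Editor's note: a sketch of how the new 'none' effort propagates, with shapes taken from the hunks above; `assistant` and `claudeModel` are illustrative.

// With reasoning_effort === 'none' on an Anthropic reasoning model:
const params = getAnthropicReasoningParams(assistant, claudeModel)
// params => { thinking: { type: 'disabled' } }
const budget = getAnthropicThinkingBudget(assistant, claudeModel)
// budget === 0
// Bedrock mirrors this with { reasoningConfig: { type: 'disabled' } }, xAI
// returns {} so no reasoning params are sent, and GPT-5.1 passes
// reasoningEffort: 'none' straight through.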

Binary files not shown (four new image assets; sizes 19 KiB, 21 KiB, 20 KiB, 18 KiB).

View File

@@ -1,35 +1,120 @@
import 'emoji-picker-element'
import TwemojiCountryFlagsWoff2 from '@renderer/assets/fonts/country-flag-fonts/TwemojiCountryFlags.woff2?url'
import { useTheme } from '@renderer/context/ThemeProvider'
import type { LanguageVarious } from '@renderer/types'
import { polyfillCountryFlagEmojis } from 'country-flag-emoji-polyfill'
// i18n translations from emoji-picker-element
import de from 'emoji-picker-element/i18n/de'
import en from 'emoji-picker-element/i18n/en'
import es from 'emoji-picker-element/i18n/es'
import fr from 'emoji-picker-element/i18n/fr'
import ja from 'emoji-picker-element/i18n/ja'
import pt_PT from 'emoji-picker-element/i18n/pt_PT'
import ru_RU from 'emoji-picker-element/i18n/ru_RU'
import zh_CN from 'emoji-picker-element/i18n/zh_CN'
import type Picker from 'emoji-picker-element/picker'
import type { EmojiClickEvent, NativeEmoji } from 'emoji-picker-element/shared'
// Emoji data from emoji-picker-element-data (local, no CDN)
// Using CLDR format for full multi-language search support (28 languages)
import dataDE from 'emoji-picker-element-data/de/cldr/data.json?url'
import dataEN from 'emoji-picker-element-data/en/cldr/data.json?url'
import dataES from 'emoji-picker-element-data/es/cldr/data.json?url'
import dataFR from 'emoji-picker-element-data/fr/cldr/data.json?url'
import dataJA from 'emoji-picker-element-data/ja/cldr/data.json?url'
import dataPT from 'emoji-picker-element-data/pt/cldr/data.json?url'
import dataRU from 'emoji-picker-element-data/ru/cldr/data.json?url'
import dataZH from 'emoji-picker-element-data/zh/cldr/data.json?url'
import dataZH_HANT from 'emoji-picker-element-data/zh-hant/cldr/data.json?url'
import type { FC } from 'react'
import { useEffect, useRef } from 'react'
import { useTranslation } from 'react-i18next'
interface Props {
onEmojiClick: (emoji: string) => void
}
// Mapping from app locale to emoji-picker-element i18n
const i18nMap: Record<LanguageVarious, typeof en> = {
'en-US': en,
'zh-CN': zh_CN,
'zh-TW': zh_CN, // Closest available
'de-DE': de,
'el-GR': en, // No Greek available, fallback to English
'es-ES': es,
'fr-FR': fr,
'ja-JP': ja,
'pt-PT': pt_PT,
'ru-RU': ru_RU
}
// Mapping from app locale to emoji data URL
// Using the CLDR format provides native-language search support for all locales
const dataSourceMap: Record<LanguageVarious, string> = {
'en-US': dataEN,
'zh-CN': dataZH,
'zh-TW': dataZH_HANT,
'de-DE': dataDE,
'el-GR': dataEN, // No Greek CLDR available, fallback to English
'es-ES': dataES,
'fr-FR': dataFR,
'ja-JP': dataJA,
'pt-PT': dataPT,
'ru-RU': dataRU
}
// Mapping from app locale to emoji-picker-element locale string
// Must match the data source locale for proper IndexedDB caching
const localeMap: Record<LanguageVarious, string> = {
'en-US': 'en',
'zh-CN': 'zh',
'zh-TW': 'zh-hant',
'de-DE': 'de',
'el-GR': 'en',
'es-ES': 'es',
'fr-FR': 'fr',
'ja-JP': 'ja',
'pt-PT': 'pt',
'ru-RU': 'ru'
}
const EmojiPicker: FC<Props> = ({ onEmojiClick }) => {
const { theme } = useTheme()
const ref = useRef<HTMLDivElement>(null)
const { i18n } = useTranslation()
const ref = useRef<Picker>(null)
const currentLocale = i18n.language as LanguageVarious
useEffect(() => {
polyfillCountryFlagEmojis('Twemoji Mozilla', TwemojiCountryFlagsWoff2)
}, [])
// Configure picker with i18n and dataSource
useEffect(() => {
const refValue = ref.current
const picker = ref.current
if (picker) {
picker.i18n = i18nMap[currentLocale] || en
picker.dataSource = dataSourceMap[currentLocale] || dataEN
picker.locale = localeMap[currentLocale] || 'en'
}
}, [currentLocale])
if (refValue) {
const handleEmojiClick = (event: any) => {
useEffect(() => {
const picker = ref.current
if (picker) {
const handleEmojiClick = (event: EmojiClickEvent) => {
event.stopPropagation()
onEmojiClick(event.detail.unicode || event.detail.emoji.unicode)
const { detail } = event
// Use detail.unicode (with skin tone applied) or fall back to the native emoji's unicode
const unicode = detail.unicode || ('unicode' in detail.emoji ? (detail.emoji as NativeEmoji).unicode : '')
onEmojiClick(unicode)
}
// Add the event listener
refValue.addEventListener('emoji-click', handleEmojiClick)
picker.addEventListener('emoji-click', handleEmojiClick)
// Clean up the event listener
return () => {
refValue.removeEventListener('emoji-click', handleEmojiClick)
picker.removeEventListener('emoji-click', handleEmojiClick)
}
}
return
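
Editor's note: a hypothetical usage sketch of the refactored component; the surrounding state handling is an assumption, and only the onEmojiClick prop comes from the Props interface above.

import { useState } from 'react'

const AvatarField = () => {
  const [emoji, setEmoji] = useState('🙂')
  // The picker reports the clicked emoji (skin tone applied) as a string.
  return (
    <>
      <span>{emoji}</span>
      <EmojiPicker onEmojiClick={setEmoji} />
    </>
  )
}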

View File

@@ -1,5 +1,4 @@
import { loggerService } from '@logger'
import ClaudeIcon from '@renderer/assets/images/models/claude.png'
import { ErrorBoundary } from '@renderer/components/ErrorBoundary'
import { TopView } from '@renderer/components/TopView'
import { permissionModeCards } from '@renderer/config/agent'
@@ -9,7 +8,6 @@ import SelectAgentBaseModelButton from '@renderer/pages/home/components/SelectAg
import type {
AddAgentForm,
AgentEntity,
AgentType,
ApiModel,
BaseAgentForm,
PermissionMode,
@@ -17,30 +15,22 @@ import type {
UpdateAgentForm
} from '@renderer/types'
import { AgentConfigurationSchema, isAgentType } from '@renderer/types'
import { Avatar, Button, Input, Modal, Select } from 'antd'
import { Button, Input, Modal, Select } from 'antd'
import { AlertTriangleIcon } from 'lucide-react'
import type { ChangeEvent, FormEvent } from 'react'
import { useCallback, useEffect, useMemo, useRef, useState } from 'react'
import { useTranslation } from 'react-i18next'
import styled from 'styled-components'
import type { BaseOption } from './shared'
const { TextArea } = Input
const logger = loggerService.withContext('AddAgentPopup')
interface AgentTypeOption extends BaseOption {
type: 'type'
key: AgentEntity['type']
name: AgentEntity['name']
}
type AgentWithTools = AgentEntity & { tools?: Tool[] }
const buildAgentForm = (existing?: AgentWithTools): BaseAgentForm => ({
type: existing?.type ?? 'claude-code',
name: existing?.name ?? 'Claude Code',
name: existing?.name ?? 'Agent',
description: existing?.description,
instructions: existing?.instructions,
model: existing?.model ?? '',
@@ -100,54 +90,6 @@ const PopupContainer: React.FC<Props> = ({ agent, afterSubmit, resolve }) => {
})
}, [])
// add supported agents type here.
const agentConfig = useMemo(
() =>
[
{
type: 'type',
key: 'claude-code',
label: 'Claude Code',
name: 'Claude Code',
avatar: ClaudeIcon
}
] as const satisfies AgentTypeOption[],
[]
)
const agentOptions = useMemo(
() =>
agentConfig.map((option) => ({
value: option.key,
label: (
<OptionWrapper>
<Avatar src={option.avatar} size={24} />
<span>{option.label}</span>
</OptionWrapper>
)
})),
[agentConfig]
)
const onAgentTypeChange = useCallback(
(value: AgentType) => {
const prevConfig = agentConfig.find((config) => config.key === form.type)
let newName: string | undefined = form.name
if (prevConfig && prevConfig.name === form.name) {
const newConfig = agentConfig.find((config) => config.key === value)
if (newConfig) {
newName = newConfig.name
}
}
setForm((prev) => ({
...prev,
type: value,
name: newName
}))
},
[agentConfig, form.name, form.type]
)
const onNameChange = useCallback((e: ChangeEvent<HTMLInputElement>) => {
setForm((prev) => ({
...prev,
@@ -155,12 +97,12 @@ const PopupContainer: React.FC<Props> = ({ agent, afterSubmit, resolve }) => {
}))
}, [])
const onDescChange = useCallback((e: ChangeEvent<HTMLTextAreaElement>) => {
setForm((prev) => ({
...prev,
description: e.target.value
}))
}, [])
// const onDescChange = useCallback((e: ChangeEvent<HTMLTextAreaElement>) => {
// setForm((prev) => ({
// ...prev,
// description: e.target.value
// }))
// }, [])
const onInstChange = useCallback((e: ChangeEvent<HTMLTextAreaElement>) => {
setForm((prev) => ({
@@ -334,16 +276,6 @@ const PopupContainer: React.FC<Props> = ({ agent, afterSubmit, resolve }) => {
<StyledForm onSubmit={onSubmit}>
<FormContent>
<FormRow>
<FormItem style={{ flex: 1 }}>
<Label>{t('agent.type.label')}</Label>
<Select
value={form.type}
onChange={onAgentTypeChange}
options={agentOptions}
disabled={isEditing(agent)}
style={{ width: '100%' }}
/>
</FormItem>
<FormItem style={{ flex: 1 }}>
<Label>
{t('common.name')} <RequiredMark>*</RequiredMark>
@@ -363,7 +295,7 @@ const PopupContainer: React.FC<Props> = ({ agent, afterSubmit, resolve }) => {
avatarSize={24}
iconSize={16}
buttonStyle={{
padding: '8px 12px',
padding: '3px 8px',
width: '100%',
border: '1px solid var(--color-border)',
borderRadius: 6,
@@ -382,7 +314,6 @@ const PopupContainer: React.FC<Props> = ({ agent, afterSubmit, resolve }) => {
onChange={onPermissionModeChange}
style={{ width: '100%' }}
placeholder={t('agent.settings.tooling.permissionMode.placeholder', 'Select permission mode')}
dropdownStyle={{ minWidth: '500px' }}
optionLabelProp="label">
{permissionModeCards.map((item) => (
<Select.Option key={item.mode} value={item.mode} label={t(item.titleKey, item.titleFallback)}>
@@ -438,10 +369,10 @@ const PopupContainer: React.FC<Props> = ({ agent, afterSubmit, resolve }) => {
<TextArea rows={3} value={form.instructions ?? ''} onChange={onInstChange} />
</FormItem>
<FormItem>
{/* <FormItem>
<Label>{t('common.description')}</Label>
<TextArea rows={2} value={form.description ?? ''} onChange={onDescChange} />
</FormItem>
<TextArea rows={1} value={form.description ?? ''} onChange={onDescChange} />
</FormItem> */}
</FormContent>
<FormFooter>
@@ -575,14 +506,7 @@ const FormFooter = styled.div`
display: flex;
justify-content: flex-end;
gap: 8px;
padding-top: 16px;
border-top: 1px solid var(--color-border);
`
const OptionWrapper = styled.div`
display: flex;
align-items: center;
gap: 8px;
padding: 10px;
`
const PermissionOptionWrapper = styled.div`

View File

@@ -1,6 +1,12 @@
import { describe, expect, it, vi } from 'vitest'
import { isDoubaoSeedAfter251015, isDoubaoThinkingAutoModel, isLingReasoningModel } from '../models/reasoning'
import {
isDoubaoSeedAfter251015,
isDoubaoThinkingAutoModel,
isGeminiReasoningModel,
isLingReasoningModel,
isSupportedThinkingTokenGeminiModel
} from '../models/reasoning'
vi.mock('@renderer/store', () => ({
default: {
@@ -231,3 +237,284 @@ describe('Ling Models', () => {
})
})
})
describe('Gemini Models', () => {
describe('isSupportedThinkingTokenGeminiModel', () => {
it('should return true for gemini 2.5 models', () => {
expect(
isSupportedThinkingTokenGeminiModel({
id: 'gemini-2.5-flash',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isSupportedThinkingTokenGeminiModel({
id: 'gemini-2.5-pro',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isSupportedThinkingTokenGeminiModel({
id: 'gemini-2.5-flash-latest',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isSupportedThinkingTokenGeminiModel({
id: 'gemini-2.5-pro-latest',
name: '',
provider: '',
group: ''
})
).toBe(true)
})
it('should return true for gemini latest models', () => {
expect(
isSupportedThinkingTokenGeminiModel({
id: 'gemini-flash-latest',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isSupportedThinkingTokenGeminiModel({
id: 'gemini-pro-latest',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isSupportedThinkingTokenGeminiModel({
id: 'gemini-flash-lite-latest',
name: '',
provider: '',
group: ''
})
).toBe(true)
})
it('should return true for gemini 3 models', () => {
// Preview versions
expect(
isSupportedThinkingTokenGeminiModel({
id: 'gemini-3-pro-preview',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isSupportedThinkingTokenGeminiModel({
id: 'google/gemini-3-pro-preview',
name: '',
provider: '',
group: ''
})
).toBe(true)
// Future stable versions
expect(
isSupportedThinkingTokenGeminiModel({
id: 'gemini-3-flash',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isSupportedThinkingTokenGeminiModel({
id: 'gemini-3-pro',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isSupportedThinkingTokenGeminiModel({
id: 'google/gemini-3-flash',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isSupportedThinkingTokenGeminiModel({
id: 'google/gemini-3-pro',
name: '',
provider: '',
group: ''
})
).toBe(true)
})
it('should return false for image and tts models', () => {
expect(
isSupportedThinkingTokenGeminiModel({
id: 'gemini-2.5-flash-image',
name: '',
provider: '',
group: ''
})
).toBe(false)
expect(
isSupportedThinkingTokenGeminiModel({
id: 'gemini-2.5-flash-preview-tts',
name: '',
provider: '',
group: ''
})
).toBe(false)
})
it('should return false for older gemini models', () => {
expect(
isSupportedThinkingTokenGeminiModel({
id: 'gemini-1.5-flash',
name: '',
provider: '',
group: ''
})
).toBe(false)
expect(
isSupportedThinkingTokenGeminiModel({
id: 'gemini-1.5-pro',
name: '',
provider: '',
group: ''
})
).toBe(false)
expect(
isSupportedThinkingTokenGeminiModel({
id: 'gemini-1.0-pro',
name: '',
provider: '',
group: ''
})
).toBe(false)
})
})
describe('isGeminiReasoningModel', () => {
it('should return true for gemini thinking models', () => {
expect(
isGeminiReasoningModel({
id: 'gemini-2.0-flash-thinking',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isGeminiReasoningModel({
id: 'gemini-thinking-exp',
name: '',
provider: '',
group: ''
})
).toBe(true)
})
it('should return true for supported thinking token gemini models', () => {
expect(
isGeminiReasoningModel({
id: 'gemini-2.5-flash',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isGeminiReasoningModel({
id: 'gemini-2.5-pro',
name: '',
provider: '',
group: ''
})
).toBe(true)
})
it('should return true for gemini-3 models', () => {
// Preview versions
expect(
isGeminiReasoningModel({
id: 'gemini-3-pro-preview',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isGeminiReasoningModel({
id: 'google/gemini-3-pro-preview',
name: '',
provider: '',
group: ''
})
).toBe(true)
// Future stable versions
expect(
isGeminiReasoningModel({
id: 'gemini-3-flash',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isGeminiReasoningModel({
id: 'gemini-3-pro',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isGeminiReasoningModel({
id: 'google/gemini-3-flash',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isGeminiReasoningModel({
id: 'google/gemini-3-pro',
name: '',
provider: '',
group: ''
})
).toBe(true)
})
it('should return false for older gemini models without thinking', () => {
expect(
isGeminiReasoningModel({
id: 'gemini-1.5-flash',
name: '',
provider: '',
group: ''
})
).toBe(false)
expect(
isGeminiReasoningModel({
id: 'gemini-1.5-pro',
name: '',
provider: '',
group: ''
})
).toBe(false)
})
it('should return false for undefined model', () => {
expect(isGeminiReasoningModel(undefined)).toBe(false)
})
})
})

View File

@@ -0,0 +1,167 @@
import { describe, expect, it, vi } from 'vitest'
import { isVisionModel } from '../models/vision'
vi.mock('@renderer/store', () => ({
default: {
getState: () => ({
llm: {
settings: {}
}
})
}
}))
// FIXME: Unclear why this import is needed; possibly a circular dependency somewhere
vi.mock('@renderer/services/AssistantService.ts', () => ({
getDefaultAssistant: () => {
return {
id: 'default',
name: 'default',
emoji: '😀',
prompt: '',
topics: [],
messages: [],
type: 'assistant',
regularPhrases: [],
settings: {}
}
},
getProviderByModel: () => null
}))
describe('isVisionModel', () => {
describe('Gemini Models', () => {
it('should return true for gemini 1.5 models', () => {
expect(
isVisionModel({
id: 'gemini-1.5-flash',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isVisionModel({
id: 'gemini-1.5-pro',
name: '',
provider: '',
group: ''
})
).toBe(true)
})
it('should return true for gemini 2.x models', () => {
expect(
isVisionModel({
id: 'gemini-2.0-flash',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isVisionModel({
id: 'gemini-2.0-pro',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isVisionModel({
id: 'gemini-2.5-flash',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isVisionModel({
id: 'gemini-2.5-pro',
name: '',
provider: '',
group: ''
})
).toBe(true)
})
it('should return true for gemini latest models', () => {
expect(
isVisionModel({
id: 'gemini-flash-latest',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isVisionModel({
id: 'gemini-pro-latest',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isVisionModel({
id: 'gemini-flash-lite-latest',
name: '',
provider: '',
group: ''
})
).toBe(true)
})
it('should return true for gemini 3 models', () => {
// Preview versions
expect(
isVisionModel({
id: 'gemini-3-pro-preview',
name: '',
provider: '',
group: ''
})
).toBe(true)
// Future stable versions
expect(
isVisionModel({
id: 'gemini-3-flash',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isVisionModel({
id: 'gemini-3-pro',
name: '',
provider: '',
group: ''
})
).toBe(true)
})
it('should return true for gemini exp models', () => {
expect(
isVisionModel({
id: 'gemini-exp-1206',
name: '',
provider: '',
group: ''
})
).toBe(true)
})
it('should return false for gemini 1.0 models', () => {
expect(
isVisionModel({
id: 'gemini-1.0-pro',
name: '',
provider: '',
group: ''
})
).toBe(false)
})
})
})

View File

@@ -0,0 +1,64 @@
import { describe, expect, it, vi } from 'vitest'
import { GEMINI_SEARCH_REGEX } from '../models/websearch'
vi.mock('@renderer/store', () => ({
default: {
getState: () => ({
llm: {
settings: {}
}
})
}
}))
// FIXME: Unclear why this import is needed; possibly a circular dependency somewhere
vi.mock('@renderer/services/AssistantService.ts', () => ({
getDefaultAssistant: () => {
return {
id: 'default',
name: 'default',
emoji: '😀',
prompt: '',
topics: [],
messages: [],
type: 'assistant',
regularPhrases: [],
settings: {}
}
},
getProviderByModel: () => null
}))
describe('Gemini Search Models', () => {
describe('GEMINI_SEARCH_REGEX', () => {
it('should match gemini 2.x models', () => {
expect(GEMINI_SEARCH_REGEX.test('gemini-2.0-flash')).toBe(true)
expect(GEMINI_SEARCH_REGEX.test('gemini-2.0-pro')).toBe(true)
expect(GEMINI_SEARCH_REGEX.test('gemini-2.5-flash')).toBe(true)
expect(GEMINI_SEARCH_REGEX.test('gemini-2.5-pro')).toBe(true)
expect(GEMINI_SEARCH_REGEX.test('gemini-2.5-flash-latest')).toBe(true)
expect(GEMINI_SEARCH_REGEX.test('gemini-2.5-pro-latest')).toBe(true)
})
it('should match gemini latest models', () => {
expect(GEMINI_SEARCH_REGEX.test('gemini-flash-latest')).toBe(true)
expect(GEMINI_SEARCH_REGEX.test('gemini-pro-latest')).toBe(true)
expect(GEMINI_SEARCH_REGEX.test('gemini-flash-lite-latest')).toBe(true)
})
it('should match gemini 3 models', () => {
// Preview versions
expect(GEMINI_SEARCH_REGEX.test('gemini-3-pro-preview')).toBe(true)
// Future stable versions
expect(GEMINI_SEARCH_REGEX.test('gemini-3-flash')).toBe(true)
expect(GEMINI_SEARCH_REGEX.test('gemini-3-pro')).toBe(true)
})
it('should not match older gemini models', () => {
expect(GEMINI_SEARCH_REGEX.test('gemini-1.5-flash')).toBe(false)
expect(GEMINI_SEARCH_REGEX.test('gemini-1.5-pro')).toBe(false)
expect(GEMINI_SEARCH_REGEX.test('gemini-1.0-pro')).toBe(false)
})
})
})

View File

@@ -59,6 +59,10 @@ import {
} from '@renderer/assets/images/models/gpt_dark.png'
import ChatGPTImageModelLogo from '@renderer/assets/images/models/gpt_image_1.png'
import ChatGPTo1ModelLogo from '@renderer/assets/images/models/gpt_o1.png'
import GPT51ModelLogo from '@renderer/assets/images/models/gpt-5.1.png'
import GPT51ChatModelLogo from '@renderer/assets/images/models/gpt-5.1-chat.png'
import GPT51CodexModelLogo from '@renderer/assets/images/models/gpt-5.1-codex.png'
import GPT51CodexMiniModelLogo from '@renderer/assets/images/models/gpt-5.1-codex-mini.png'
import GPT5ModelLogo from '@renderer/assets/images/models/gpt-5.png'
import GPT5ChatModelLogo from '@renderer/assets/images/models/gpt-5-chat.png'
import GPT5CodexModelLogo from '@renderer/assets/images/models/gpt-5-codex.png'
@@ -182,6 +186,10 @@ export function getModelLogoById(modelId: string): string | undefined {
'gpt-5-nano': GPT5NanoModelLogo,
'gpt-5-chat': GPT5ChatModelLogo,
'gpt-5-codex': GPT5CodexModelLogo,
'gpt-5.1-codex-mini': GPT51CodexMiniModelLogo,
'gpt-5.1-codex': GPT51CodexModelLogo,
'gpt-5.1-chat': GPT51ChatModelLogo,
'gpt-5.1': GPT51ModelLogo,
'gpt-5': GPT5ModelLogo,
gpts: isLight ? ChatGPT4ModelLogo : ChatGPT4ModelLogoDark,
'gpt-oss(?:-[\\w-]+)': isLight ? ChatGptModelLogo : ChatGptModelLogoDark,

View File

@@ -8,7 +8,7 @@ import type {
import { getLowerBaseModelName, isUserSelectedModelType } from '@renderer/utils'
import { isEmbeddingModel, isRerankModel } from './embedding'
import { isGPT5SeriesModel } from './utils'
import { isGPT5ProModel, isGPT5SeriesModel, isGPT51SeriesModel } from './utils'
import { isTextToImageModel } from './vision'
import { GEMINI_FLASH_MODEL_REGEX, isOpenAIDeepResearchModel } from './websearch'
@@ -24,6 +24,9 @@ export const MODEL_SUPPORTED_REASONING_EFFORT: ReasoningEffortConfig = {
openai_deep_research: ['medium'] as const,
gpt5: ['minimal', 'low', 'medium', 'high'] as const,
gpt5_codex: ['low', 'medium', 'high'] as const,
gpt5_1: ['none', 'low', 'medium', 'high'] as const,
gpt5_1_codex: ['none', 'medium', 'high'] as const,
gpt5pro: ['high'] as const,
grok: ['low', 'high'] as const,
grok4_fast: ['auto'] as const,
gemini: ['low', 'medium', 'high', 'auto'] as const,
@@ -41,24 +44,27 @@ export const MODEL_SUPPORTED_REASONING_EFFORT: ReasoningEffortConfig = {
// Map from model type to its supported thinking options
export const MODEL_SUPPORTED_OPTIONS: ThinkingOptionConfig = {
default: ['off', ...MODEL_SUPPORTED_REASONING_EFFORT.default] as const,
default: ['none', ...MODEL_SUPPORTED_REASONING_EFFORT.default] as const,
o: MODEL_SUPPORTED_REASONING_EFFORT.o,
openai_deep_research: MODEL_SUPPORTED_REASONING_EFFORT.openai_deep_research,
gpt5: [...MODEL_SUPPORTED_REASONING_EFFORT.gpt5] as const,
gpt5pro: MODEL_SUPPORTED_REASONING_EFFORT.gpt5pro,
gpt5_codex: MODEL_SUPPORTED_REASONING_EFFORT.gpt5_codex,
gpt5_1: MODEL_SUPPORTED_REASONING_EFFORT.gpt5_1,
gpt5_1_codex: MODEL_SUPPORTED_REASONING_EFFORT.gpt5_1_codex,
grok: MODEL_SUPPORTED_REASONING_EFFORT.grok,
grok4_fast: ['off', ...MODEL_SUPPORTED_REASONING_EFFORT.grok4_fast] as const,
gemini: ['off', ...MODEL_SUPPORTED_REASONING_EFFORT.gemini] as const,
grok4_fast: ['none', ...MODEL_SUPPORTED_REASONING_EFFORT.grok4_fast] as const,
gemini: ['none', ...MODEL_SUPPORTED_REASONING_EFFORT.gemini] as const,
gemini_pro: MODEL_SUPPORTED_REASONING_EFFORT.gemini_pro,
qwen: ['off', ...MODEL_SUPPORTED_REASONING_EFFORT.qwen] as const,
qwen: ['none', ...MODEL_SUPPORTED_REASONING_EFFORT.qwen] as const,
qwen_thinking: MODEL_SUPPORTED_REASONING_EFFORT.qwen_thinking,
doubao: ['off', ...MODEL_SUPPORTED_REASONING_EFFORT.doubao] as const,
doubao_no_auto: ['off', ...MODEL_SUPPORTED_REASONING_EFFORT.doubao_no_auto] as const,
doubao: ['none', ...MODEL_SUPPORTED_REASONING_EFFORT.doubao] as const,
doubao_no_auto: ['none', ...MODEL_SUPPORTED_REASONING_EFFORT.doubao_no_auto] as const,
doubao_after_251015: MODEL_SUPPORTED_REASONING_EFFORT.doubao_after_251015,
hunyuan: ['off', ...MODEL_SUPPORTED_REASONING_EFFORT.hunyuan] as const,
zhipu: ['off', ...MODEL_SUPPORTED_REASONING_EFFORT.zhipu] as const,
hunyuan: ['none', ...MODEL_SUPPORTED_REASONING_EFFORT.hunyuan] as const,
zhipu: ['none', ...MODEL_SUPPORTED_REASONING_EFFORT.zhipu] as const,
perplexity: MODEL_SUPPORTED_REASONING_EFFORT.perplexity,
deepseek_hybrid: ['off', ...MODEL_SUPPORTED_REASONING_EFFORT.deepseek_hybrid] as const
deepseek_hybrid: ['none', ...MODEL_SUPPORTED_REASONING_EFFORT.deepseek_hybrid] as const
} as const
const withModelIdAndNameAsId = <T>(model: Model, fn: (model: Model) => T): { idResult: T; nameResult: T } => {
@@ -75,11 +81,20 @@ const _getThinkModelType = (model: Model): ThinkingModelType => {
if (isOpenAIDeepResearchModel(model)) {
return 'openai_deep_research'
}
if (isGPT5SeriesModel(model)) {
if (isGPT51SeriesModel(model)) {
if (modelId.includes('codex')) {
thinkingModelType = 'gpt5_1_codex'
} else {
thinkingModelType = 'gpt5_1'
}
} else if (isGPT5SeriesModel(model)) {
if (modelId.includes('codex')) {
thinkingModelType = 'gpt5_codex'
} else {
thinkingModelType = 'gpt5'
if (isGPT5ProModel(model)) {
thinkingModelType = 'gpt5pro'
}
}
} else if (isSupportedReasoningEffortOpenAIModel(model)) {
thinkingModelType = 'o'
@@ -239,7 +254,7 @@ export function isGeminiReasoningModel(model?: Model): boolean {
// Regex for Gemini models that support thinking mode
export const GEMINI_THINKING_MODEL_REGEX =
/gemini-(?:2\.5.*(?:-latest)?|flash-latest|pro-latest|flash-lite-latest)(?:-[\w-]+)*$/i
/gemini-(?:2\.5.*(?:-latest)?|3-(?:flash|pro)(?:-preview)?|flash-latest|pro-latest|flash-lite-latest)(?:-[\w-]+)*$/i
export const isSupportedThinkingTokenGeminiModel = (model: Model): boolean => {
const modelId = getLowerBaseModelName(model.id, '/')
@@ -526,7 +541,7 @@ export function isSupportedReasoningEffortOpenAIModel(model: Model): boolean {
modelId.includes('o3') ||
modelId.includes('o4') ||
modelId.includes('gpt-oss') ||
(isGPT5SeriesModel(model) && !modelId.includes('chat'))
((isGPT5SeriesModel(model) || isGPT51SeriesModel(model)) && !modelId.includes('chat'))
)
}
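
Editor's note: a test-style sketch of the GPT-5.1 branching above, in the vitest idiom this PR uses elsewhere; `_getThinkModelType` is module-private here, so this assumes it were exported for testing.

expect(_getThinkModelType({ id: 'gpt-5.1', name: '', provider: '', group: '' })).toBe('gpt5_1')
expect(_getThinkModelType({ id: 'gpt-5.1-codex', name: '', provider: '', group: '' })).toBe('gpt5_1_codex')
expect(_getThinkModelType({ id: 'gpt-5-codex', name: '', provider: '', group: '' })).toBe('gpt5_codex')
expect(_getThinkModelType({ id: 'gpt-5-pro', name: '', provider: '', group: '' })).toBe('gpt5pro')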

View File

@@ -54,7 +54,7 @@ export function isSupportedFlexServiceTier(model: Model): boolean {
export function isSupportVerbosityModel(model: Model): boolean {
const modelId = getLowerBaseModelName(model.id)
- return isGPT5SeriesModel(model) && !modelId.includes('chat')
+ return (isGPT5SeriesModel(model) || isGPT51SeriesModel(model)) && !modelId.includes('chat')
}
export function isOpenAIChatCompletionOnlyModel(model: Model): boolean {
@@ -227,12 +227,32 @@ export const isNotSupportSystemMessageModel = (model: Model): boolean => {
export const isGPT5SeriesModel = (model: Model) => {
const modelId = getLowerBaseModelName(model.id)
- return modelId.includes('gpt-5')
+ return modelId.includes('gpt-5') && !modelId.includes('gpt-5.1')
}
export const isGPT5SeriesReasoningModel = (model: Model) => {
const modelId = getLowerBaseModelName(model.id)
- return modelId.includes('gpt-5') && !modelId.includes('chat')
+ return isGPT5SeriesModel(model) && !modelId.includes('chat')
}
export const isGPT51SeriesModel = (model: Model) => {
const modelId = getLowerBaseModelName(model.id)
return modelId.includes('gpt-5.1')
}
// GPT-5 verbosity configuration
// gpt-5-pro only supports 'high', other GPT-5 models support all levels
export const MODEL_SUPPORTED_VERBOSITY: Record<string, ('low' | 'medium' | 'high')[]> = {
'gpt-5-pro': ['high'],
default: ['low', 'medium', 'high']
}
export const getModelSupportedVerbosity = (model: Model): ('low' | 'medium' | 'high')[] => {
const modelId = getLowerBaseModelName(model.id)
if (modelId.includes('gpt-5-pro')) {
return MODEL_SUPPORTED_VERBOSITY['gpt-5-pro']
}
return MODEL_SUPPORTED_VERBOSITY.default
}
export const isGeminiModel = (model: Model) => {
@@ -251,3 +271,8 @@ export const ZHIPU_RESULT_TOKENS = ['<|begin_of_box|>', '<|end_of_box|>'] as con
export const agentModelFilter = (model: Model): boolean => {
return !isEmbeddingModel(model) && !isRerankModel(model) && !isTextToImageModel(model)
}
export const isGPT5ProModel = (model: Model) => {
const modelId = getLowerBaseModelName(model.id)
return modelId.includes('gpt-5-pro')
}
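Note the mutual exclusion: `isGPT5SeriesModel` now rejects `gpt-5.1` ids, since `'gpt-5.1'` contains the substring `'gpt-5'` and would otherwise satisfy both predicates. A quick standalone check of the intended behavior (simplified string versions of the predicates, for illustration only):

```ts
const isGPT5Series = (id: string) => id.includes('gpt-5') && !id.includes('gpt-5.1')
const isGPT51Series = (id: string) => id.includes('gpt-5.1')
const supportedVerbosity = (id: string) =>
  id.includes('gpt-5-pro') ? ['high'] : ['low', 'medium', 'high']

console.log(isGPT5Series('gpt-5.1-codex'))   // false — handled by the 5.1 predicate
console.log(isGPT51Series('gpt-5.1-codex'))  // true
console.log(supportedVerbosity('gpt-5-pro')) // ['high'] only
```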

View File

@@ -12,6 +12,7 @@ const visionAllowedModels = [
'gemini-1\\.5',
'gemini-2\\.0',
'gemini-2\\.5',
'gemini-3-(?:flash|pro)(?:-preview)?',
'gemini-(flash|pro|flash-lite)-latest',
'gemini-exp',
'claude-3',
@@ -64,13 +65,13 @@ const visionExcludedModels = [
'o1-preview',
'AIDC-AI/Marco-o1'
]
- export const VISION_REGEX = new RegExp(
+ const VISION_REGEX = new RegExp(
`\\b(?!(?:${visionExcludedModels.join('|')})\\b)(${visionAllowedModels.join('|')})\\b`,
'i'
)
// For middleware to identify models that must use the dedicated Image API
- export const DEDICATED_IMAGE_MODELS = [
+ const DEDICATED_IMAGE_MODELS = [
'grok-2-image',
'grok-2-image-1212',
'grok-2-image-latest',
@@ -79,7 +80,7 @@ export const DEDICATED_IMAGE_MODELS = [
'gpt-image-1'
]
- export const IMAGE_ENHANCEMENT_MODELS = [
+ const IMAGE_ENHANCEMENT_MODELS = [
'grok-2-image(?:-[\\w-]+)?',
'qwen-image-edit',
'gpt-image-1',
@@ -90,9 +91,9 @@ export const IMAGE_ENHANCEMENT_MODELS = [
const IMAGE_ENHANCEMENT_MODELS_REGEX = new RegExp(IMAGE_ENHANCEMENT_MODELS.join('|'), 'i')
// Models that should auto-enable image generation button when selected
- export const AUTO_ENABLE_IMAGE_MODELS = ['gemini-2.5-flash-image', ...DEDICATED_IMAGE_MODELS]
+ const AUTO_ENABLE_IMAGE_MODELS = ['gemini-2.5-flash-image', ...DEDICATED_IMAGE_MODELS]
- export const OPENAI_TOOL_USE_IMAGE_GENERATION_MODELS = [
+ const OPENAI_TOOL_USE_IMAGE_GENERATION_MODELS = [
'o3',
'gpt-4o',
'gpt-4o-mini',
@@ -102,9 +103,9 @@ export const OPENAI_TOOL_USE_IMAGE_GENERATION_MODELS = [
'gpt-5'
]
- export const OPENAI_IMAGE_GENERATION_MODELS = [...OPENAI_TOOL_USE_IMAGE_GENERATION_MODELS, 'gpt-image-1']
+ const OPENAI_IMAGE_GENERATION_MODELS = [...OPENAI_TOOL_USE_IMAGE_GENERATION_MODELS, 'gpt-image-1']
- export const GENERATE_IMAGE_MODELS = [
+ const GENERATE_IMAGE_MODELS = [
'gemini-2.0-flash-exp',
'gemini-2.0-flash-exp-image-generation',
'gemini-2.0-flash-preview-image-generation',
@@ -169,22 +170,23 @@ export function isPureGenerateImageModel(model: Model): boolean {
}
// Text to image models
- export const TEXT_TO_IMAGE_REGEX = /flux|diffusion|stabilityai|sd-|dall|cogview|janus|midjourney|mj-|image|gpt-image/i
+ const TEXT_TO_IMAGE_REGEX = /flux|diffusion|stabilityai|sd-|dall|cogview|janus|midjourney|mj-|image|gpt-image/i
export function isTextToImageModel(model: Model): boolean {
const modelId = getLowerBaseModelName(model.id)
return TEXT_TO_IMAGE_REGEX.test(modelId)
}
- export function isNotSupportedImageSizeModel(model?: Model): boolean {
- if (!model) {
- return false
- }
+ // It's not used now
+ // export function isNotSupportedImageSizeModel(model?: Model): boolean {
+ // if (!model) {
+ // return false
+ // }
- const baseName = getLowerBaseModelName(model.id, '/')
+ // const baseName = getLowerBaseModelName(model.id, '/')
- return baseName.includes('grok-2-image')
- }
+ // return baseName.includes('grok-2-image')
+ // }
/**
* Determine whether a model supports image enhancement (editing, enhancement, restoration, etc.)

View File

@@ -3,7 +3,13 @@ import type { Model } from '@renderer/types'
import { SystemProviderIds } from '@renderer/types'
import { getLowerBaseModelName, isUserSelectedModelType } from '@renderer/utils'
- import { isGeminiProvider, isNewApiProvider, isOpenAICompatibleProvider, isOpenAIProvider } from '../providers'
+ import {
+ isGeminiProvider,
+ isNewApiProvider,
+ isOpenAICompatibleProvider,
+ isOpenAIProvider,
+ isVertexAiProvider
+ } from '../providers'
import { isEmbeddingModel, isRerankModel } from './embedding'
import { isAnthropicModel } from './utils'
import { isPureGenerateImageModel, isTextToImageModel } from './vision'
@@ -16,7 +22,7 @@ export const CLAUDE_SUPPORTED_WEBSEARCH_REGEX = new RegExp(
export const GEMINI_FLASH_MODEL_REGEX = new RegExp('gemini.*-flash.*$')
export const GEMINI_SEARCH_REGEX = new RegExp(
- 'gemini-(?:2.*(?:-latest)?|flash-latest|pro-latest|flash-lite-latest)(?:-[\\w-]+)*$',
+ 'gemini-(?:2.*(?:-latest)?|3-(?:flash|pro)(?:-preview)?|flash-latest|pro-latest|flash-lite-latest)(?:-[\\w-]+)*$',
'i'
)
@@ -70,7 +76,7 @@ export function isWebSearchModel(model: Model): boolean {
// Not supported on Bedrock or Vertex
if (
isAnthropicModel(model) &&
- (provider.id === SystemProviderIds['aws-bedrock'] || provider.id === SystemProviderIds.vertexai)
+ !(provider.id === SystemProviderIds['aws-bedrock'] || provider.id === SystemProviderIds.vertexai)
) {
return CLAUDE_SUPPORTED_WEBSEARCH_REGEX.test(modelId)
}
@@ -107,7 +113,7 @@ export function isWebSearchModel(model: Model): boolean {
}
}
- if (isGeminiProvider(provider) || provider.id === SystemProviderIds.vertexai) {
+ if (isGeminiProvider(provider) || isVertexAiProvider(provider)) {
return GEMINI_SEARCH_REGEX.test(modelId)
}
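Two behavioral changes land in this file: the search regex now also accepts the `gemini-3-flash` / `gemini-3-pro` families (optionally suffixed `-preview`), and the Anthropic condition is inverted so Claude web-search support is only claimed when the provider is not Bedrock or Vertex, which is what the comment above it always said. The regex change can be verified in isolation:

```ts
// Same pattern as the new GEMINI_SEARCH_REGEX above.
const GEMINI_SEARCH_REGEX = new RegExp(
  'gemini-(?:2.*(?:-latest)?|3-(?:flash|pro)(?:-preview)?|flash-latest|pro-latest|flash-lite-latest)(?:-[\\w-]+)*$',
  'i'
)

for (const id of ['gemini-3-pro-preview', 'gemini-3-flash', 'gemini-2.5-pro', 'gemini-1.5-pro']) {
  console.log(id, GEMINI_SEARCH_REGEX.test(id))
}
// gemini-3-pro-preview true · gemini-3-flash true · gemini-2.5-pro true · gemini-1.5-pro false
```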

View File

@@ -67,7 +67,7 @@ import type {
SystemProvider,
SystemProviderId
} from '@renderer/types'
- import { isSystemProvider, OpenAIServiceTiers } from '@renderer/types'
+ import { isSystemProvider, OpenAIServiceTiers, SystemProviderIds } from '@renderer/types'
import { TOKENFLUX_HOST } from './constant'
import { glm45FlashModel, qwen38bModel, SYSTEM_MODELS } from './models'
@@ -275,6 +275,7 @@ export const SYSTEM_PROVIDERS_CONFIG: Record<SystemProviderId, SystemProvider> =
type: 'openai',
apiKey: '',
apiHost: 'https://api.qnaigc.com',
anthropicApiHost: 'https://api.qnaigc.com',
models: SYSTEM_MODELS.qiniu,
isSystem: true,
enabled: false
@@ -665,6 +666,7 @@ export const SYSTEM_PROVIDERS_CONFIG: Record<SystemProviderId, SystemProvider> =
type: 'openai',
apiKey: '',
apiHost: 'https://api.longcat.chat/openai',
anthropicApiHost: 'https://api.longcat.chat/anthropic',
models: SYSTEM_MODELS.longcat,
isSystem: true,
enabled: false
@@ -684,7 +686,7 @@ export const SYSTEM_PROVIDERS_CONFIG: Record<SystemProviderId, SystemProvider> =
name: 'AI Gateway',
type: 'ai-gateway',
apiKey: '',
- apiHost: 'https://ai-gateway.vercel.sh/v1',
+ apiHost: 'https://ai-gateway.vercel.sh/v1/ai',
models: [],
isSystem: true,
enabled: false
@@ -1519,7 +1521,10 @@ const SUPPORT_URL_CONTEXT_PROVIDER_TYPES = [
] as const satisfies ProviderType[]
export const isSupportUrlContextProvider = (provider: Provider) => {
- return SUPPORT_URL_CONTEXT_PROVIDER_TYPES.some((type) => type === provider.type)
+ return (
+ SUPPORT_URL_CONTEXT_PROVIDER_TYPES.some((type) => type === provider.type) ||
+ provider.id === SystemProviderIds.cherryin
+ )
}
const SUPPORT_GEMINI_NATIVE_WEB_SEARCH_PROVIDERS = ['gemini', 'vertexai'] as const satisfies SystemProviderId[]
@@ -1566,10 +1571,18 @@ export function isGeminiProvider(provider: Provider): boolean {
return provider.type === 'gemini'
}
export function isVertexAiProvider(provider: Provider): boolean {
return provider.type === 'vertexai'
}
export function isAIGatewayProvider(provider: Provider): boolean {
return provider.type === 'ai-gateway'
}
export function isAwsBedrockProvider(provider: Provider): boolean {
return provider.type === 'aws-bedrock'
}
const NOT_SUPPORT_API_VERSION_PROVIDERS = ['github', 'copilot', 'perplexity'] as const satisfies SystemProviderId[]
export const isSupportAPIVersionProvider = (provider: Provider) => {

View File

@@ -123,9 +123,9 @@ export function useAssistant(id: string) {
}
updateAssistantSettings({
- reasoning_effort: fallbackOption === 'off' ? undefined : fallbackOption,
- reasoning_effort_cache: fallbackOption === 'off' ? undefined : fallbackOption,
- qwenThinkMode: fallbackOption === 'off' ? undefined : true
+ reasoning_effort: fallbackOption === 'none' ? undefined : fallbackOption,
+ reasoning_effort_cache: fallbackOption === 'none' ? undefined : fallbackOption,
+ qwenThinkMode: fallbackOption === 'none' ? undefined : true
})
} else {
// For supported options, no longer update the cache.

View File

@@ -311,7 +311,7 @@ export const getHttpMessageLabel = (key: string): string => {
}
const reasoningEffortOptionsKeyMap: Record<ThinkingOption, string> = {
- off: 'assistants.settings.reasoning_effort.off',
+ none: 'assistants.settings.reasoning_effort.off',
minimal: 'assistants.settings.reasoning_effort.minimal',
high: 'assistants.settings.reasoning_effort.high',
low: 'assistants.settings.reasoning_effort.low',

View File

@@ -27,6 +27,9 @@
"null_id": "Agent ID is null."
}
},
"input": {
"placeholder": "Enter your message here, send with {{key}} - @ select path, / select command"
},
"list": {
"error": {
"failed": "Failed to list agents."

View File

@@ -27,6 +27,9 @@
"null_id": "智能体 ID 为空。"
}
},
"input": {
"placeholder": "在这里输入消息,按 {{key}} 发送 - @ 选择路径, / 选择命令"
},
"list": {
"error": {
"failed": "获取智能体列表失败"
@@ -4478,7 +4481,7 @@
"confirm": "确认",
"forward": "前进",
"multiple": "多选",
"noResult": "[to be translated]:No results found",
"noResult": "未找到结果",
"page": "翻页",
"select": "选择",
"title": "快捷菜单"

View File

@@ -27,6 +27,9 @@
"null_id": "代理程式 ID 為空。"
}
},
"input": {
"placeholder": "[to be translated]:Enter your message here, send with {{key}} - @ select path, / select command"
},
"list": {
"error": {
"failed": "無法列出代理程式。"
@@ -4478,7 +4481,7 @@
"confirm": "確認",
"forward": "前進",
"multiple": "多選",
"noResult": "[to be translated]:No results found",
"noResult": "未找到結果",
"page": "翻頁",
"select": "選擇",
"title": "快捷選單"

View File

@@ -27,6 +27,9 @@
"null_id": "Agent ID ist leer."
}
},
"input": {
"placeholder": "[to be translated]:Enter your message here, send with {{key}} - @ select path, / select command"
},
"list": {
"error": {
"failed": "Agent-Liste abrufen fehlgeschlagen"
@@ -4478,7 +4481,7 @@
"confirm": "Bestätigen",
"forward": "Vorwärts",
"multiple": "Mehrfachauswahl",
"noResult": "[to be translated]:No results found",
"noResult": "Keine Ergebnisse gefunden",
"page": "Seite umblättern",
"select": "Auswählen",
"title": "Schnellmenü"

View File

@@ -27,6 +27,9 @@
"null_id": "Το ID του πράκτορα είναι null."
}
},
"input": {
"placeholder": "[to be translated]:Enter your message here, send with {{key}} - @ select path, / select command"
},
"list": {
"error": {
"failed": "Αποτυχία καταχώρησης πρακτόρων."
@@ -4478,7 +4481,7 @@
"confirm": "Επιβεβαίωση",
"forward": "Μπρος",
"multiple": "Πολλαπλή επιλογή",
"noResult": "[to be translated]:No results found",
"noResult": "Δεν βρέθηκαν αποτελέσματα",
"page": "Σελίδα",
"select": "Επιλογή",
"title": "Γρήγορη Πρόσβαση"

View File

@@ -27,6 +27,9 @@
"null_id": "El ID del agente es nulo."
}
},
"input": {
"placeholder": "[to be translated]:Enter your message here, send with {{key}} - @ select path, / select command"
},
"list": {
"error": {
"failed": "Error al listar agentes."
@@ -4478,7 +4481,7 @@
"confirm": "Confirmar",
"forward": "Adelante",
"multiple": "Selección múltiple",
"noResult": "[to be translated]:No results found",
"noResult": "No se encontraron resultados",
"page": "Página",
"select": "Seleccionar",
"title": "Menú de acceso rápido"

View File

@@ -27,6 +27,9 @@
"null_id": "L'ID de l'agent est nul."
}
},
"input": {
"placeholder": "[to be translated]:Enter your message here, send with {{key}} - @ select path, / select command"
},
"list": {
"error": {
"failed": "Échec de la liste des agents."
@@ -4478,7 +4481,7 @@
"confirm": "Подтвердить",
"forward": "Вперед",
"multiple": "Множественный выбор",
"noResult": "[to be translated]:No results found",
"noResult": "Aucun résultat trouvé",
"page": "Перелистнуть страницу",
"select": "Выбрать",
"title": "Быстрое меню"

View File

@@ -27,6 +27,9 @@
"null_id": "エージェント ID が null です。"
}
},
"input": {
"placeholder": "[to be translated]:Enter your message here, send with {{key}} - @ select path, / select command"
},
"list": {
"error": {
"failed": "エージェントの一覧取得に失敗しました。"
@@ -4478,7 +4481,7 @@
"confirm": "確認",
"forward": "進む",
"multiple": "複数選択",
"noResult": "[to be translated]:No results found",
"noResult": "結果が見つかりません",
"page": "ページ",
"select": "選択",
"title": "クイックメニュー"

View File

@@ -27,6 +27,9 @@
"null_id": "O ID do agente é nulo."
}
},
"input": {
"placeholder": "[to be translated]:Enter your message here, send with {{key}} - @ select path, / select command"
},
"list": {
"error": {
"failed": "Falha ao listar agentes."
@@ -4478,7 +4481,7 @@
"confirm": "Confirmar",
"forward": "Avançar",
"multiple": "Múltipla Seleção",
"noResult": "[to be translated]:No results found",
"noResult": "Nenhum resultado encontrado",
"page": "Página",
"select": "Selecionar",
"title": "Menu de Atalho"

View File

@@ -27,6 +27,9 @@
"null_id": "ID агента равен null."
}
},
"input": {
"placeholder": "[to be translated]:Enter your message here, send with {{key}} - @ select path, / select command"
},
"list": {
"error": {
"failed": "Не удалось получить список агентов."
@@ -4478,7 +4481,7 @@
"confirm": "Подтвердить",
"forward": "Вперед",
"multiple": "Множественный выбор",
"noResult": "[to be translated]:No results found",
"noResult": "Результаты не найдены",
"page": "Страница",
"select": "Выбрать",
"title": "Быстрое меню"

View File

@@ -470,7 +470,7 @@ const AgentSessionInputbarInner: FC<InnerProps> = ({ assistant, agentId, session
)
const placeholderText = useMemo(
() =>
- t('chat.input.placeholder', {
+ t('agent.input.placeholder', {
key: getSendMessageShortcutLabel(sendMessageShortcut)
}),
[sendMessageShortcut, t]

View File

@@ -313,7 +313,7 @@ export const InputbarCore: FC<InputbarCoreProps> = ({
const isEnterPressed = event.key === 'Enter' && !event.nativeEvent.isComposing
if (isEnterPressed) {
- if (isSendMessageKeyPressed(event, sendMessageShortcut)) {
+ if (isSendMessageKeyPressed(event, sendMessageShortcut) && !cannotSend) {
handleSendMessage()
event.preventDefault()
return
@@ -359,6 +359,7 @@ export const InputbarCore: FC<InputbarCoreProps> = ({
translate,
handleToggleExpanded,
sendMessageShortcut,
cannotSend,
handleSendMessage,
setText,
setTimeoutTimer,

View File

@@ -36,7 +36,7 @@ const ThinkingButton: FC<Props> = ({ quickPanel, model, assistantId }): ReactEle
const { assistant, updateAssistantSettings } = useAssistant(assistantId)
const currentReasoningEffort = useMemo(() => {
- return assistant.settings?.reasoning_effort || 'off'
+ return assistant.settings?.reasoning_effort || 'none'
}, [assistant.settings?.reasoning_effort])
// Determine which option types the current model supports
@@ -46,21 +46,21 @@ const ThinkingButton: FC<Props> = ({ quickPanel, model, assistantId }): ReactEle
const supportedOptions: ThinkingOption[] = useMemo(() => {
if (modelType === 'doubao') {
if (isDoubaoThinkingAutoModel(model)) {
- return ['off', 'auto', 'high']
+ return ['none', 'auto', 'high']
}
- return ['off', 'high']
+ return ['none', 'high']
}
return MODEL_SUPPORTED_OPTIONS[modelType]
}, [model, modelType])
const onThinkingChange = useCallback(
(option?: ThinkingOption) => {
- const isEnabled = option !== undefined && option !== 'off'
+ const isEnabled = option !== undefined && option !== 'none'
// Then update the settings
if (!isEnabled) {
updateAssistantSettings({
- reasoning_effort: undefined,
- reasoning_effort_cache: undefined,
+ reasoning_effort: option,
+ reasoning_effort_cache: option,
qwenThinkMode: false
})
return
@@ -96,10 +96,10 @@ const ThinkingButton: FC<Props> = ({ quickPanel, model, assistantId }): ReactEle
}))
}, [currentReasoningEffort, supportedOptions, onThinkingChange])
- const isThinkingEnabled = currentReasoningEffort !== undefined && currentReasoningEffort !== 'off'
+ const isThinkingEnabled = currentReasoningEffort !== undefined && currentReasoningEffort !== 'none'
const disableThinking = useCallback(() => {
- onThinkingChange('off')
+ onThinkingChange('none')
}, [onThinkingChange])
const openQuickPanel = useCallback(() => {
@@ -116,7 +116,7 @@ const ThinkingButton: FC<Props> = ({ quickPanel, model, assistantId }): ReactEle
return
}
- if (isThinkingEnabled && supportedOptions.includes('off')) {
+ if (isThinkingEnabled && supportedOptions.includes('none')) {
disableThinking()
return
}
@@ -146,13 +146,13 @@ const ThinkingButton: FC<Props> = ({ quickPanel, model, assistantId }): ReactEle
<Tooltip
placement="top"
title={
- isThinkingEnabled && supportedOptions.includes('off')
+ isThinkingEnabled && supportedOptions.includes('none')
? t('common.close')
: t('assistants.settings.reasoning_effort.label')
}
mouseLeaveDelay={0}
arrow>
- <ActionIconButton onClick={handleOpenQuickPanel} active={currentReasoningEffort !== 'off'}>
+ <ActionIconButton onClick={handleOpenQuickPanel} active={currentReasoningEffort !== 'none'}>
{ThinkingIcon(currentReasoningEffort)}
</ActionIconButton>
</Tooltip>
@@ -178,7 +178,7 @@ const ThinkingIcon = (option?: ThinkingOption) => {
case 'auto':
IconComponent = MdiLightbulbAutoOutline
break
- case 'off':
+ case 'none':
IconComponent = MdiLightbulbOffOutline
break
default:

View File

@@ -1,4 +1,4 @@
- import { isGeminiModel } from '@renderer/config/models'
+ import { isAnthropicModel, isGeminiModel } from '@renderer/config/models'
import { isSupportUrlContextProvider } from '@renderer/config/providers'
import { defineTool, registerTool, TopicType } from '@renderer/pages/home/Inputbar/types'
import { getProviderByModel } from '@renderer/services/AssistantService'
@@ -10,9 +10,8 @@ const urlContextTool = defineTool({
label: (t) => t('chat.input.url_context'),
visibleInScopes: [TopicType.Chat],
condition: ({ model }) => {
- if (!isGeminiModel(model)) return false
const provider = getProviderByModel(model)
- return !!provider && isSupportUrlContextProvider(provider)
+ return !!provider && isSupportUrlContextProvider(provider) && (isGeminiModel(model) || isAnthropicModel(model))
},
render: ({ assistant }) => <UrlContextButton assistantId={assistant.id} />
})

View File

@@ -1,7 +1,6 @@
import type { CollapseProps } from 'antd'
import { Tag } from 'antd'
import { CheckCircle, Terminal, XCircle } from 'lucide-react'
- import { useMemo } from 'react'
import { ToolTitle } from './GenericTools'
import type { BashOutputToolInput, BashOutputToolOutput } from './types'
@@ -16,6 +15,63 @@ interface ParsedBashOutput {
tool_use_error?: string
}
const parseBashOutput = (output?: BashOutputToolOutput): ParsedBashOutput | null => {
if (!output) return null
try {
const parser = new DOMParser()
const hasToolError = output.includes('<tool_use_error>')
const xmlStr = output.includes('<status>') || hasToolError ? `<root>${output}</root>` : output
const xmlDoc = parser.parseFromString(xmlStr, 'application/xml')
const parserError = xmlDoc.querySelector('parsererror')
if (parserError) return null
const getElementText = (tagName: string): string | undefined => {
const element = xmlDoc.getElementsByTagName(tagName)[0]
return element?.textContent?.trim()
}
return {
status: getElementText('status'),
exit_code: getElementText('exit_code') ? parseInt(getElementText('exit_code')!) : undefined,
stdout: getElementText('stdout'),
stderr: getElementText('stderr'),
timestamp: getElementText('timestamp'),
tool_use_error: getElementText('tool_use_error')
}
} catch {
return null
}
}
const getStatusConfig = (parsedOutput: ParsedBashOutput | null) => {
if (!parsedOutput) return null
if (parsedOutput.tool_use_error) {
return {
color: 'danger',
icon: <XCircle className="h-3.5 w-3.5" />,
text: 'Error'
} as const
}
const isCompleted = parsedOutput.status === 'completed'
const isSuccess = parsedOutput.exit_code === 0
return {
color: isCompleted && isSuccess ? 'success' : isCompleted && !isSuccess ? 'danger' : 'warning',
icon:
isCompleted && isSuccess ? (
<CheckCircle className="h-3.5 w-3.5" />
) : isCompleted && !isSuccess ? (
<XCircle className="h-3.5 w-3.5" />
) : (
<Terminal className="h-3.5 w-3.5" />
),
text: isCompleted ? (isSuccess ? 'Success' : 'Failed') : 'Running'
} as const
}
export function BashOutputTool({
input,
output
@@ -23,73 +79,8 @@ export function BashOutputTool({
input: BashOutputToolInput
output?: BashOutputToolOutput
}): NonNullable<CollapseProps['items']>[number] {
// Parse the XML output
const parsedOutput = useMemo(() => {
if (!output) return null
try {
const parser = new DOMParser()
// Check whether a tool_use_error tag is present
const hasToolError = output.includes('<tool_use_error>')
// Wrap into valid XML (if there is no root element yet)
const xmlStr = output.includes('<status>') || hasToolError ? `<root>${output}</root>` : output
const xmlDoc = parser.parseFromString(xmlStr, 'application/xml')
// Check for parsing errors
const parserError = xmlDoc.querySelector('parsererror')
if (parserError) {
return null
}
const getElementText = (tagName: string): string | undefined => {
const element = xmlDoc.getElementsByTagName(tagName)[0]
return element?.textContent?.trim()
}
const result: ParsedBashOutput = {
status: getElementText('status'),
exit_code: getElementText('exit_code') ? parseInt(getElementText('exit_code')!) : undefined,
stdout: getElementText('stdout'),
stderr: getElementText('stderr'),
timestamp: getElementText('timestamp'),
tool_use_error: getElementText('tool_use_error')
}
return result
} catch {
return null
}
}, [output])
// Get the status configuration
const statusConfig = useMemo(() => {
if (!parsedOutput) return null
// If there is a tool_use_error, show the error state directly
if (parsedOutput.tool_use_error) {
return {
color: 'danger',
icon: <XCircle className="h-3.5 w-3.5" />,
text: 'Error'
} as const
}
const isCompleted = parsedOutput.status === 'completed'
const isSuccess = parsedOutput.exit_code === 0
return {
color: isCompleted && isSuccess ? 'success' : isCompleted && !isSuccess ? 'danger' : 'warning',
icon:
isCompleted && isSuccess ? (
<CheckCircle className="h-3.5 w-3.5" />
) : isCompleted && !isSuccess ? (
<XCircle className="h-3.5 w-3.5" />
) : (
<Terminal className="h-3.5 w-3.5" />
),
text: isCompleted ? (isSuccess ? 'Success' : 'Failed') : 'Running'
} as const
}, [parsedOutput])
const parsedOutput = parseBashOutput(output)
const statusConfig = getStatusConfig(parsedOutput)
const children = parsedOutput ? (
<div className="flex flex-col gap-4">

View File

@@ -1,12 +1,47 @@
import type { CollapseProps } from 'antd'
import { FileText } from 'lucide-react'
import { useMemo } from 'react'
import ReactMarkdown from 'react-markdown'
import { ToolTitle } from './GenericTools'
import type { ReadToolInput as ReadToolInputType, ReadToolOutput as ReadToolOutputType, TextOutput } from './types'
import { AgentToolsType } from './types'
const removeSystemReminderTags = (text: string): string => {
return text.replace(/<system-reminder>[\s\S]*?<\/system-reminder>/gi, '')
}
const normalizeOutputString = (output?: ReadToolOutputType): string | null => {
if (!output) return null
const toText = (item: TextOutput) => removeSystemReminderTags(item.text)
if (Array.isArray(output)) {
return output
.filter((item): item is TextOutput => item.type === 'text')
.map(toText)
.join('')
}
return removeSystemReminderTags(output)
}
const getOutputStats = (outputString: string | null) => {
if (!outputString) return null
const bytes = new Blob([outputString]).size
const formatSize = (size: number) => {
if (size < 1024) return `${size} B`
if (size < 1024 * 1024) return `${(size / 1024).toFixed(1)} KB`
return `${(size / (1024 * 1024)).toFixed(1)} MB`
}
return {
lineCount: outputString.split('\n').length,
fileSize: bytes,
formatSize
}
}
export function ReadTool({
input,
output
@@ -14,50 +49,8 @@ export function ReadTool({
input: ReadToolInputType
output?: ReadToolOutputType
}): NonNullable<CollapseProps['items']>[number] {
// Helper to strip system-reminder tags and their content
const removeSystemReminderTags = (text: string): string => {
// Use a regex to match <system-reminder> tags and their content, including newlines
return text.replace(/<system-reminder>[\s\S]*?<\/system-reminder>/gi, '')
}
// Normalize output into a single string
const outputString = useMemo(() => {
if (!output) return null
let processedOutput: string
// If it is a TextOutput[], extract all text content
if (Array.isArray(output)) {
processedOutput = output
.filter((item): item is TextOutput => item.type === 'text')
.map((item) => removeSystemReminderTags(item.text))
.join('')
} else {
// If it is a string, use it as-is
processedOutput = output
}
// Strip system-reminder tags and their content
return removeSystemReminderTags(processedOutput)
}, [output])
// If there is output, compute statistics
const stats = useMemo(() => {
if (!outputString) return null
const bytes = new Blob([outputString]).size
const formatSize = (bytes: number) => {
if (bytes < 1024) return `${bytes} B`
if (bytes < 1024 * 1024) return `${(bytes / 1024).toFixed(1)} KB`
return `${(bytes / (1024 * 1024)).toFixed(1)} MB`
}
return {
lineCount: outputString.split('\n').length,
fileSize: bytes,
formatSize
}
}, [outputString])
const outputString = normalizeOutputString(output)
const stats = getOutputStats(outputString)
return {
key: AgentToolsType.Read,

View File

@@ -1,4 +1,3 @@
import { cn } from '@renderer/utils'
import type { CollapseProps } from 'antd'
import { Card } from 'antd'
import { CheckCircle, Circle, Clock, ListTodo } from 'lucide-react'
@@ -11,23 +10,27 @@ const getStatusConfig = (status: TodoItem['status']) => {
switch (status) {
case 'completed':
return {
color: 'success' as const,
icon: <CheckCircle className="h-3 w-3" />
color: 'var(--color-status-success)',
opacity: 0.6,
icon: <CheckCircle className="h-4 w-4" strokeWidth={2.5} />
}
case 'in_progress':
return {
color: 'primary' as const,
icon: <Clock className="h-3 w-3" />
color: 'var(--color-primary)',
opacity: 0.9,
icon: <Clock className="h-4 w-4" strokeWidth={2.5} />
}
case 'pending':
return {
color: 'default' as const,
icon: <Circle className="h-3 w-3" />
color: 'var(--color-border)',
opacity: 0.4,
icon: <Circle className="h-4 w-4" strokeWidth={2.5} />
}
default:
return {
color: 'default' as const,
icon: <Circle className="h-3 w-3" />
color: 'var(--color-border)',
opacity: 0.4,
icon: <Circle className="h-4 w-4" strokeWidth={2.5} />
}
}
}
@@ -64,10 +67,8 @@ export function TodoWriteTool({
<div className="p-2">
<div className="flex items-center justify-center gap-3">
<div
className={cn(
'flex items-center justify-center rounded-full border bg-opacity-50 p-2',
`bg-${statusConfig.color}`
)}>
className="flex items-center justify-center rounded-full border p-1"
style={{ backgroundColor: statusConfig.color, opacity: statusConfig.opacity }}>
{statusConfig.icon}
</div>
<div className="min-w-0 flex-1">

View File

@@ -11,11 +11,24 @@ interface UnknownToolProps {
output?: unknown
}
export function UnknownToolRenderer({
toolName = '',
input,
output
}: UnknownToolProps): NonNullable<CollapseProps['items']>[number] {
const getToolDisplayName = (name: string) => {
if (name.startsWith('mcp__')) {
const parts = name.substring(5).split('__')
if (parts.length >= 2) {
return `${parts[0]}:${parts.slice(1).join(':')}`
}
}
return name
}
const getToolDescription = (toolName: string) => {
if (toolName.startsWith('mcp__')) {
return 'MCP Server Tool'
}
return 'Tool'
}
const UnknownToolContent = ({ input, output }: { input?: unknown; output?: unknown }) => {
const { highlightCode } = useCodeStyle()
const [inputHtml, setInputHtml] = useState<string>('')
const [outputHtml, setOutputHtml] = useState<string>('')
@@ -34,58 +47,49 @@ export function UnknownToolRenderer({
}
}, [output, highlightCode])
const getToolDisplayName = (name: string) => {
if (name.startsWith('mcp__')) {
const parts = name.substring(5).split('__')
if (parts.length >= 2) {
return `${parts[0]}:${parts.slice(1).join(':')}`
}
}
return name
if (input === undefined && output === undefined) {
return <div className="text-foreground-500 text-xs">No data available for this tool</div>
}
const getToolDescription = () => {
if (toolName.startsWith('mcp__')) {
return 'MCP Server Tool'
}
return 'Tool'
}
return (
<div className="space-y-3">
{input !== undefined && (
<div>
<div className="mb-1 font-semibold text-foreground-600 text-xs dark:text-foreground-400">Input:</div>
<div
className="overflow-x-auto rounded bg-gray-50 dark:bg-gray-900"
dangerouslySetInnerHTML={{ __html: inputHtml }}
/>
</div>
)}
{output !== undefined && (
<div>
<div className="mb-1 font-semibold text-foreground-600 text-xs dark:text-foreground-400">Output:</div>
<div
className="rounded bg-gray-50 dark:bg-gray-900 [&>*]:whitespace-pre-line"
dangerouslySetInnerHTML={{ __html: outputHtml }}
/>
</div>
)}
</div>
)
}
export function UnknownToolRenderer({
toolName = '',
input,
output
}: UnknownToolProps): NonNullable<CollapseProps['items']>[number] {
return {
key: 'unknown-tool',
label: (
<ToolTitle
icon={<Wrench className="h-4 w-4" />}
label={getToolDisplayName(toolName)}
params={getToolDescription()}
params={getToolDescription(toolName)}
/>
),
children: (
<div className="space-y-3">
{input !== undefined && (
<div>
<div className="mb-1 font-semibold text-foreground-600 text-xs dark:text-foreground-400">Input:</div>
<div
className="overflow-x-auto rounded bg-gray-50 dark:bg-gray-900"
dangerouslySetInnerHTML={{ __html: inputHtml }}
/>
</div>
)}
{output !== undefined && (
<div>
<div className="mb-1 font-semibold text-foreground-600 text-xs dark:text-foreground-400">Output:</div>
<div
className="rounded bg-gray-50 dark:bg-gray-900 [&>*]:whitespace-pre-line"
dangerouslySetInnerHTML={{ __html: outputHtml }}
/>
</div>
)}
{input === undefined && output === undefined && (
<div className="text-foreground-500 text-xs">No data available for this tool</div>
)}
</div>
)
children: <UnknownToolContent input={input} output={output} />
}
}

View File

@@ -6,8 +6,6 @@ import { Collapse } from 'antd'
// Export all types
export * from './types'
import { useMemo } from 'react'
// Import all renderers
import ToolPermissionRequestCard from '../ToolPermissionRequestCard'
import { BashOutputTool } from './BashOutputTool'
@@ -57,22 +55,19 @@ export function isValidAgentToolsType(toolName: unknown): toolName is AgentTools
return typeof toolName === 'string' && Object.values(AgentToolsType).includes(toolName as AgentToolsType)
}
- // Unified render function
- function renderToolContent(toolName: AgentToolsType, input: ToolInput, output?: ToolOutput) {
+ // Unified render component
+ function ToolContent({ toolName, input, output }: { toolName: AgentToolsType; input: ToolInput; output?: ToolOutput }) {
const Renderer = toolRenderers[toolName]
+ const renderedItem = Renderer
+ ? Renderer({ input: input as any, output: output as any })
+ : UnknownToolRenderer({ input: input as any, output: output as any, toolName })
- // eslint-disable-next-line react-hooks/rules-of-hooks
- const toolContentItem = useMemo(() => {
- const rendered = Renderer
- ? Renderer({ input: input as any, output: output as any })
- : UnknownToolRenderer({ input: input as any, output: output as any, toolName })
- return {
- ...rendered,
- classNames: {
- body: 'bg-foreground-50 p-2 text-foreground-900 dark:bg-foreground-100 max-h-96 p-2 overflow-scroll'
- } as NonNullable<CollapseProps['items']>[number]['classNames']
- } as NonNullable<CollapseProps['items']>[number]
- }, [Renderer, input, output, toolName])
+ const toolContentItem: NonNullable<CollapseProps['items']>[number] = {
+ ...renderedItem,
+ classNames: {
+ body: 'bg-foreground-50 p-2 text-foreground-900 dark:bg-foreground-100 max-h-96 p-2 overflow-scroll'
+ }
+ }
return (
<Collapse
@@ -98,5 +93,7 @@ export function MessageAgentTools({ toolResponse }: { toolResponse: NormalToolRe
return <ToolPermissionRequestCard toolResponse={toolResponse} />
}
- return renderToolContent(tool.name as AgentToolsType, args as ToolInput, response as ToolOutput)
+ return (
+ <ToolContent toolName={tool.name as AgentToolsType} input={args as ToolInput} output={response as ToolOutput} />
+ )
}
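The point of this refactor: `renderToolContent` was an ordinary function called from another component's body, yet it used `useMemo`, which only passed linting via the `eslint-disable` shown in the removed lines. Rendering `<ToolContent />` as JSX makes the hook call legitimate (the memoization is simply dropped in the new version). A reduced illustration, not the project's code:

```tsx
import { useMemo } from 'react'

// Illegal: a hook inside a helper invoked as a plain function call —
// React cannot tie the hook to a component instance.
// const renderDouble = (x: number) => useMemo(() => x * 2, [x])

// Legal: the same logic as a component rendered via <Double x={2} />,
// so the hook is bound to this component's own lifecycle.
const Double = ({ x }: { x: number }) => {
  const doubled = useMemo(() => x * 2, [x])
  return <span>{doubled}</span>
}

export default Double
```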

View File

@@ -1,7 +1,7 @@
import type { PermissionUpdate } from '@anthropic-ai/claude-agent-sdk'
import { loggerService } from '@logger'
import { useAppDispatch, useAppSelector } from '@renderer/store'
- import { selectPendingPermissionByToolName, toolPermissionsActions } from '@renderer/store/toolPermissions'
+ import { selectPendingPermission, toolPermissionsActions } from '@renderer/store/toolPermissions'
import type { NormalToolResponse } from '@renderer/types'
import { Button } from 'antd'
import { ChevronDown, CirclePlay, CircleX } from 'lucide-react'
@@ -17,9 +17,7 @@ interface Props {
export function ToolPermissionRequestCard({ toolResponse }: Props) {
const { t } = useTranslation()
const dispatch = useAppDispatch()
- const request = useAppSelector((state) =>
- selectPendingPermissionByToolName(state.toolPermissions, toolResponse.tool.name)
- )
+ const request = useAppSelector((state) => selectPendingPermission(state.toolPermissions, toolResponse.toolCallId))
const [now, setNow] = useState(() => Date.now())
const [showDetails, setShowDetails] = useState(false)

View File

@@ -1,5 +1,6 @@
import Selector from '@renderer/components/Selector'
import {
getModelSupportedVerbosity,
isSupportedReasoningEffortOpenAIModel,
isSupportFlexServiceTierModel,
isSupportVerbosityModel
@@ -80,20 +81,24 @@ const OpenAISettingsGroup: FC<Props> = ({ model, providerId, SettingGroup, Setti
}
]
- const verbosityOptions = [
- {
- value: 'low',
- label: t('settings.openai.verbosity.low')
- },
- {
- value: 'medium',
- label: t('settings.openai.verbosity.medium')
- },
- {
- value: 'high',
- label: t('settings.openai.verbosity.high')
- }
- ]
+ const verbosityOptions = useMemo(() => {
+ const allOptions = [
+ {
+ value: 'low',
+ label: t('settings.openai.verbosity.low')
+ },
+ {
+ value: 'medium',
+ label: t('settings.openai.verbosity.medium')
+ },
+ {
+ value: 'high',
+ label: t('settings.openai.verbosity.high')
+ }
+ ]
+ const supportedVerbosityLevels = getModelSupportedVerbosity(model)
+ return allOptions.filter((option) => supportedVerbosityLevels.includes(option.value as any))
+ }, [model, t])
const serviceTierOptions = useMemo(() => {
let baseOptions: { value: ServiceTier; label: string }[]
@@ -155,6 +160,15 @@ const OpenAISettingsGroup: FC<Props> = ({ model, providerId, SettingGroup, Setti
}
}, [provider.id, serviceTierMode, serviceTierOptions, setServiceTierMode])
useEffect(() => {
if (verbosity && !verbosityOptions.some((option) => option.value === verbosity)) {
const supportedVerbosityLevels = getModelSupportedVerbosity(model)
// Default to the highest supported verbosity level
const defaultVerbosity = supportedVerbosityLevels[supportedVerbosityLevels.length - 1]
setVerbosity(defaultVerbosity)
}
}, [model, verbosity, verbosityOptions, setVerbosity])
if (!isOpenAIReasoning && !isSupportServiceTier && !isSupportVerbosity) {
return null
}
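Because the option list is now model-dependent, a stored verbosity can become invalid when the user switches models (e.g. `medium` disappears once `gpt-5-pro` is selected); the new effect then snaps the setting to the highest level the model still supports. The fallback in isolation (simplified stand-ins for the real helpers):

```ts
const supported = (modelId: string) =>
  modelId.includes('gpt-5-pro') ? ['high'] : ['low', 'medium', 'high']

const fallback = (modelId: string, current: string) => {
  const levels = supported(modelId)
  // Keep the current value while valid; otherwise take the highest supported level.
  return levels.includes(current) ? current : levels[levels.length - 1]
}

console.log(fallback('gpt-5-pro', 'medium')) // 'high'
console.log(fallback('gpt-5', 'medium'))     // 'medium'
```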

View File

@@ -1,21 +1,13 @@
import { getAgentTypeAvatar } from '@renderer/config/agent'
import type { useUpdateAgent } from '@renderer/hooks/agents/useUpdateAgent'
import type { useUpdateSession } from '@renderer/hooks/agents/useUpdateSession'
import { getAgentTypeLabel } from '@renderer/i18n/label'
import type { GetAgentResponse, GetAgentSessionResponse } from '@renderer/types'
import { isAgentEntity } from '@renderer/types'
import { Avatar } from 'antd'
import type { FC } from 'react'
import { useTranslation } from 'react-i18next'
import { AccessibleDirsSetting } from './AccessibleDirsSetting'
import { AvatarSetting } from './AvatarSetting'
import { DescriptionSetting } from './DescriptionSetting'
import { ModelSetting } from './ModelSetting'
import { NameSetting } from './NameSetting'
- import { SettingsContainer, SettingsItem, SettingsTitle } from './shared'
// const logger = loggerService.withContext('AgentEssentialSettings')
+ import { SettingsContainer } from './shared'
type EssentialSettingsProps =
| {
@@ -30,26 +22,10 @@ type EssentialSettingsProps =
}
const EssentialSettings: FC<EssentialSettingsProps> = ({ agentBase, update, showModelSetting = true }) => {
const { t } = useTranslation()
if (!agentBase) return null
const isAgent = isAgentEntity(agentBase)
return (
<SettingsContainer>
{isAgent && (
<SettingsItem inline>
<SettingsTitle>{t('agent.type.label')}</SettingsTitle>
<div className="flex items-center gap-2">
<Avatar size={24} src={getAgentTypeAvatar(agentBase.type)} className="h-6 w-6 text-lg" />
<span>{(agentBase?.name ?? agentBase?.type) ? getAgentTypeLabel(agentBase.type) : ''}</span>
</div>
</SettingsItem>
)}
{isAgent && (
<AvatarSetting agent={agentBase} update={update as ReturnType<typeof useUpdateAgent>['updateAgent']} />
)}
<NameSetting base={agentBase} update={update} />
{showModelSetting && <ModelSetting base={agentBase} update={update} />}
<AccessibleDirsSetting base={agentBase} update={update} />

View File

@@ -1,6 +1,8 @@
import { EmojiAvatarWithPicker } from '@renderer/components/Avatar/EmojiAvatarWithPicker'
import type { AgentBaseWithId, UpdateAgentBaseForm, UpdateAgentFunctionUnion } from '@renderer/types'
import { AgentConfigurationSchema, isAgentEntity, isAgentType } from '@renderer/types'
import { Input } from 'antd'
- import { useState } from 'react'
+ import { useCallback, useState } from 'react'
import { useTranslation } from 'react-i18next'
import { SettingsItem, SettingsTitle } from './shared'
@@ -13,26 +15,61 @@ export interface NameSettingsProps {
export const NameSetting = ({ base, update }: NameSettingsProps) => {
const { t } = useTranslation()
const [name, setName] = useState<string | undefined>(base?.name?.trim())
const updateName = async (name: UpdateAgentBaseForm['name']) => {
if (!base) return
return update({ id: base.id, name: name?.trim() })
}
// Avatar logic
const isAgent = isAgentEntity(base)
const isDefault = isAgent ? isAgentType(base.configuration?.avatar) : false
const [emoji, setEmoji] = useState(isAgent && !isDefault ? (base.configuration?.avatar ?? '⭐️') : '⭐️')
const updateAvatar = useCallback(
(avatar: string) => {
if (!isAgent || !base) return
const parsedConfiguration = AgentConfigurationSchema.parse(base.configuration ?? {})
const payload = {
id: base.id,
configuration: {
...parsedConfiguration,
avatar
}
}
update(payload)
},
[base, update, isAgent]
)
if (!base) return null
return (
<SettingsItem inline>
<SettingsTitle>{t('common.name')}</SettingsTitle>
- <Input
- placeholder={t('common.agent_one') + t('common.name')}
- value={name}
- onChange={(e) => setName(e.target.value)}
- onBlur={() => {
- if (name !== base.name) {
- updateName(name)
- }
- }}
- className="max-w-70 flex-1"
- />
+ <div className="flex max-w-70 flex-1 items-center gap-1">
+ {isAgent && (
+ <EmojiAvatarWithPicker
+ emoji={emoji}
+ onPick={(emoji: string) => {
+ setEmoji(emoji)
+ if (isAgent && emoji === base?.configuration?.avatar) return
+ updateAvatar(emoji)
+ }}
+ />
+ )}
+ <Input
+ placeholder={t('common.agent_one') + t('common.name')}
+ value={name}
+ onChange={(e) => setName(e.target.value)}
+ onBlur={() => {
+ if (name !== base.name) {
+ updateName(name)
+ }
+ }}
+ className="flex-1"
+ />
+ </div>
</SettingsItem>
)
}

View File

@@ -9,6 +9,7 @@ import { DEFAULT_CONTEXTCOUNT, DEFAULT_TEMPERATURE, MAX_CONTEXT_COUNT } from '@r
import { isEmbeddingModel, isRerankModel } from '@renderer/config/models'
import { useTimer } from '@renderer/hooks/useTimer'
import { SettingRow } from '@renderer/pages/settings'
import { DEFAULT_ASSISTANT_SETTINGS } from '@renderer/services/AssistantService'
import type { Assistant, AssistantSettingCustomParameters, AssistantSettings, Model } from '@renderer/types'
import { modalConfirm } from '@renderer/utils'
import { Button, Col, Divider, Input, InputNumber, Row, Select, Slider, Switch, Tooltip } from 'antd'
@@ -31,7 +32,9 @@ const AssistantModelSettings: FC<Props> = ({ assistant, updateAssistant, updateA
const [enableMaxTokens, setEnableMaxTokens] = useState(assistant?.settings?.enableMaxTokens ?? false)
const [maxTokens, setMaxTokens] = useState(assistant?.settings?.maxTokens ?? 0)
const [streamOutput, setStreamOutput] = useState(assistant?.settings?.streamOutput)
- const [toolUseMode, setToolUseMode] = useState(assistant?.settings?.toolUseMode ?? 'prompt')
+ const [toolUseMode, setToolUseMode] = useState<AssistantSettings['toolUseMode']>(
+ assistant?.settings?.toolUseMode ?? 'function'
+ )
const [defaultModel, setDefaultModel] = useState(assistant?.defaultModel)
const [topP, setTopP] = useState(assistant?.settings?.topP ?? 1)
const [enableTopP, setEnableTopP] = useState(assistant?.settings?.enableTopP ?? false)
@@ -158,28 +161,17 @@ const AssistantModelSettings: FC<Props> = ({ assistant, updateAssistant, updateA
}
const onReset = () => {
- setTemperature(DEFAULT_TEMPERATURE)
- setEnableTemperature(true)
- setContextCount(DEFAULT_CONTEXTCOUNT)
- setEnableMaxTokens(false)
- setMaxTokens(0)
- setStreamOutput(true)
- setTopP(1)
- setEnableTopP(false)
- setCustomParameters([])
- setToolUseMode('prompt')
- updateAssistantSettings({
- temperature: DEFAULT_TEMPERATURE,
- enableTemperature: true,
- contextCount: DEFAULT_CONTEXTCOUNT,
- enableMaxTokens: false,
- maxTokens: 0,
- streamOutput: true,
- topP: 1,
- enableTopP: false,
- customParameters: [],
- toolUseMode: 'prompt'
- })
+ setTemperature(DEFAULT_ASSISTANT_SETTINGS.temperature)
+ setEnableTemperature(DEFAULT_ASSISTANT_SETTINGS.enableTemperature ?? true)
+ setContextCount(DEFAULT_ASSISTANT_SETTINGS.contextCount)
+ setEnableMaxTokens(DEFAULT_ASSISTANT_SETTINGS.enableMaxTokens ?? false)
+ setMaxTokens(DEFAULT_ASSISTANT_SETTINGS.maxTokens ?? 0)
+ setStreamOutput(DEFAULT_ASSISTANT_SETTINGS.streamOutput)
+ setTopP(DEFAULT_ASSISTANT_SETTINGS.topP)
+ setEnableTopP(DEFAULT_ASSISTANT_SETTINGS.enableTopP ?? false)
+ setCustomParameters(DEFAULT_ASSISTANT_SETTINGS.customParameters ?? [])
+ setToolUseMode(DEFAULT_ASSISTANT_SETTINGS.toolUseMode)
+ updateAssistantSettings(DEFAULT_ASSISTANT_SETTINGS)
}
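Driving the reset from `DEFAULT_ASSISTANT_SETTINGS` removes the duplicated literals, so a future change to the defaults (such as this PR's `toolUseMode: 'function'`) propagates to the reset path automatically. The pattern, condensed (the `DEFAULTS` shape here is a simplified stand-in):

```ts
const DEFAULTS = { temperature: 1, streamOutput: true, toolUseMode: 'function' } as const

// One constant feeds both the local state setters and the persisted settings,
// so the two code paths can no longer drift apart.
const onReset = (
  setState: (s: typeof DEFAULTS) => void,
  persist: (s: typeof DEFAULTS) => void
) => {
  setState(DEFAULTS)
  persist(DEFAULTS)
}
```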
const modelFilter = (model: Model) => !isEmbeddingModel(model) && !isRerankModel(model)

View File

@@ -109,7 +109,6 @@ const InstallNpxUv: FC<Props> = ({ mini = false }) => {
<Container>
<Alert
type={isUvInstalled ? 'success' : 'warning'}
- banner
style={{ borderRadius: 'var(--list-item-border-radius)' }}
description={
<VStack>
@@ -140,7 +139,6 @@ const InstallNpxUv: FC<Props> = ({ mini = false }) => {
/>
<Alert
type={isBunInstalled ? 'success' : 'warning'}
- banner
style={{ borderRadius: 'var(--list-item-border-radius)' }}
description={
<VStack>

View File

@@ -140,7 +140,7 @@ const MCPSettings: FC = () => {
<Route
path="mcp-install"
element={
- <SettingContainer theme={theme}>
+ <SettingContainer style={{ backgroundColor: 'inherit' }}>
<InstallNpxUv />
</SettingContainer>
}

View File

@@ -1,5 +1,3 @@
- import 'emoji-picker-element'
import { CheckOutlined, LoadingOutlined, RollbackOutlined, ThunderboltOutlined } from '@ant-design/icons'
import { loggerService } from '@logger'
import EmojiPicker from '@renderer/components/EmojiPicker'

View File

@@ -6,7 +6,7 @@ import AiProvider from '@renderer/aiCore'
import type { CompletionsParams } from '@renderer/aiCore/legacy/middleware/schemas'
import type { AiSdkMiddlewareConfig } from '@renderer/aiCore/middleware/AiSdkMiddlewareBuilder'
import { buildStreamTextParams } from '@renderer/aiCore/prepareParams'
- import { isDedicatedImageGenerationModel, isEmbeddingModel } from '@renderer/config/models'
+ import { isDedicatedImageGenerationModel, isEmbeddingModel, isFunctionCallingModel } from '@renderer/config/models'
import { getStoreSetting } from '@renderer/hooks/useSettings'
import i18n from '@renderer/i18n'
import store from '@renderer/store'
@@ -18,6 +18,7 @@ import type { Message } from '@renderer/types/newMessage'
import type { SdkModel } from '@renderer/types/sdk'
import { removeSpecialCharactersForTopicName, uuid } from '@renderer/utils'
import { abortCompletion, readyToAbort } from '@renderer/utils/abortController'
import { isToolUseModeFunction } from '@renderer/utils/assistant'
import { isAbortError } from '@renderer/utils/error'
import { purifyMarkdownImages } from '@renderer/utils/markdown'
import { isPromptToolUse, isSupportedToolUse } from '@renderer/utils/mcp-tools'
@@ -126,12 +127,16 @@ export async function fetchChatCompletion({
requestOptions: options
})
// Safely fall back to prompt tool use when function calling is not supported by the model.
const usePromptToolUse =
isPromptToolUse(assistant) || (isToolUseModeFunction(assistant) && !isFunctionCallingModel(assistant.model))
const middlewareConfig: AiSdkMiddlewareConfig = {
streamOutput: assistant.settings?.streamOutput ?? true,
onChunk: onChunkReceived,
model: assistant.model,
enableReasoning: capabilities.enableReasoning,
- isPromptToolUse: isPromptToolUse(assistant),
+ isPromptToolUse: usePromptToolUse,
isSupportedToolUse: isSupportedToolUse(assistant),
isImageGenerationEndpoint: isDedicatedImageGenerationModel(assistant.model || getDefaultModel()),
webSearchPluginConfig: webSearchPluginConfig,

View File

@@ -36,9 +36,10 @@ export const DEFAULT_ASSISTANT_SETTINGS: AssistantSettings = {
streamOutput: true,
topP: 1,
enableTopP: false,
- toolUseMode: 'prompt',
+ // It would gracefully fall back to prompt if not supported by the model.
+ toolUseMode: 'function',
customParameters: []
- }
+ } as const
export function getDefaultAssistant(): Assistant {
return {
@@ -176,7 +177,7 @@ export const getAssistantSettings = (assistant: Assistant): AssistantSettings =>
enableMaxTokens: assistant?.settings?.enableMaxTokens ?? false,
maxTokens: getAssistantMaxTokens(),
streamOutput: assistant?.settings?.streamOutput ?? true,
- toolUseMode: assistant?.settings?.toolUseMode ?? 'prompt',
+ toolUseMode: assistant?.settings?.toolUseMode ?? 'function',
defaultModel: assistant?.defaultModel ?? undefined,
reasoning_effort: assistant?.settings?.reasoning_effort ?? undefined,
customParameters: assistant?.settings?.customParameters ?? []

View File

@@ -67,7 +67,7 @@ const persistedReducer = persistReducer(
{
key: 'cherry-studio',
storage,
- version: 174,
+ version: 176,
blacklist: ['runtime', 'messages', 'messageBlocks', 'tabs', 'toolPermissions'],
migrate
},

View File

@@ -2819,6 +2819,43 @@ const migrateConfig = {
logger.error('migrate 174 error', error as Error)
return state
}
},
'175': (state: RootState) => {
try {
state.assistants.assistants.forEach((assistant) => {
// @ts-ignore
if (assistant.settings?.reasoning_effort === 'off') {
// @ts-ignore
assistant.settings.reasoning_effort = 'none'
}
// @ts-ignore
if (assistant.settings?.reasoning_effort_cache === 'off') {
// @ts-ignore
assistant.settings.reasoning_effort_cache = 'none'
}
})
logger.info('migrate 175 success')
return state
} catch (error) {
logger.error('migrate 175 error', error as Error)
return state
}
},
'176': (state: RootState) => {
try {
state.llm.providers.forEach((provider) => {
if (provider.id === SystemProviderIds.qiniu) {
provider.anthropicApiHost = 'https://api.qnaigc.com'
}
if (provider.id === SystemProviderIds.longcat) {
provider.anthropicApiHost = 'https://api.longcat.chat/anthropic'
}
})
return state
} catch (error) {
logger.error('migrate 176 error', error as Error)
return state
}
}
}
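Migrations 175 and 176 pair with the `version: 176` bump in the persist config above: 175 rewrites any persisted `reasoning_effort` of `'off'` to the new `'none'` literal, and 176 backfills `anthropicApiHost` for the qiniu and longcat providers. The redux-persist contract is just old-state-in, new-state-out per version step; a minimal sketch with a simplified state type:

```ts
type State = { assistants: { settings?: { reasoning_effort?: string } }[] }

const migrations: Record<string, (s: State) => State> = {
  '175': (state) => {
    for (const a of state.assistants) {
      // Rename the removed 'off' sentinel to the real 'none' effort level.
      if (a.settings?.reasoning_effort === 'off') a.settings.reasoning_effort = 'none'
    }
    return state
  }
}

console.log(migrations['175']({ assistants: [{ settings: { reasoning_effort: 'off' } }] }))
// -> { assistants: [ { settings: { reasoning_effort: 'none' } } ] }
```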

View File

@@ -585,9 +585,11 @@ const fetchAndProcessAgentResponseImpl = async (
return
}
+ // Only mark as cleared if there was a previous session ID (not initial assignment)
+ sessionWasCleared = !!latestAgentSessionId
latestAgentSessionId = sessionId
agentSession.agentSessionId = sessionId
- sessionWasCleared = true
logger.debug(`Agent session ID updated`, {
topicId,

View File

@@ -6,6 +6,7 @@ export type ToolPermissionRequestPayload = {
requestId: string
toolName: string
toolId: string
toolCallId: string
description?: string
requiresPermissions: boolean
input: Record<string, unknown>
@@ -82,12 +83,12 @@ export const selectActiveToolPermission = (state: ToolPermissionsState): ToolPer
return activeEntries[0]
}
- export const selectPendingPermissionByToolName = (
+ export const selectPendingPermission = (
state: ToolPermissionsState,
- toolName: string
+ toolCallId: string
): ToolPermissionEntry | undefined => {
const activeEntries = Object.values(state.requests)
- .filter((entry) => entry.toolName === toolName)
+ .filter((entry) => entry.toolCallId === toolCallId)
.filter(
(entry) => entry.status === 'pending' || entry.status === 'submitting-allow' || entry.status === 'submitting-deny'
)
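Selecting by `toolName` returned the wrong request whenever the same tool had several pending invocations (or the same tool ran in two sessions); `toolCallId` is unique per invocation, and per the companion commit it is now namespaced with the session id. A reduced version of the lookup (the `sess-a:call-1` id format is illustrative):

```ts
type Entry = { toolCallId: string; toolName: string; status: 'pending' | 'done' }

const byCallId = (entries: Entry[], toolCallId: string) =>
  entries.find((e) => e.toolCallId === toolCallId && e.status === 'pending')

const entries: Entry[] = [
  { toolCallId: 'sess-a:call-1', toolName: 'Bash', status: 'done' },
  { toolCallId: 'sess-b:call-1', toolName: 'Bash', status: 'pending' }
]
// Name-based lookup would have matched 'Bash' ambiguously; id-based lookup cannot.
console.log(byCallId(entries, 'sess-b:call-1')?.toolCallId) // 'sess-b:call-1'
```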

View File

@@ -83,7 +83,10 @@ const ThinkModelTypes = [
'o',
'openai_deep_research',
'gpt5',
'gpt5_1',
'gpt5_codex',
'gpt5_1_codex',
'gpt5pro',
'grok',
'grok4_fast',
'gemini',
@@ -100,7 +103,7 @@ const ThinkModelTypes = [
] as const
export type ReasoningEffortOption = NonNullable<OpenAI.ReasoningEffort> | 'auto'
- export type ThinkingOption = ReasoningEffortOption | 'off'
+ export type ThinkingOption = ReasoningEffortOption
export type ThinkingModelType = (typeof ThinkModelTypes)[number]
export type ThinkingOptionConfig = Record<ThinkingModelType, ThinkingOption[]>
export type ReasoningEffortConfig = Record<ThinkingModelType, ReasoningEffortOption[]>
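With the `'off'` sentinel gone, disabling reasoning is expressed through the real `'none'` effort level, so every `ThinkingOption` now has an entry in `EFFORT_RATIO` and no UI code needs a value outside the effort vocabulary. Roughly (a sketch assuming the OpenAI SDK fork used here includes `'none'` and `'minimal'` in `ReasoningEffort`; only the ratios visible in this hunk are filled in):

```ts
import type OpenAI from 'openai'

type ReasoningEffortOption = NonNullable<OpenAI.ReasoningEffort> | 'auto'
type ThinkingOption = ReasoningEffortOption // the extra | 'off' arm is removed

// 'none' carries a tiny but non-zero budget ratio, standing in for "off".
const EFFORT_RATIO: Partial<Record<ThinkingOption, number>> = {
  none: 0.01,
  minimal: 0.05,
  low: 0.05,
  medium: 0.5
}
```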
@@ -111,6 +114,7 @@ export function isThinkModelType(type: string): type is ThinkingModelType {
}
export const EFFORT_RATIO: EffortRatio = {
none: 0.01,
minimal: 0.05,
low: 0.05,
medium: 0.5,

View File

@@ -126,6 +126,10 @@ export type OpenAIExtraBody = {
source_lang: 'auto'
target_lang: string
}
// for gpt-5 series models verbosity control
text?: {
verbosity?: 'low' | 'medium' | 'high'
}
}
// image is for openrouter. audio is ignored for now
export type OpenAIModality = OpenAI.ChatCompletionModality | 'image'
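The new optional `text.verbosity` field carries the GPT-5 verbosity control in the request body, mirroring the `MODEL_SUPPORTED_VERBOSITY` limits added earlier in this PR. A fragment built against this type would look like:

```ts
type OpenAIExtraBody = {
  // for gpt-5 series models verbosity control
  text?: { verbosity?: 'low' | 'medium' | 'high' }
}

// Hypothetical usage: ask a GPT-5 model for terse output.
const extraBody: OpenAIExtraBody = { text: { verbosity: 'low' } }
console.log(JSON.stringify(extraBody)) // {"text":{"verbosity":"low"}}
```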

yarn.lock (127 changed lines)
View File

@@ -102,7 +102,19 @@ __metadata:
languageName: node
linkType: hard
"@ai-sdk/anthropic@npm:2.0.44":
"@ai-sdk/anthropic@npm:2.0.45":
version: 2.0.45
resolution: "@ai-sdk/anthropic@npm:2.0.45"
dependencies:
"@ai-sdk/provider": "npm:2.0.0"
"@ai-sdk/provider-utils": "npm:3.0.17"
peerDependencies:
zod: ^3.25.76 || ^4.1.8
checksum: 10c0/ef0e54f032e3b8324c278f3b25d9b388308204d753404c49fd880709a796c2343aee36d335c99f50e683edd39d5b8b6f42b2e9034e1725d8e0db514e2233d104
languageName: node
linkType: hard
"@ai-sdk/anthropic@npm:^2.0.44":
version: 2.0.44
resolution: "@ai-sdk/anthropic@npm:2.0.44"
dependencies:
@@ -179,42 +191,42 @@ __metadata:
languageName: node
linkType: hard
"@ai-sdk/google-vertex@npm:^3.0.62":
version: 3.0.62
resolution: "@ai-sdk/google-vertex@npm:3.0.62"
"@ai-sdk/google-vertex@npm:^3.0.68":
version: 3.0.68
resolution: "@ai-sdk/google-vertex@npm:3.0.68"
dependencies:
"@ai-sdk/anthropic": "npm:2.0.44"
"@ai-sdk/google": "npm:2.0.31"
"@ai-sdk/anthropic": "npm:2.0.45"
"@ai-sdk/google": "npm:2.0.36"
"@ai-sdk/provider": "npm:2.0.0"
"@ai-sdk/provider-utils": "npm:3.0.17"
google-auth-library: "npm:^9.15.0"
peerDependencies:
zod: ^3.25.76 || ^4.1.8
- checksum: 10c0/673bb51e3e0cbe5235ad5e65379b1cb8f099dbc690ab8552e208553a9f1cc6026d2588e956e73468bc6d267066be276e7a9aba98e32e905809dfbeab4ac0e352
+ checksum: 10c0/6a3f4cb1e649313b46a0c349c717757071f8b012b0a28e59ab7a55fd35d9600f0043f0a4f57417c4cc49e0d3734e89a1e4fb248fc88795b5286c83395d3f617a
languageName: node
linkType: hard
"@ai-sdk/google@npm:2.0.31":
version: 2.0.31
resolution: "@ai-sdk/google@npm:2.0.31"
"@ai-sdk/google@npm:2.0.36":
version: 2.0.36
resolution: "@ai-sdk/google@npm:2.0.36"
dependencies:
"@ai-sdk/provider": "npm:2.0.0"
"@ai-sdk/provider-utils": "npm:3.0.17"
peerDependencies:
zod: ^3.25.76 || ^4.1.8
- checksum: 10c0/d8f143f058fb62e6e67e30564ec92530d7389c22ad91b1e4bbe781c8570bf718cd417e44dcd4855e347e85c4174538a9a884eac666109e17f20d21467ab3e749
+ checksum: 10c0/2c6de5e1cf0703b6b932a3f313bf4bc9439897af39c805169ab04bba397185d99b2b1306f3b817f991ca41fdced0365b072ee39e76382c045930256bce47e0e4
languageName: node
linkType: hard
"@ai-sdk/google@patch:@ai-sdk/google@npm%3A2.0.31#~/.yarn/patches/@ai-sdk-google-npm-2.0.31-b0de047210.patch":
version: 2.0.31
resolution: "@ai-sdk/google@patch:@ai-sdk/google@npm%3A2.0.31#~/.yarn/patches/@ai-sdk-google-npm-2.0.31-b0de047210.patch::version=2.0.31&hash=9f3835"
"@ai-sdk/google@patch:@ai-sdk/google@npm%3A2.0.36#~/.yarn/patches/@ai-sdk-google-npm-2.0.36-6f3cc06026.patch":
version: 2.0.36
resolution: "@ai-sdk/google@patch:@ai-sdk/google@npm%3A2.0.36#~/.yarn/patches/@ai-sdk-google-npm-2.0.36-6f3cc06026.patch::version=2.0.36&hash=2da8c3"
dependencies:
"@ai-sdk/provider": "npm:2.0.0"
"@ai-sdk/provider-utils": "npm:3.0.17"
peerDependencies:
zod: ^3.25.76 || ^4.1.8
- checksum: 10c0/dd37dfb7abf402caaae3edb2f1a8dab018fddad6ba3190376723e03a2a0c352329c8e41e60df3fb8436b717d9c2ee4b82dff091848f50d026f62565cbdb158f8
+ checksum: 10c0/ce99a497360377d2917cf3a48278eb6f4337623ce3738ba743cf048c8c2a7731ec4fc27605a50e461e716ed49b3690206ca8e4078f27cb7be162b684bfc2fc22
languageName: node
linkType: hard
@@ -1879,30 +1891,30 @@ __metadata:
languageName: node
linkType: hard
"@cherrystudio/ai-core@workspace:^1.0.0-alpha.18, @cherrystudio/ai-core@workspace:packages/aiCore":
"@cherrystudio/ai-core@workspace:^1.0.9, @cherrystudio/ai-core@workspace:packages/aiCore":
version: 0.0.0-use.local
resolution: "@cherrystudio/ai-core@workspace:packages/aiCore"
dependencies:
"@ai-sdk/anthropic": "npm:^2.0.43"
"@ai-sdk/azure": "npm:^2.0.66"
"@ai-sdk/deepseek": "npm:^1.0.27"
"@ai-sdk/google": "patch:@ai-sdk/google@npm%3A2.0.31#~/.yarn/patches/@ai-sdk-google-npm-2.0.31-b0de047210.patch"
"@ai-sdk/openai": "patch:@ai-sdk/openai@npm%3A2.0.64#~/.yarn/patches/@ai-sdk-openai-npm-2.0.64-48f99f5bf3.patch"
"@ai-sdk/openai-compatible": "npm:^1.0.26"
"@ai-sdk/provider": "npm:^2.0.0"
"@ai-sdk/provider-utils": "npm:^3.0.16"
"@ai-sdk/xai": "npm:^2.0.31"
"@cherrystudio/ai-sdk-provider": "workspace:*"
tsdown: "npm:^0.12.9"
typescript: "npm:^5.0.0"
vitest: "npm:^3.2.4"
zod: "npm:^4.1.5"
peerDependencies:
"@ai-sdk/google": ^2.0.36
"@ai-sdk/openai": ^2.0.64
"@cherrystudio/ai-sdk-provider": ^0.1.2
ai: ^5.0.26
languageName: unknown
linkType: soft
"@cherrystudio/ai-sdk-provider@workspace:*, @cherrystudio/ai-sdk-provider@workspace:packages/ai-sdk-provider":
"@cherrystudio/ai-sdk-provider@workspace:packages/ai-sdk-provider":
version: 0.0.0-use.local
resolution: "@cherrystudio/ai-sdk-provider@workspace:packages/ai-sdk-provider"
dependencies:
@@ -2140,9 +2152,9 @@ __metadata:
languageName: unknown
linkType: soft
"@cherrystudio/openai@npm:^6.5.0":
version: 6.5.0
resolution: "@cherrystudio/openai@npm:6.5.0"
"@cherrystudio/openai@npm:^6.9.0":
version: 6.9.0
resolution: "@cherrystudio/openai@npm:6.9.0"
peerDependencies:
ws: ^8.18.0
zod: ^3.25 || ^4.0
@@ -2153,7 +2165,7 @@ __metadata:
optional: true
bin:
openai: bin/cli
- checksum: 10c0/0f6cafb97aec17037d5ddcccc88e4b4a9c8de77a989a35bab2394b682a1a69e8a9343e8ee5eb8107d5c495970dbf3567642f154c033f7afc3bf078078666a92e
+ checksum: 10c0/9c51ef33c5b9d08041a115e3d6a8158412a379998a0eae186923d5bdcc808b634c1fef4471a1d499bb8c624b04c075167bc90a1a60a805005c0657ecebbb58d0
languageName: node
linkType: hard
@@ -5169,15 +5181,15 @@ __metadata:
languageName: node
linkType: hard
"@opeoginni/github-copilot-openai-compatible@npm:0.1.19":
version: 0.1.19
resolution: "@opeoginni/github-copilot-openai-compatible@npm:0.1.19"
"@opeoginni/github-copilot-openai-compatible@npm:0.1.21":
version: 0.1.21
resolution: "@opeoginni/github-copilot-openai-compatible@npm:0.1.21"
dependencies:
"@ai-sdk/openai": "npm:^2.0.42"
"@ai-sdk/openai-compatible": "npm:^1.0.19"
"@ai-sdk/provider": "npm:^2.1.0-beta.4"
"@ai-sdk/provider-utils": "npm:^3.0.10"
- checksum: 10c0/dfb01832d7c704b2eb080fc09d31b07fc26e5ac4e648ce219dc0d80cf044ef3cae504427781ec2ce3c5a2459c9c81d043046a255642108d5b3de0f83f4a9f20a
+ checksum: 10c0/05b73d935dc7f24123330ade919698b486ac2a25a7d607c1d3789471f782ead4c803ce6ffd3d97b9ca3f1aadaf6b5c1ea52363c9d24b36894fcfc403fda9cef3
languageName: node
linkType: hard
@@ -8070,6 +8082,15 @@ __metadata:
languageName: node
linkType: hard
"@types/better-sqlite3@npm:^7.6.12":
version: 7.6.13
resolution: "@types/better-sqlite3@npm:7.6.13"
dependencies:
"@types/node": "npm:*"
checksum: 10c0/c4336e7b92343eb0e988ded007c53fa9887b98a38d61175226e86124a1a2c28b1a4e3892873c5041e350b7bfa2901f85c82db1542c4f0eed1d3a899682c92106
languageName: node
linkType: hard
"@types/body-parser@npm:*":
version: 1.19.6
resolution: "@types/body-parser@npm:1.19.6"
@@ -9891,11 +9912,14 @@ __metadata:
     "@agentic/searxng": "npm:^7.3.3"
     "@agentic/tavily": "npm:^7.3.3"
     "@ai-sdk/amazon-bedrock": "npm:^3.0.53"
     "@ai-sdk/anthropic": "npm:^2.0.44"
     "@ai-sdk/cerebras": "npm:^1.0.31"
     "@ai-sdk/gateway": "npm:^2.0.9"
-    "@ai-sdk/google-vertex": "npm:^3.0.62"
+    "@ai-sdk/google": "patch:@ai-sdk/google@npm%3A2.0.36#~/.yarn/patches/@ai-sdk-google-npm-2.0.36-6f3cc06026.patch"
+    "@ai-sdk/google-vertex": "npm:^3.0.68"
+    "@ai-sdk/huggingface": "patch:@ai-sdk/huggingface@npm%3A0.0.8#~/.yarn/patches/@ai-sdk-huggingface-npm-0.0.8-d4d0aaac93.patch"
     "@ai-sdk/mistral": "npm:^2.0.23"
+    "@ai-sdk/openai": "patch:@ai-sdk/openai@npm%3A2.0.64#~/.yarn/patches/@ai-sdk-openai-npm-2.0.64-48f99f5bf3.patch"
     "@ai-sdk/perplexity": "npm:^2.0.17"
     "@ant-design/v5-patch-for-react-19": "npm:^1.0.3"
     "@anthropic-ai/claude-agent-sdk": "patch:@anthropic-ai/claude-agent-sdk@npm%3A0.1.30#~/.yarn/patches/@anthropic-ai-claude-agent-sdk-npm-0.1.30-b50a299674.patch"
@@ -9905,7 +9929,7 @@ __metadata:
     "@aws-sdk/client-bedrock-runtime": "npm:^3.910.0"
     "@aws-sdk/client-s3": "npm:^3.910.0"
     "@biomejs/biome": "npm:2.2.4"
-    "@cherrystudio/ai-core": "workspace:^1.0.0-alpha.18"
+    "@cherrystudio/ai-core": "workspace:^1.0.9"
     "@cherrystudio/embedjs": "npm:^0.1.31"
     "@cherrystudio/embedjs-libsql": "npm:^0.1.31"
     "@cherrystudio/embedjs-loader-csv": "npm:^0.1.31"
@@ -9919,7 +9943,7 @@ __metadata:
     "@cherrystudio/embedjs-ollama": "npm:^0.1.31"
     "@cherrystudio/embedjs-openai": "npm:^0.1.31"
     "@cherrystudio/extension-table-plus": "workspace:^"
-    "@cherrystudio/openai": "npm:^6.5.0"
+    "@cherrystudio/openai": "npm:^6.9.0"
     "@dnd-kit/core": "npm:^6.3.1"
     "@dnd-kit/modifiers": "npm:^9.0.0"
     "@dnd-kit/sortable": "npm:^10.0.0"
@@ -9952,7 +9976,7 @@ __metadata:
     "@opentelemetry/sdk-trace-base": "npm:^2.0.0"
     "@opentelemetry/sdk-trace-node": "npm:^2.0.0"
     "@opentelemetry/sdk-trace-web": "npm:^2.0.0"
-    "@opeoginni/github-copilot-openai-compatible": "npm:0.1.19"
+    "@opeoginni/github-copilot-openai-compatible": "npm:0.1.21"
     "@paymoapp/electron-shutdown-handler": "npm:^1.1.2"
     "@playwright/test": "npm:^1.52.0"
     "@radix-ui/react-context-menu": "npm:^2.2.16"
@@ -9985,6 +10009,7 @@ __metadata:
     "@tiptap/y-tiptap": "npm:^3.0.0"
     "@truto/turndown-plugin-gfm": "npm:^1.0.2"
     "@tryfabric/martian": "npm:^1.2.4"
+    "@types/better-sqlite3": "npm:^7.6.12"
     "@types/cli-progress": "npm:^3"
     "@types/content-type": "npm:^1.1.9"
     "@types/cors": "npm:^2.8.19"
@@ -10028,6 +10053,7 @@ __metadata:
     archiver: "npm:^7.0.1"
     async-mutex: "npm:^0.5.0"
     axios: "npm:^1.7.3"
+    better-sqlite3: "npm:12.4.1"
     browser-image-compression: "npm:^2.0.2"
     chardet: "npm:^2.1.0"
     check-disk-space: "npm:3.4.0"
@@ -10059,6 +10085,7 @@ __metadata:
     electron-window-state: "npm:^5.0.3"
     emittery: "npm:^1.0.3"
     emoji-picker-element: "npm:^1.22.1"
+    emoji-picker-element-data: "npm:^1"
     epub: "patch:epub@npm%3A1.3.0#~/.yarn/patches/epub-npm-1.3.0-8325494ffe.patch"
     eslint: "npm:^9.22.0"
     eslint-plugin-import-zod: "npm:^1.2.0"
@@ -10886,6 +10913,17 @@ __metadata:
   languageName: node
   linkType: hard
 
+"better-sqlite3@npm:12.4.1":
+  version: 12.4.1
+  resolution: "better-sqlite3@npm:12.4.1"
+  dependencies:
+    bindings: "npm:^1.5.0"
+    node-gyp: "npm:latest"
+    prebuild-install: "npm:^7.1.1"
+  checksum: 10c0/88773a75d996b4171e5690a38459b05dc814a792701b224bd9909ee084dc0b4c64aaffbdbcf4bbbc6d4e247faf19e91b2a56cf4175d746d3bd9ff14764eb05aa
+  languageName: node
+  linkType: hard
+
 "bignumber.js@npm:^9.0.0":
   version: 9.2.1
   resolution: "bignumber.js@npm:9.2.1"
@@ -10900,6 +10938,15 @@ __metadata:
   languageName: node
   linkType: hard
 
+"bindings@npm:^1.5.0":
+  version: 1.5.0
+  resolution: "bindings@npm:1.5.0"
+  dependencies:
+    file-uri-to-path: "npm:1.0.0"
+  checksum: 10c0/3dab2491b4bb24124252a91e656803eac24292473e56554e35bbfe3cc1875332cfa77600c3bac7564049dc95075bf6fcc63a4609920ff2d64d0fe405fcf0d4ba
+  languageName: node
+  linkType: hard
+
 "birecord@npm:^0.1.1":
   version: 0.1.1
   resolution: "birecord@npm:0.1.1"
@@ -13640,6 +13687,13 @@ __metadata:
   languageName: node
   linkType: hard
 
+"emoji-picker-element-data@npm:^1":
+  version: 1.8.0
+  resolution: "emoji-picker-element-data@npm:1.8.0"
+  checksum: 10c0/c8976b636205a0cc90d2690859a1193add71a948dadf743962b47c338a4c3715768404d0ccbc02156608b44abf41f3e1d51756e06f1bbed9d164dd4cb1752103
+  languageName: node
+  linkType: hard
+
 "emoji-picker-element@npm:^1.22.1":
   version: 1.26.3
   resolution: "emoji-picker-element@npm:1.26.3"
@@ -14907,6 +14961,13 @@ __metadata:
   languageName: node
   linkType: hard
 
+"file-uri-to-path@npm:1.0.0":
+  version: 1.0.0
+  resolution: "file-uri-to-path@npm:1.0.0"
+  checksum: 10c0/3b545e3a341d322d368e880e1c204ef55f1d45cdea65f7efc6c6ce9e0c4d22d802d5629320eb779d006fe59624ac17b0e848d83cc5af7cd101f206cb704f5519
+  languageName: node
+  linkType: hard
+
 "filelist@npm:^1.0.4":
   version: 1.0.4
   resolution: "filelist@npm:1.0.4"
@@ -20637,7 +20698,7 @@ __metadata:
   languageName: node
   linkType: hard
 
-"prebuild-install@npm:^7.1.2":
+"prebuild-install@npm:^7.1.1, prebuild-install@npm:^7.1.2":
   version: 7.1.3
   resolution: "prebuild-install@npm:7.1.3"
   dependencies: