Compare commits

...

30 Commits

Author SHA1 Message Date
suyao
76d48f9ccb fix: address PR review issues for Volcengine integration
- Fix region field being ignored: pass user-configured region to listFoundationModels and listEndpoints
- Add user notification before silent fallback when API fails
- Throw error on credential corruption instead of returning null
- Remove redundant credentials (accessKeyId, secretAccessKey) from Redux store (they're securely stored via safeStorage)
- Add warnings field to ListModelsResult for partial API failures
- Fix Redux/IPC order: save to secure storage first, then update Redux on success
- Update related tests

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-29 19:39:15 +08:00
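The "secure storage first, Redux second" ordering from the last bullets can be sketched as below. This is a minimal illustration, not the project's actual code: `saveToSecureStorage`, the action type, and the payload shape are hypothetical stand-ins for the real IPC and Redux wiring.

```typescript
// Hypothetical sketch of the corrected save ordering for Volcengine credentials.
type Credentials = { accessKeyId: string; secretAccessKey: string };

async function saveVolcengineCredentials(
  creds: Credentials,
  saveToSecureStorage: (c: Credentials) => Promise<void>, // stand-in for the safeStorage IPC call
  dispatch: (action: { type: string; payload: { configured: boolean } }) => void
): Promise<void> {
  // Persist to secure storage first; if this throws, Redux is never touched,
  // so the store cannot drift ahead of what is actually persisted.
  await saveToSecureStorage(creds);
  // Only a non-secret flag reaches Redux -- the raw keys stay out of the store.
  dispatch({ type: 'volcengine/setConfigured', payload: { configured: true } });
}
```

If the secure write fails, the error propagates to the caller and the Redux state is left untouched, which is the invariant the fix establishes.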
suyao
115cd80432 fix: format 2025-11-27 01:33:17 +08:00
suyao
531101742e feat: add project name support for Volcengine integration 2025-11-27 01:29:02 +08:00
suyao
c3c577dff4 feat: add Volcengine integration with settings and API client
- Implement Volcengine configuration in multiple languages (el-gr, es-es, fr-fr, ja-jp, pt-pt, ru-ru).
- Add Volcengine settings component to manage access key ID, secret access key, and region.
- Create Volcengine service for API interactions, including credential management and model listing.
- Extend OpenAI API client to support Volcengine's signed API for model retrieval.
- Update Redux store to handle Volcengine settings and credentials.
- Implement migration for Volcengine settings in the store.
- Add hooks for accessing and managing Volcengine settings in the application.
2025-11-27 01:15:22 +08:00
Apine
eb4670c22c docs: correct the links on the readme (#11477) 2025-11-26 21:17:25 +08:00
fullex
c0beab0f8a chore: update release notes for v1.7.0-rc.3
- Updated version to 1.7.0-rc.3 in package.json
- Added new features including support for Silicon provider and AIHubMix
- Consolidated bug fixes related to providers, models, UI, and settings
- Improved SDK integration with upgraded dependencies
2025-11-26 21:09:27 +08:00
chenxue
97519d96d7 feat(aihubmix): support nano banana (#11476)
support nano banana
2025-11-26 20:51:52 +08:00
Phantom
cbf1d461f0 fix(i18n): clean up translation tags and untranslated strings (#11471)
fix(i18n): update translation strings in ja-jp and ru-ru files

Remove unnecessary translate_input tags and fix incorrect translations
2025-11-26 20:08:04 +08:00
SuYao
bed55c418d fix: silicon provider code list (#11474) 2025-11-26 19:59:57 +08:00
Copilot
82ef4a32eb Fix Poe API reasoning parameters for GPT-5 and reasoning models (#11379)
* Initial plan

* feat: Add proper Poe API reasoning parameters support for GPT-5 and other models

Co-authored-by: DeJeune <67425183+DeJeune@users.noreply.github.com>

* test: Add comprehensive tests for Poe API reasoning support

Co-authored-by: DeJeune <67425183+DeJeune@users.noreply.github.com>

* fix: Add missing isGPT5SeriesModel import in reasoning.ts

Co-authored-by: DeJeune <67425183+DeJeune@users.noreply.github.com>

* fix: Use correct extra_body format for Poe API reasoning parameters

Per Poe API documentation, custom bot parameters like reasoning_effort
and thinking_budget should be passed directly in extra_body, not as
nested structures.

Changed from:
- reasoning_effort: 'low' -> extra_body: { reasoning_effort: 'low' }
- thinking: { type: 'enabled', budget_tokens: X } -> extra_body: { thinking_budget: X }
- extra_body: { google: { thinking_config: {...} } } -> extra_body: { thinking_budget: X }

Updated tests to match the corrected implementation.

Co-authored-by: DeJeune <67425183+DeJeune@users.noreply.github.com>

* fix: Update reasoning parameters and improve type definitions for GPT-5 support

* fix lint

* docs

* fix(reasoning): handle edge cases for models without token limit configuration

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: DeJeune <67425183+DeJeune@users.noreply.github.com>
Co-authored-by: suyao <sy20010504@gmail.com>
2025-11-26 19:56:31 +08:00
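The parameter reshaping described in the commit body can be sketched as follows. The helper name and types are hypothetical; the shape of `extra_body` follows the commit's own before/after examples.

```typescript
// Hypothetical sketch: Poe custom bot parameters go directly inside
// extra_body, not as top-level or vendor-nested structures.
type PoeExtraBody = { reasoning_effort?: string; thinking_budget?: number };

function buildPoeReasoningParams(
  effort?: 'low' | 'medium' | 'high',
  budgetTokens?: number
): { extra_body: PoeExtraBody } {
  const extra_body: PoeExtraBody = {};
  if (effort) extra_body.reasoning_effort = effort;            // was: top-level reasoning_effort
  if (budgetTokens) extra_body.thinking_budget = budgetTokens; // was: thinking: { budget_tokens }
  return { extra_body };
}
```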
槑囿脑袋
79f75843a7 fix: get quota and quota tips (#11472)
Co-authored-by: Claude <noreply@anthropic.com>
2025-11-26 19:53:59 +08:00
Phantom
91f0c47b33 fix(anthropic): prevent duplicate /v1 in API endpoints (#11467)
* fix(anthropic): prevent duplicate /v1 in API endpoints

Anthropic SDK automatically appends /v1 to endpoints, so we should not add it in our formatting. This change ensures URLs are correctly formatted without duplicate path segments.

* fix(anthropic): strip /v1 suffix in getSdkClient to prevent duplicate in models endpoint

The issue was:
- AI SDK (for chat) needs baseURL with /v1 suffix
- Anthropic SDK (for listModels) automatically appends /v1 to all endpoints

Solution:
- Keep /v1 in formatProviderApiHost for AI SDK compatibility
- Strip /v1 in getSdkClient before passing to Anthropic SDK
- This ensures chat works correctly while preventing /v1/v1/models duplication

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix(anthropic): correct preview URL to match actual request behavior

The preview now correctly shows:
- Input: https://api.siliconflow.cn/v2
- Preview: https://api.siliconflow.cn/v2/messages (was incorrectly showing /v2/v1/messages)
- Actual: https://api.siliconflow.cn/v2/messages

This matches the actual behavior where getSdkClient strips /v1 suffix before
passing to Anthropic SDK, which then appends /v1/messages.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix(anthropic): strip all API version suffixes, not just /v1

The Anthropic SDK always appends /v1 to endpoints, regardless of the baseURL.
Previously we only stripped /v1 suffix, causing issues with custom versions like /v2.

Now we strip all version suffixes (/v1, /v2, /v1beta, etc.) before passing to Anthropic SDK.

Examples:
- Input: https://api.siliconflow.cn/v2/
- After strip: https://api.siliconflow.cn
- Actual request: https://api.siliconflow.cn/v1/messages 

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix(anthropic): correct preview to show AI SDK behavior, not Anthropic SDK

The preview was showing the wrong URL because it was reflecting Anthropic SDK behavior
(which strips versions and uses /v1), but checkApi and chat use AI SDK which preserves
the user's version path.

Now preview correctly shows:
- Input: https://api.siliconflow.cn/v2/
- AI SDK (checkApi/chat): https://api.siliconflow.cn/v2/messages 
- Preview: https://api.siliconflow.cn/v2/messages 

Note: Anthropic SDK (for listModels) still strips versions to use /v1/models,
but this is not shown in preview since it's a different code path.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* refactor(checkApi): remove unnecessary legacy fallback

The legacy fallback logic in checkApi was:
1. Complex and hard to maintain
2. Never actually triggered in practice for Modern SDK supported providers
3. Could cause duplicate API requests

Since Modern AI SDK now handles all major providers correctly,
we can simplify by directly throwing errors instead of falling back.

This also removes unused imports: AiProvider and CompletionsParams.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix(anthropic): restore version stripping in getSdkClient for Anthropic SDK

The Anthropic SDK (used for listModels) always appends /v1 to endpoints,
so we need to strip version suffixes from baseURL to avoid duplication.

This only affects Anthropic SDK operations (like listModels).
AI SDK operations (chat/checkApi) use provider.apiHost directly via
providerToAiSdkConfig, which preserves the user's version path.

Examples:
- AI SDK (chat): https://api.siliconflow.cn/v1 -> /v1/messages 
- Anthropic SDK (models): https://api.siliconflow.cn/v1 -> strip v1 -> /v1/models 

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix(anthropic): ensure AI SDK gets /v1 in baseURL, strip for Anthropic SDK

The correct behavior is:
1. formatProviderApiHost: Add /v1 to apiHost (for AI SDK compatibility)
2. AI SDK (chat/checkApi): Use apiHost with /v1 -> /v1/messages 
3. Anthropic SDK (listModels): Strip /v1 from baseURL -> SDK adds /v1/models 
4. Preview: Show AI SDK behavior (main use case) -> /v1/messages 

Examples:
- Input: https://api.siliconflow.cn
- Formatted: https://api.siliconflow.cn/v1 (added by formatApiHost)
- AI SDK: https://api.siliconflow.cn/v1/messages 
- Anthropic SDK: https://api.siliconflow.cn (stripped) + /v1/models 
- Preview: https://api.siliconflow.cn/v1/messages 

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* refactor(ai): simplify AiProviderNew initialization and improve docs

Update AiProviderNew constructor to automatically format URLs by default
Add comprehensive documentation explaining constructor behavior and usage

* chore: remove unused play.ts file

* fix(anthropic): strip api version from baseURL to avoid endpoint duplication

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-26 19:26:39 +08:00
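The final behavior of this PR — strip any trailing API-version path segment before handing the baseURL to the Anthropic SDK, which appends `/v1` itself — can be sketched as a small helper. The function name and regex are illustrative assumptions, not the repository's exact code.

```typescript
// Hypothetical helper: remove a trailing version segment (/v1, /v2, /v1beta, ...)
// so the Anthropic SDK's automatic "/v1" append does not produce /v1/v1/... paths.
function stripApiVersion(baseURL: string): string {
  return baseURL
    .replace(/\/+$/, '')                    // drop trailing slashes first
    .replace(/\/v\d+(?:[a-z]+\d*)?$/i, ''); // then drop /v1, /v2, /v1beta, ...
}
```

With this in place, `https://api.siliconflow.cn/v2/` becomes `https://api.siliconflow.cn`, and the SDK then requests `https://api.siliconflow.cn/v1/messages`, matching the examples in the commit.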
SuYao
28dff9dfe3 feat: add silicon provider support for Anthropic API compatibility (#11468)
* feat: add silicon provider support for Anthropic API compatibility

* fix: update handling of ANTHROPIC_BASE_URL for silicon provider compatibility

* fix: update anthropicApiHost for silicon provider to use the correct endpoint

* fix: remove silicon from CLAUDE_OFFICIAL_SUPPORTED_PROVIDERS

* chore: add comment to clarify silicon model fallback logic in CLAUDE_OFFICIAL_SUPPORTED_PROVIDERS
2025-11-26 19:19:34 +08:00
fullex
155930ecf4 chore: update @types/react and @types/react-dom to latest versions
- Bumped @types/react from ^19.0.12 to ^19.2.7
- Bumped @types/react-dom from ^19.0.4 to ^19.2.3
- Updated csstype dependency from ^3.0.2 to ^3.2.2 in yarn.lock

These updates ensure compatibility with the latest React types and improve type definitions.
2025-11-26 16:05:40 +08:00
Shuchen Luo
b6b999b635 fix: add claude-opus-4-5 pattern to THINKING_TOKEN_MAP (#11457)
* fix: add claude-opus-4-5 pattern to THINKING_TOKEN_MAP

Adds missing regex pattern for claude-opus-4-5 models (e.g., claude-opus-4-5-20251101)
to the THINKING_TOKEN_MAP configuration. Without this pattern, the model was not
recognized, causing findTokenLimit() to return undefined and leading to an
AI_InvalidArgumentError when using Google Vertex AI Anthropic provider.

The fix adds the pattern 'claude-opus-4-5.*$': { min: 1024, max: 64_000 } to
match the existing claude-4 thinking token configuration.

Fixes AI_InvalidArgumentError: invalid anthropic provider options caused by
budgetTokens receiving NaN instead of a number.

Signed-off-by: Shuchen Luo (personal linux) <nemo0806@gmail.com>

* refactor: make THINKING_TOKEN_MAP constant private

* fix(reasoning): update claude model token limit regex patterns

- Consolidate claude model regex patterns to be more consistent
- Add comprehensive test cases for various claude model variants
- Ensure case insensitivity and proper handling of edge cases

* fix: format

* feat(models): extend claude model regex patterns to support AWS and GCP formats

Update regex patterns in THINKING_TOKEN_MAP to support additional Claude model ID formats used in AWS Bedrock and GCP Vertex AI
Add comprehensive test cases for new model ID formats and reorganize test suite

* fix: format

---------

Signed-off-by: Shuchen Luo (personal linux) <nemo0806@gmail.com>
Co-authored-by: icarus <eurfelux@gmail.com>
2025-11-26 15:47:14 +08:00
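The pattern-map lookup this commit repairs can be sketched as below. The real `THINKING_TOKEN_MAP` is private and contains many more entries; only the `claude-opus-4-5` pattern and its `{ min: 1024, max: 64_000 }` limits come from the commit text, and `findTokenLimit` here is a simplified stand-in.

```typescript
// Hypothetical sketch of the regex-keyed token-limit map lookup.
const THINKING_TOKEN_MAP: Record<string, { min: number; max: number }> = {
  'claude-opus-4-5.*$': { min: 1024, max: 64_000 } // the pattern added by this fix
};

function findTokenLimit(modelId: string): { min: number; max: number } | undefined {
  for (const [pattern, limit] of Object.entries(THINKING_TOKEN_MAP)) {
    if (new RegExp(pattern, 'i').test(modelId)) return limit;
  }
  // Before the fix, claude-opus-4-5-* fell through to here, and the resulting
  // undefined produced a NaN budgetTokens downstream.
  return undefined;
}
```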
SuYao
0d69eeaccf fix: improve Gemini reasoning and message handling (#11439)
* fix: some bug

* fix/test

* fix: lint

* fix: add a middleware to skip Gemini3 thinking signatures and update the message conversion logic

* fix: comment

* fix: js docs

* fix: id bug

* fix: condition

* fix: Update the user's verbosity setting logic to ensure that supported options are prioritized for use.

* fix: Add support for the 'openai-response' provider type.

* fix: lint
2025-11-26 15:46:52 +08:00
Phantom
ff48ce0a58 docs: enhance CLAUDE.md with quality guidelines (#11464)
* docs: add linting and testing step to completion guidelines

* docs: update CLAUDE.md with PR template guideline
2025-11-26 15:45:43 +08:00
SuYao
a2de7d48be fix: update Azure provider handling in AI SDK integration (#11465) 2025-11-26 15:43:32 +08:00
fullex
d4396b4890 docs: update links in Chinese contributing guide
- Corrected the paths in the Chinese version of the contributing guide to point to the appropriate documentation locations.
2025-11-26 13:21:10 +08:00
fullex
283519f1fd Merge branch 'main' of github.com:CherryHQ/cherry-studio 2025-11-26 13:17:09 +08:00
fullex
bb41709ce8 docs: update docs directory structure
- Updated links in CONTRIBUTING.md and README.md to point to the correct Chinese documentation paths.
- Removed outdated files including the English and Chinese versions of the branching strategy, contributing guide, and test plan documents.
- Cleaned up references to non-existent documentation in the project structure to streamline the contributor experience.
2025-11-26 13:17:01 +08:00
Copilot
c1f4b5b9b9 Fix: custom parameters for Gemini models (#11456)
* Initial plan

* fix(aiCore): extract AI SDK standard params from custom params for Gemini

Custom parameters like topK, frequencyPenalty, presencePenalty,
stopSequences, and seed should be passed as top-level streamText()
parameters, not in providerOptions. This fixes the issue where these
parameters were being ignored by the AI SDK's @ai-sdk/google module.

Changes:
- Add extractAiSdkStandardParams function to separate standard params
- Update buildProviderOptions to return both providerOptions and standardParams
- Update buildStreamTextParams to spread standardParams into params object
- Update tests to reflect new return structure

Co-authored-by: DeJeune <67425183+DeJeune@users.noreply.github.com>

* refactor(aiCore): remove extractAiSdkStandardParams function and its tests, streamline parameter extraction logic

* chore: type

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: DeJeune <67425183+DeJeune@users.noreply.github.com>
Co-authored-by: suyao <sy20010504@gmail.com>
2025-11-26 13:16:58 +08:00
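The separation this fix performs — lifting AI SDK standard parameters out of the custom-parameter bag so `streamText()` receives them top-level instead of inside `providerOptions`, where `@ai-sdk/google` ignores them — can be sketched like this. The function name is hypothetical; the key list comes from the commit message.

```typescript
// Hypothetical sketch: split custom params into AI SDK standard params
// (passed top-level to streamText) and provider-specific options.
const AI_SDK_STANDARD_KEYS = ['topK', 'frequencyPenalty', 'presencePenalty', 'stopSequences', 'seed'] as const;

function splitCustomParams(custom: Record<string, unknown>) {
  const standardParams: Record<string, unknown> = {};
  const providerOptions: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(custom)) {
    if ((AI_SDK_STANDARD_KEYS as readonly string[]).includes(key)) {
      standardParams[key] = value; // spread into streamText(...) directly
    } else {
      providerOptions[key] = value; // everything else stays provider-specific
    }
  }
  return { standardParams, providerOptions };
}
```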
SuYao
5fb59d21ec fix: header merging logic via chore ai-sdk (#11443)
* fix: update provider-utils and add patch for header merging logic

* fix: enhance header merging logic to deduplicate values

* fix: handle null values in header merging logic

* chore: update ai-sdk dependencies and remove obsolete patches

- Updated @ai-sdk/amazon-bedrock from 3.0.56 to 3.0.61
- Updated @ai-sdk/anthropic from 2.0.45 to 2.0.49
- Updated @ai-sdk/gateway from 2.0.13 to 2.0.15
- Updated @ai-sdk/google from 2.0.40 to 2.0.43
- Updated @ai-sdk/google-vertex from 3.0.72 to 3.0.79
- Updated @ai-sdk/openai from 2.0.71 to 2.0.72
- Updated @ai-sdk/provider-utils from patch version to 3.0.17
- Removed obsolete patches for @ai-sdk/openai and @ai-sdk/provider-utils
- Added reasoning_content field to OpenAIChat response and chunk schemas
- Enhanced OpenAIChatLanguageModel to handle reasoning content in responses

* chore
2025-11-26 12:31:55 +08:00
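The header-merging behavior the early commits in this PR patched — deduplicating values and tolerating null — can be sketched as below. This is an illustrative assumption about the patched semantics, not the `@ai-sdk/provider-utils` source.

```typescript
// Hypothetical sketch of header merging: later sources win, a null value
// deletes the header, and duplicate comma-separated values are collapsed.
function mergeHeaders(
  ...sources: Array<Record<string, string | null | undefined>>
): Record<string, string> {
  const merged: Record<string, string> = {};
  for (const source of sources) {
    for (const [key, value] of Object.entries(source)) {
      const k = key.toLowerCase();
      if (value == null) {
        delete merged[k]; // null/undefined removes a previously merged header
      } else {
        merged[k] = value;
      }
    }
  }
  // Deduplicate repeated comma-separated values within each header.
  for (const k of Object.keys(merged)) {
    merged[k] = [...new Set(merged[k].split(',').map((v) => v.trim()))].join(', ');
  }
  return merged;
}
```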
Phantom
e8de31ca64 fix: Groq verbosity setting (#11452)
* feat(settings): show OpenAI settings for supported service tier providers

Add support for displaying OpenAI settings when provider supports service tiers.
This includes refactoring the condition check and fixing variable naming consistency.

* fix(settings): set openAI verbosity to undefined by default

* fix(store): bump version to 178 and disable verbosity for groq provider

Add migration to remove verbosity from groq provider and implement provider utility to check verbosity support
Update provider types to include verbosity support flag

* feat(provider): add verbosity option support for providers

Add verbosity parameter support in provider API options settings

* fix(aiCore): check provider support for verbosity before applying

Add provider validation and check for verbosity support to prevent errors when unsupported providers are used with verbosity settings

* feat(settings): add Groq settings group component and translations

add new GroqSettingsGroup component for managing Groq provider settings
update translations for Groq settings in both zh-cn and en-us locales
refactor OpenAISettingsGroup to separate Groq-specific logic

* feat(i18n): add groq settings and verbosity support translations

add translations for groq settings title and verbosity parameter support in multiple languages

* refactor(settings): simplify service tier mode fallback logic

Remove conditional service tier mode fallback and use provider-specific defaults directly

* fix(provider): remove redundant system provider check in verbosity support

* test(provider): add tests for verbosity support detection

* fix(OpenAISettingsGroup): add endpoint_type check for showSummarySetting condition

Add model.endpoint_type check to properly determine when to show summary setting for OpenAI models

* refactor(selector): simplify selector option types and add utility functions

remove undefined and null from selector option types
add utility functions to convert between option values and real values
update groq and openai settings groups to use new utilities
add new translation for "ignore" option

* fix(ApiOptionsSettings): correct checked state for verbosity toggle

* feat(i18n): add "ignore" translation for multiple languages

* refactor(groq): remove unused model prop and related checks

Clean up GroqSettingsGroup component by removing unused model prop and unnecessary service tier checks
2025-11-25 23:29:03 +08:00
Phantom
69d31a1e2b fix(models): qwen-mt-flash supports text delta (#11448)
refactor(models): improve text delta support check for qwen-mt models

Replace direct qwen-mt model check with regex pattern matching
Add comprehensive test cases for isNotSupportTextDeltaModel
Update all references to use new function name
2025-11-25 22:22:18 +08:00
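Replacing the direct model check with pattern matching might look like the sketch below. The actual regex in the repository may differ; this one is an assumption chosen to cover the qwen-mt family, including qwen-mt-flash.

```typescript
// Hypothetical regex check replacing an exact-match test for qwen-mt models.
const NOT_SUPPORT_TEXT_DELTA_REGEX = /^qwen-mt(?:-[\w.]+)?$/i;

function isNotSupportTextDeltaModel(modelId: string): boolean {
  return NOT_SUPPORT_TEXT_DELTA_REGEX.test(modelId);
}
```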
fullex
fd3b7f717d fix: correct updateAssistantPreset reducer to properly update preset (#11453)
The previous implementation used `a = preset` inside forEach, which only
reassigns the local variable and doesn't actually update the array element.

Changed to use findIndex + direct array assignment to properly update
the preset in the state.

Fixes #11451

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-25 20:59:56 +08:00
LiuVaayne
bcd7bc9f2d ⬆️ chore: upgrade @anthropic-ai/claude-agent-sdk to 0.1.53 (#11444)
- Upgrade from 0.1.30 to 0.1.53
- Re-apply fork() patch for Electron IPC compatibility
2025-11-25 18:46:11 +08:00
kangfenmao
4dd92c3ce1 fix: handle optional provider in isSupportedReasoningEffortGrokModel function 2025-11-25 17:22:54 +08:00
SuYao
dc8df98929 fix: websearch button condition (#11440)
fix: button
2025-11-25 13:24:37 +08:00
fullex
0004a8cafe fix: respect enableMaxTokens setting when maxTokens is not configured (#11438)
* fix: respect enableMaxTokens setting when maxTokens is not configured

When enableMaxTokens is disabled, getMaxTokens() should return undefined
to let the API use its own default value, instead of forcing 4096 tokens.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix(modelParameters): handle max tokens when feature is disabled

Check if max tokens feature is enabled before returning undefined to ensure proper API behavior

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: icarus <eurfelux@gmail.com>
2025-11-25 11:12:50 +08:00
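The fixed behavior can be sketched in a few lines; the signature is a simplified assumption rather than the actual `getMaxTokens()` in the codebase.

```typescript
// Hypothetical sketch: with the feature disabled, return undefined so the
// API applies its own default instead of a forced 4096.
function getMaxTokens(enableMaxTokens: boolean, maxTokens?: number): number | undefined {
  if (!enableMaxTokens) return undefined; // let the API choose its default
  return maxTokens ?? 4096;               // explicit fallback only when enabled
}
```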
118 changed files with 4579 additions and 1011 deletions

View File

@@ -1,8 +1,8 @@
diff --git a/dist/index.js b/dist/index.js
-index dc7b74ba55337c491cdf1ab3e39ca68cc4187884..ace8c90591288e42c2957e93c9bf7984f1b22444 100644
+index 51ce7e423934fb717cb90245cdfcdb3dae6780e6..0f7f7009e2f41a79a8669d38c8a44867bbff5e1f 100644
--- a/dist/index.js
+++ b/dist/index.js
-@@ -472,7 +472,7 @@ function convertToGoogleGenerativeAIMessages(prompt, options) {
+@@ -474,7 +474,7 @@ function convertToGoogleGenerativeAIMessages(prompt, options) {
// src/get-model-path.ts
function getModelPath(modelId) {
@@ -12,10 +12,10 @@ index dc7b74ba55337c491cdf1ab3e39ca68cc4187884..ace8c90591288e42c2957e93c9bf7984
// src/google-generative-ai-options.ts
diff --git a/dist/index.mjs b/dist/index.mjs
-index 8390439c38cb7eaeb52080862cd6f4c58509e67c..a7647f2e11700dff7e1c8d4ae8f99d3637010733 100644
+index f4b77e35c0cbfece85a3ef0d4f4e67aa6dde6271..8d2fecf8155a226006a0bde72b00b6036d4014b6 100644
--- a/dist/index.mjs
+++ b/dist/index.mjs
-@@ -478,7 +478,7 @@ function convertToGoogleGenerativeAIMessages(prompt, options) {
+@@ -480,7 +480,7 @@ function convertToGoogleGenerativeAIMessages(prompt, options) {
// src/get-model-path.ts
function getModelPath(modelId) {

View File

@@ -1,5 +1,5 @@
diff --git a/dist/index.js b/dist/index.js
-index 7481f3b3511078068d87d03855b568b20bb86971..8ac5ec28d2f7ad1b3b0d3f8da945c75674e59637 100644
+index bf900591bf2847a3253fe441aad24c06da19c6c1..c1d9bb6fefa2df1383339324073db0a70ea2b5a2 100644
--- a/dist/index.js
+++ b/dist/index.js
@@ -274,6 +274,7 @@ var openaiChatResponseSchema = (0, import_provider_utils3.lazyValidator)(

View File

@@ -1,8 +1,8 @@
diff --git a/sdk.mjs b/sdk.mjs
-index 8cc6aaf0b25bcdf3c579ec95cde12d419fcb2a71..3b3b8beaea5ad2bbac26a15f792058306d0b059f 100755
+index bf429a344b7d59f70aead16b639f949b07688a81..f77d50cc5d3fb04292cb3ac7fa7085d02dcc628f 100755
--- a/sdk.mjs
+++ b/sdk.mjs
-@@ -6213,7 +6213,7 @@ function createAbortController(maxListeners = DEFAULT_MAX_LISTENERS) {
+@@ -6250,7 +6250,7 @@ function createAbortController(maxListeners = DEFAULT_MAX_LISTENERS) {
}
// ../src/transport/ProcessTransport.ts
@@ -11,16 +11,20 @@ index 8cc6aaf0b25bcdf3c579ec95cde12d419fcb2a71..3b3b8beaea5ad2bbac26a15f79205830
import { createInterface } from "readline";
// ../src/utils/fsOperations.ts
-@@ -6505,14 +6505,11 @@ class ProcessTransport {
+@@ -6619,18 +6619,11 @@ class ProcessTransport {
const errorMessage = isNativeBinary(pathToClaudeCodeExecutable) ? `Claude Code native binary not found at ${pathToClaudeCodeExecutable}. Please ensure Claude Code is installed via native installer or specify a valid path with options.pathToClaudeCodeExecutable.` : `Claude Code executable not found at ${pathToClaudeCodeExecutable}. Is options.pathToClaudeCodeExecutable set?`;
throw new ReferenceError(errorMessage);
}
- const isNative = isNativeBinary(pathToClaudeCodeExecutable);
- const spawnCommand = isNative ? pathToClaudeCodeExecutable : executable;
- const spawnArgs = isNative ? [...executableArgs, ...args] : [...executableArgs, pathToClaudeCodeExecutable, ...args];
- this.logForDebugging(isNative ? `Spawning Claude Code native binary: ${spawnCommand} ${spawnArgs.join(" ")}` : `Spawning Claude Code process: ${spawnCommand} ${spawnArgs.join(" ")}`);
+ this.logForDebugging(`Forking Claude Code Node.js process: ${pathToClaudeCodeExecutable} ${args.join(" ")}`);
const stderrMode = env.DEBUG || stderr ? "pipe" : "ignore";
- const spawnMessage = isNative ? `Spawning Claude Code native binary: ${spawnCommand} ${spawnArgs.join(" ")}` : `Spawning Claude Code process: ${spawnCommand} ${spawnArgs.join(" ")}`;
- logForSdkDebugging(spawnMessage);
- if (stderr) {
- stderr(spawnMessage);
- }
+ logForSdkDebugging(`Forking Claude Code Node.js process: ${pathToClaudeCodeExecutable} ${args.join(" ")}`);
const stderrMode = env.DEBUG_CLAUDE_AGENT_SDK || stderr ? "pipe" : "ignore";
- this.child = spawn(spawnCommand, spawnArgs, {
+ this.child = fork(pathToClaudeCodeExecutable, args, {
cwd,

View File

@@ -10,7 +10,9 @@ This file provides guidance to AI coding assistants when working with code in th
- **Log centrally**: Route all logging through `loggerService` with the right context—no `console.log`.
- **Research via subagent**: Lean on `subagent` for external docs, APIs, news, and references.
- **Always propose before executing**: Before making any changes, clearly explain your planned approach and wait for explicit user approval to ensure alignment and prevent unwanted modifications.
+- **Lint, test, and format before completion**: Coding tasks are only complete after running `yarn lint`, `yarn test`, and `yarn format` successfully.
- **Write conventional commits**: Commit small, focused changes using Conventional Commit messages (e.g., `feat:`, `fix:`, `refactor:`, `docs:`).
+- **Follow PR template**: When submitting pull requests, follow the template in `.github/pull_request_template.md` to ensure complete context and documentation.
## Development Commands

View File

@@ -1,4 +1,4 @@
-[中文](docs/CONTRIBUTING.zh.md) | [English](CONTRIBUTING.md)
+[中文](docs/zh/guides/contributing.md) | [English](CONTRIBUTING.md)
# Cherry Studio Contributor Guide
@@ -32,7 +32,7 @@ To help you get familiar with the codebase, we recommend tackling issues tagged
### Testing
-Features without tests are considered non-existent. To ensure code is truly effective, relevant processes should be covered by unit tests and functional tests. Therefore, when considering contributions, please also consider testability. All tests can be run locally without dependency on CI. Please refer to the "Testing" section in the [Developer Guide](docs/dev.md).
+Features without tests are considered non-existent. To ensure code is truly effective, relevant processes should be covered by unit tests and functional tests. Therefore, when considering contributions, please also consider testability. All tests can be run locally without dependency on CI. Please refer to the "Testing" section in the [Developer Guide](docs/zh/guides/development.md).
### Automated Testing for Pull Requests
@@ -60,7 +60,7 @@ Maintainers are here to help you implement your use case within a reasonable tim
### Participating in the Test Plan
-The Test Plan aims to provide users with a more stable application experience and faster iteration speed. For details, please refer to the [Test Plan](docs/testplan-en.md).
+The Test Plan aims to provide users with a more stable application experience and faster iteration speed. For details, please refer to the [Test Plan](docs/en/guides/test-plan.md).
### Other Suggestions

View File

@@ -34,7 +34,7 @@
</a>
</h1>
-<p align="center">English | <a href="./docs/README.zh.md">中文</a> | <a href="https://cherry-ai.com">Official Site</a> | <a href="https://docs.cherry-ai.com/cherry-studio-wen-dang/en-us">Documents</a> | <a href="./docs/dev.md">Development</a> | <a href="https://github.com/CherryHQ/cherry-studio/issues">Feedback</a><br></p>
+<p align="center">English | <a href="./docs/zh/README.md">中文</a> | <a href="https://cherry-ai.com">Official Site</a> | <a href="https://docs.cherry-ai.com/cherry-studio-wen-dang/en-us">Documents</a> | <a href="./docs/en/guides/development.md">Development</a> | <a href="https://github.com/CherryHQ/cherry-studio/issues">Feedback</a><br></p>
<div align="center">
@@ -67,7 +67,7 @@ Cherry Studio is a desktop client that supports multiple LLM providers, availabl
👏 Join [Telegram Group](https://t.me/CherryStudioAI) | [Discord](https://discord.gg/wez8HtpxqQ) | [QQ Group(575014769)](https://qm.qq.com/q/lo0D4qVZKi)
-❤️ Like Cherry Studio? Give it a star 🌟 or [Sponsor](docs/sponsor.md) to support the development!
+❤️ Like Cherry Studio? Give it a star 🌟 or [Sponsor](docs/zh/guides/sponsor.md) to support the development!
# 🌠 Screenshot
@@ -175,7 +175,7 @@ We welcome contributions to Cherry Studio! Here are some ways you can contribute
6. **Community Engagement**: Join discussions and help users.
7. **Promote Usage**: Spread the word about Cherry Studio.
-Refer to the [Branching Strategy](docs/branching-strategy-en.md) for contribution guidelines
+Refer to the [Branching Strategy](docs/en/guides/branching-strategy.md) for contribution guidelines
## Getting Started

81
docs/README.md Normal file
View File

@@ -0,0 +1,81 @@
# Cherry Studio Documentation / 文档
This directory contains the project documentation in multiple languages.
本目录包含多语言项目文档。
---
## Languages / 语言
- **[中文文档](./zh/README.md)** - Chinese Documentation
- **English Documentation** - See sections below
---
## English Documentation
### Guides
| Document | Description |
|----------|-------------|
| [Development Setup](./en/guides/development.md) | Development environment setup |
| [Branching Strategy](./en/guides/branching-strategy.md) | Git branching workflow |
| [i18n Guide](./en/guides/i18n.md) | Internationalization guide |
| [Logging Guide](./en/guides/logging.md) | How to use the logger service |
| [Test Plan](./en/guides/test-plan.md) | Test plan and release channels |
### References
| Document | Description |
|----------|-------------|
| [App Upgrade Config](./en/references/app-upgrade.md) | Application upgrade configuration |
| [CodeBlockView Component](./en/references/components/code-block-view.md) | Code block view component |
| [Image Preview Components](./en/references/components/image-preview.md) | Image preview components |
---
## 中文文档
### 指南 (Guides)
| 文档 | 说明 |
|------|------|
| [开发环境设置](./zh/guides/development.md) | 开发环境配置 |
| [贡献指南](./zh/guides/contributing.md) | 如何贡献代码 |
| [分支策略](./zh/guides/branching-strategy.md) | Git 分支工作流 |
| [测试计划](./zh/guides/test-plan.md) | 测试计划和发布通道 |
| [国际化指南](./zh/guides/i18n.md) | 国际化开发指南 |
| [日志使用指南](./zh/guides/logging.md) | 如何使用日志服务 |
| [中间件开发](./zh/guides/middleware.md) | 如何编写中间件 |
| [记忆功能](./zh/guides/memory.md) | 记忆功能使用指南 |
| [赞助信息](./zh/guides/sponsor.md) | 赞助相关信息 |
### 参考 (References)
| 文档 | 说明 |
|------|------|
| [消息系统](./zh/references/message-system.md) | 消息系统架构和 API |
| [数据库结构](./zh/references/database.md) | 数据库表结构 |
| [服务](./zh/references/services.md) | 服务层文档 (KnowledgeService) |
| [代码执行](./zh/references/code-execution.md) | 代码执行功能 |
| [应用升级配置](./zh/references/app-upgrade.md) | 应用升级配置 |
| [CodeBlockView 组件](./zh/references/components/code-block-view.md) | 代码块视图组件 |
| [图像预览组件](./zh/references/components/image-preview.md) | 图像预览组件 |
---
## Missing Translations / 缺少翻译
The following documents are only available in Chinese and need English translations:
以下文档仅有中文版本,需要英文翻译:
- `guides/contributing.md`
- `guides/memory.md`
- `guides/middleware.md`
- `guides/sponsor.md`
- `references/message-system.md`
- `references/database.md`
- `references/services.md`
- `references/code-execution.md`

View File

(binary image file: 150 KiB before, 150 KiB after)

View File

(binary image file: 40 KiB before, 40 KiB after)

View File

(binary image file: 35 KiB before, 35 KiB after)

View File

(binary image file: 563 KiB before, 563 KiB after)

View File

@@ -16,7 +16,7 @@ Cherry Studio implements a structured branching strategy to maintain code qualit
- Only accepts documentation updates and bug fixes
- Thoroughly tested before production deployment
-For details about the `testplan` branch used in the Test Plan, please refer to the [Test Plan](testplan-en.md).
+For details about the `testplan` branch used in the Test Plan, please refer to the [Test Plan](./test-plan.md).
## Contributing Branches

View File

@@ -18,11 +18,11 @@ The plugin has already been configured in the project — simply install it to g
### Demo
-![demo-1](./.assets.how-to-i18n/demo-1.png)
+![demo-1](../../assets/images/i18n/demo-1.png)
-![demo-2](./.assets.how-to-i18n/demo-2.png)
+![demo-2](../../assets/images/i18n/demo-2.png)
-![demo-3](./.assets.how-to-i18n/demo-3.png)
+![demo-3](../../assets/images/i18n/demo-3.png)
## i18n Conventions

View File

@@ -19,7 +19,7 @@ Users are welcome to submit issues or provide feedback through other channels fo
### Participating in the Test Plan
Developers should submit `PRs` according to the [Contributor Guide](../CONTRIBUTING.md) (and ensure the target branch is `main`). The repository maintainers will evaluate whether the `PR` should be included in the Test Plan based on factors such as the impact of the feature on the application, its importance, and whether broader testing is needed.
Developers should submit `PRs` according to the [Contributor Guide](../../CONTRIBUTING.md) (and ensure the target branch is `main`). The repository maintainers will evaluate whether the `PR` should be included in the Test Plan based on factors such as the impact of the feature on the application, its importance, and whether broader testing is needed.
If the `PR` is added to the Test Plan, the repository maintainers will:


@@ -85,7 +85,7 @@ Main responsibilities:
- **SvgPreview**: SVG image preview
- **GraphvizPreview**: Graphviz diagram preview
All special view components share a common architecture for consistent user experience and functionality. For detailed information about these components and their implementation, see [Image Preview Components Documentation](./ImagePreview-en.md).
All special view components share a common architecture for consistent user experience and functionality. For detailed information about these components and their implementation, see [Image Preview Components Documentation](./image-preview.md).
#### StatusBar


@@ -192,4 +192,4 @@ Image Preview Components integrate seamlessly with CodeBlockView:
- Shared state management
- Responsive layout adaptation
For more information about the overall CodeBlockView architecture, see [CodeBlockView Documentation](./CodeBlockView-en.md).
For more information about the overall CodeBlockView architecture, see [CodeBlockView Documentation](./code-block-view.md).


@@ -1,3 +0,0 @@
# 消息的生命周期
![image](./message-lifecycle.png)


@@ -1,11 +0,0 @@
# 数据库设置字段
此文档包含部分字段的数据类型说明。
## 字段
| 字段名 | 类型 | 说明 |
| ------------------------------ | ------------------------------ | ------------ |
| `translate:target:language` | `LanguageCode` | 翻译目标语言 |
| `translate:source:language` | `LanguageCode` | 翻译源语言 |
| `translate:bidirectional:pair` | `[LanguageCode, LanguageCode]` | 双向翻译对 |


@@ -1,127 +0,0 @@
# messageBlock.ts 使用指南
该文件定义了用于管理应用程序中所有 `MessageBlock` 实体的 Redux Slice。它使用 Redux Toolkit 的 `createSlice` 和 `createEntityAdapter` 来高效地处理规范化的状态,并提供了一系列 actions 和 selectors 用于与消息块数据交互。
## 核心目标
- **状态管理**: 集中管理所有 `MessageBlock` 的状态。`MessageBlock` 代表消息中的不同内容单元(如文本、代码、图片、引用等)。
- **规范化**: 使用 `createEntityAdapter` 将 `MessageBlock` 数据存储在规范化的结构中(`{ ids: [], entities: {} }`),这有助于提高性能和简化更新逻辑。
- **可预测性**: 提供明确的 actions 来修改状态,并通过 selectors 安全地访问状态。
## 关键概念
- **Slice (`createSlice`)**: Redux Toolkit 的核心 API,用于创建包含 reducer 逻辑、action creators 和初始状态的 Redux 模块。
- **Entity Adapter (`createEntityAdapter`)**: Redux Toolkit 提供的工具,用于简化对规范化数据的 CRUD(创建、读取、更新、删除)操作。它会自动生成 reducer 函数和 selectors。
- **Selectors**: 用于从 Redux store 中派生和计算数据的函数。Selectors 可以被记忆化(memoized)以提高性能。
## State 结构
`messageBlocks` slice 的状态结构由 `createEntityAdapter` 定义,大致如下:
```typescript
{
ids: string[]; // 存储所有 MessageBlock ID 的有序列表
entities: { [id: string]: MessageBlock }; // 按 ID 存储 MessageBlock 对象的字典
loadingState: 'idle' | 'loading' | 'succeeded' | 'failed'; // (可选) 其他状态,如加载状态
error: string | null; // (可选) 错误信息
}
```
## Actions
该 slice 导出以下 actions(由 `createSlice` 和 `createEntityAdapter` 自动生成或自定义):
- **`upsertOneBlock(payload: MessageBlock)`**:
- 添加一个新的 `MessageBlock` 或更新一个已存在的 `MessageBlock`。如果 payload 中的 `id` 已存在,则执行更新;否则执行插入。
- **`upsertManyBlocks(payload: MessageBlock[])`**:
- 添加或更新多个 `MessageBlock`。常用于批量加载数据(例如,加载一个 Topic 的所有消息块)。
- **`removeOneBlock(payload: string)`**:
- 根据提供的 `id` (payload) 移除单个 `MessageBlock`。
- **`removeManyBlocks(payload: string[])`**:
- 根据提供的 `id` 数组 (payload) 移除多个 `MessageBlock`。常用于删除消息或清空 Topic 时清理相关的块。
- **`removeAllBlocks()`**:
- 移除 state 中的所有 `MessageBlock` 实体。
- **`updateOneBlock(payload: { id: string; changes: Partial<MessageBlock> })`**:
- 更新一个已存在的 `MessageBlock`。`payload` 需要包含块的 `id` 和一个包含要更改的字段的 `changes` 对象。
- **`setMessageBlocksLoading(payload: 'idle' | 'loading')`**:
- (自定义) 设置 `loadingState` 属性。
- **`setMessageBlocksError(payload: string)`**:
- (自定义) 设置 `loadingState` 为 `'failed'` 并记录错误信息。
**使用示例 (在 Thunk 或其他 Dispatch 的地方):**
```typescript
import { upsertOneBlock, removeManyBlocks, updateOneBlock } from './messageBlock'
import store from './store' // 假设这是你的 Redux store 实例
// 添加或更新一个块
const newBlock: MessageBlock = {
/* ... block data ... */
}
store.dispatch(upsertOneBlock(newBlock))
// 更新一个块的内容
store.dispatch(updateOneBlock({ id: blockId, changes: { content: 'New content' } }))
// 删除多个块
const blockIdsToRemove = ['id1', 'id2']
store.dispatch(removeManyBlocks(blockIdsToRemove))
```
## Selectors
该 slice 导出由 `createEntityAdapter` 生成的基础 selectors,并通过 `messageBlocksSelectors` 对象访问:
- **`messageBlocksSelectors.selectIds(state: RootState): string[]`**: 返回包含所有块 ID 的数组。
- **`messageBlocksSelectors.selectEntities(state: RootState): { [id: string]: MessageBlock }`**: 返回块 ID 到块对象的映射字典。
- **`messageBlocksSelectors.selectAll(state: RootState): MessageBlock[]`**: 返回包含所有块对象的数组。
- **`messageBlocksSelectors.selectTotal(state: RootState): number`**: 返回块的总数。
- **`messageBlocksSelectors.selectById(state: RootState, id: string): MessageBlock | undefined`**: 根据 ID 返回单个块对象,如果找不到则返回 `undefined`。
**此外,还提供了一个自定义的、记忆化的 selector**:
- **`selectFormattedCitationsByBlockId(state: RootState, blockId: string | undefined): Citation[]`**:
- 接收一个 `blockId`。
- 如果该 ID 对应的块是 `CITATION` 类型,则提取并格式化其包含的引用信息(来自网页搜索、知识库等),进行去重和重新编号,最后返回一个 `Citation[]` 数组,用于在 UI 中显示。
- 如果块不存在或类型不匹配,返回空数组 `[]`。
- 这个 selector 封装了处理不同引用来源(Gemini、OpenAI、OpenRouter、Zhipu 等)的复杂逻辑。
**使用示例 (在 React 组件或 `useSelector` 中):**
```typescript
import { useSelector } from 'react-redux'
import { messageBlocksSelectors, selectFormattedCitationsByBlockId } from './messageBlock'
import type { RootState } from './store'
// 获取所有块
const allBlocks = useSelector(messageBlocksSelectors.selectAll)
// 获取特定 ID 的块
const specificBlock = useSelector((state: RootState) => messageBlocksSelectors.selectById(state, someBlockId))
// 获取特定引用块格式化后的引用列表
const formattedCitations = useSelector((state: RootState) => selectFormattedCitationsByBlockId(state, citationBlockId))
// 在组件中使用引用数据
// {formattedCitations.map(citation => ...)}
```
## 集成
`messageBlock.ts` slice 通常与 `messageThunk.ts` 中的 Thunks 紧密协作。Thunks 负责处理异步逻辑(如 API 调用、数据库操作),并在需要时 dispatch `messageBlock` slice 的 actions 来更新状态。例如,当 `messageThunk` 接收到流式响应时,它会 dispatch `upsertOneBlock` 或 `updateOneBlock` 来实时更新对应的 `MessageBlock`。同样,删除消息的 Thunk 会 dispatch `removeManyBlocks`。
理解 `messageBlock.ts` 的职责是管理**状态本身**,而 `messageThunk.ts` 负责**触发状态变更**的异步流程,这对于维护清晰的应用架构至关重要。
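上述集成关系中,`createEntityAdapter` 维护的规范化 state 的增删改行为可以用一个纯 TypeScript 草图来示意(假设性实现,仅为说明不可变更新的原理,并非 Redux Toolkit 或 Cherry Studio 的真实代码):

```typescript
// 极简的规范化 state 草图:模拟 adapter 生成的 upsertOne / updateOne / removeMany 语义
interface Block {
  id: string
  content: string
}

interface NormalizedState {
  ids: string[]
  entities: Record<string, Block>
}

function upsertOne(state: NormalizedState, block: Block): NormalizedState {
  const exists = block.id in state.entities
  return {
    ids: exists ? state.ids : [...state.ids, block.id], // 新 id 追加,已有 id 不变
    entities: { ...state.entities, [block.id]: block }
  }
}

function updateOne(state: NormalizedState, id: string, changes: Partial<Block>): NormalizedState {
  const entity = state.entities[id]
  if (!entity) return state // id 不存在时不做任何修改
  return { ...state, entities: { ...state.entities, [id]: { ...entity, ...changes } } }
}

function removeMany(state: NormalizedState, idsToRemove: string[]): NormalizedState {
  const removeSet = new Set(idsToRemove)
  const ids = state.ids.filter((id) => !removeSet.has(id))
  const entities: Record<string, Block> = {}
  for (const id of ids) entities[id] = state.entities[id]
  return { ids, entities }
}

// 用法示意
let state: NormalizedState = { ids: [], entities: {} }
state = upsertOne(state, { id: 'b1', content: 'hello' })
state = upsertOne(state, { id: 'b1', content: 'hello world' }) // 已存在则更新
state = upsertOne(state, { id: 'b2', content: 'code' })
state = removeMany(state, ['b2'])
```

真实代码中这些 reducer 由 adapter 自动生成,并通过 `store.dispatch` 触发;这里直接以纯函数形式演示其更新语义。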


@@ -1,105 +0,0 @@
# messageThunk.ts 使用指南
该文件包含用于管理应用程序中消息流、处理助手交互以及同步 Redux 状态与 IndexedDB 数据库的核心 Thunk Action Creators。主要围绕 `Message` 和 `MessageBlock` 对象进行操作。
## 核心功能
1. **发送/接收消息**: 处理用户消息的发送,触发助手响应,并流式处理返回的数据,将其解析为不同的 `MessageBlock`
2. **状态管理**: 确保 Redux store 中的消息和消息块状态与 IndexedDB 中的持久化数据保持一致。
3. **消息操作**: 提供删除、重发、重新生成、编辑后重发、追加响应、克隆等消息生命周期管理功能。
4. **Block 处理**: 动态创建、更新和保存各种类型的 `MessageBlock`(文本、思考过程、工具调用、引用、图片、错误、翻译等)。
## 主要 Thunks
以下是一些关键的 Thunk 函数及其用途:
1. **`sendMessage(userMessage, userMessageBlocks, assistant, topicId)`**
- **用途**: 发送一条新的用户消息。
- **流程**:
- 保存用户消息 (`userMessage`) 及其块 (`userMessageBlocks`) 到 Redux 和 DB。
- 检查 `@mentions` 以确定是单模型响应还是多模型响应。
- 创建助手消息(们)的存根 (Stub)。
- 将存根添加到 Redux 和 DB。
- 将核心处理逻辑 `fetchAndProcessAssistantResponseImpl` 添加到该 `topicId` 的队列中以获取实际响应。
- **Block 相关**: 主要处理用户消息的初始 `MessageBlock` 保存。
2. **`fetchAndProcessAssistantResponseImpl(dispatch, getState, topicId, assistant, assistantMessage)`**
- **用途**: (内部函数) 获取并处理单个助手响应的核心逻辑,被 `sendMessage`, `resend...`, `regenerate...`, `append...` 等调用。
- **流程**:
- 设置 Topic 加载状态。
- 准备上下文消息。
- 调用 `fetchChatCompletion` API 服务。
- 使用 `createStreamProcessor` 处理流式响应。
- 通过各种回调 (`onTextChunk`, `onThinkingChunk`, `onToolCallComplete`, `onImageGenerated`, `onError`, `onComplete` 等) 处理不同类型的事件。
- **Block 相关**:
- 根据流事件创建初始 `UNKNOWN` 块。
- 实时创建和更新 `MAIN_TEXT` 和 `THINKING` 块,使用 `throttledBlockUpdate` 和 `throttledBlockDbUpdate` 进行节流更新。
- 创建 `TOOL`, `CITATION`, `IMAGE`, `ERROR` 等类型的块。
- 在事件完成时(如 `onTextComplete`, `onToolCallComplete`)将块状态标记为 `SUCCESS` 或 `ERROR`,并使用 `saveUpdatedBlockToDB` 保存最终状态。
- 使用 `handleBlockTransition` 管理非流式块(如 `TOOL`, `CITATION`)的添加和状态更新。
3. **`loadTopicMessagesThunk(topicId, forceReload)`**
- **用途**: 从数据库加载指定主题的所有消息及其关联的 `MessageBlock`
- **流程**:
- 从 DB 获取 `Topic` 及其 `messages` 列表。
- 根据消息 ID 列表从 DB 获取所有相关的 `MessageBlock`
- 使用 `upsertManyBlocks` 将块更新到 Redux。
- 将消息更新到 Redux。
- **Block 相关**: 负责将持久化的 `MessageBlock` 加载到 Redux 状态。
4. **删除 Thunks**
- `deleteSingleMessageThunk(topicId, messageId)`: 删除单个消息及其所有 `MessageBlock`
- `deleteMessageGroupThunk(topicId, askId)`: 删除一个用户消息及其所有相关的助手响应消息和它们的所有 `MessageBlock`
- `clearTopicMessagesThunk(topicId)`: 清空主题下的所有消息及其所有 `MessageBlock`
- **Block 相关**: 从 Redux 和 DB 中移除指定的 `MessageBlock`
5. **重发/重新生成 Thunks**
- `resendMessageThunk(topicId, userMessageToResend, assistant)`: 重发用户消息。会重置(清空 Block 并标记为 PENDING)所有与该用户消息关联的助手响应,然后重新请求生成。
- `resendUserMessageWithEditThunk(topicId, originalMessage, mainTextBlockId, editedContent, assistant)`: 用户编辑消息内容后重发。先更新用户消息的 `MAIN_TEXT` 块内容,然后调用 `resendMessageThunk`。
- `regenerateAssistantResponseThunk(topicId, assistantMessageToRegenerate, assistant)`: 重新生成单个助手响应。重置该助手消息(清空 Block 并标记为 PENDING),然后重新请求生成。
- **Block 相关**: 删除旧的 `MessageBlock`,并在重新生成过程中创建新的 `MessageBlock`
6. **`appendAssistantResponseThunk(topicId, existingAssistantMessageId, newModel, assistant)`**
- **用途**: 在已有的对话上下文中,针对同一个用户问题,使用新选择的模型追加一个新的助手响应。
- **流程**:
- 找到现有助手消息以获取原始 `askId`
- 创建使用 `newModel` 的新助手消息存根(使用相同的 `askId`)。
- 添加新存根到 Redux 和 DB。
-`fetchAndProcessAssistantResponseImpl` 添加到队列以生成新响应。
- **Block 相关**: 为新的助手响应创建全新的 `MessageBlock`
7. **`cloneMessagesToNewTopicThunk(sourceTopicId, branchPointIndex, newTopic)`**
- **用途**: 将源主题的部分消息(及其 Block)克隆到一个**已存在**的新主题中。
- **流程**:
- 复制指定索引前的消息。
- 为所有克隆的消息和 Block 生成新的 UUID。
- 正确映射克隆消息之间的 `askId` 关系。
- 复制 `MessageBlock` 内容,更新其 `messageId` 指向新的消息 ID。
- 更新文件引用计数(如果 Block 是文件或图片)。
- 将克隆的消息和 Block 保存到新主题的 Redux 状态和 DB 中。
- **Block 相关**: 创建 `MessageBlock` 的副本,并更新其 ID 和 `messageId`
8. **`initiateTranslationThunk(messageId, topicId, targetLanguage, sourceBlockId?, sourceLanguage?)`**
- **用途**: 为指定消息启动翻译流程,创建一个初始的 `TRANSLATION` 类型的 `MessageBlock`
- **流程**:
- 创建一个状态为 `STREAMING` 的 `TranslationMessageBlock`。
- 将其添加到 Redux 和 DB。
- 更新原消息的 `blocks` 列表以包含新的翻译块 ID。
- **Block 相关**: 创建并保存一个占位的 `TranslationMessageBlock`。实际翻译内容的获取和填充需要后续步骤。
## 内部机制和注意事项
- **数据库交互**: 通过 `saveMessageAndBlocksToDB`, `updateExistingMessageAndBlocksInDB`, `saveUpdatesToDB`, `saveUpdatedBlockToDB`, `throttledBlockDbUpdate` 等辅助函数与 IndexedDB (`db`) 交互,确保数据持久化。
- **状态同步**: Thunks 负责协调 Redux Store 和 IndexedDB 之间的数据一致性。
- **队列 (`getTopicQueue`)**: 使用 `AsyncQueue` 确保对同一主题的操作(尤其是 API 请求)按顺序执行,避免竞态条件。
- **节流 (`throttle`)**: 对流式响应中频繁的 Block 更新(文本、思考)使用 `lodash.throttle` 优化性能,减少 Redux dispatch 和 DB 写入次数。
- **错误处理**: `fetchAndProcessAssistantResponseImpl` 内的回调函数(特别是 `onError`)处理流处理和 API 调用中可能出现的错误,并创建 `ERROR` 类型的 `MessageBlock`
开发者在使用这些 Thunks 时,通常需要提供 `dispatch`, `getState` (由 Redux Thunk 中间件注入),以及如 `topicId`, `assistant` 配置对象, 相关的 `Message` 或 `MessageBlock` 对象/ID 等参数。理解每个 Thunk 的职责和它如何影响消息及块的状态至关重要。
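其中"节流"一点可以用一个极简的 leading-edge throttle 草图来说明(假设性实现;实际代码使用 `lodash.throttle`,还支持 trailing 调用等更多语义):

```typescript
// 在 wait 毫秒窗口内最多执行一次 fn,首次调用立即执行(leading edge)
function throttle<T extends unknown[]>(fn: (...args: T) => void, wait: number): (...args: T) => void {
  let last = -Infinity
  return (...args: T) => {
    const now = Date.now()
    if (now - last >= wait) {
      last = now
      fn(...args)
    }
  }
}

// 用法示意:流式回调高频触发,但真正的 dispatch / DB 写入被节流
let dispatchCount = 0
const throttledUpdate = throttle((_content: string) => {
  dispatchCount++ // 代表一次 Redux dispatch 或 DB 写入
}, 150)

for (let i = 0; i < 100; i++) {
  throttledUpdate(`chunk ${i}`) // 同步循环内只有第一次会真正执行
}
```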


@@ -1,156 +0,0 @@
# useMessageOperations.ts 使用指南
该文件定义了一个名为 `useMessageOperations` 的自定义 React Hook。这个 Hook 的主要目的是为 React 组件提供一个便捷的接口,用于执行与特定主题(Topic)相关的各种消息操作。它封装了调用 Redux Thunks (`messageThunk.ts`) 和 Actions (`newMessage.ts`, `messageBlock.ts`) 的逻辑,简化了组件与消息数据交互的代码。
## 核心目标
- **封装**: 将复杂的消息操作逻辑(如删除、重发、重新生成、编辑、翻译等)封装在易于使用的函数中。
- **简化**: 让组件可以直接调用这些操作函数,而无需直接与 Redux `dispatch` 或 Thunks 交互。
- **上下文关联**: 所有操作都与传入的 `topic` 对象相关联,确保操作作用于正确的主题。
## 如何使用
在你的 React 函数组件中,导入并调用 `useMessageOperations` Hook并传入当前活动的 `Topic` 对象。
```typescript
import React from 'react';
import { useMessageOperations } from '@renderer/hooks/useMessageOperations';
import type { Topic, Message, Assistant, Model } from '@renderer/types';
interface MyComponentProps {
currentTopic: Topic;
currentAssistant: Assistant;
}
function MyComponent({ currentTopic, currentAssistant }: MyComponentProps) {
const {
deleteMessage,
resendMessage,
regenerateAssistantMessage,
appendAssistantResponse,
getTranslationUpdater,
createTopicBranch,
// ... 其他操作函数
} = useMessageOperations(currentTopic);
const handleDelete = (messageId: string) => {
deleteMessage(messageId);
};
const handleResend = (message: Message) => {
resendMessage(message, currentAssistant);
};
const handleAppend = (existingMsg: Message, newModel: Model) => {
appendAssistantResponse(existingMsg, newModel, currentAssistant);
}
// ... 在组件中使用其他操作函数
return (
<div>
{/* Component UI */}
<button onClick={() => handleDelete('some-message-id')}>Delete Message</button>
{/* ... */}
</div>
);
}
```
## 返回值
`useMessageOperations(topic)` Hook 返回一个包含以下函数和值的对象:
- **`deleteMessage(id: string)`**:
- 删除指定 `id` 的单个消息。
- 内部调用 `deleteSingleMessageThunk`
- **`deleteGroupMessages(askId: string)`**:
- 删除与指定 `askId` 相关联的一组消息(通常是用户提问及其所有助手回答)。
- 内部调用 `deleteMessageGroupThunk`
- **`editMessage(messageId: string, updates: Partial<Message>)`**:
- 更新指定 `messageId` 的消息的部分属性。
- **注意**: 目前主要用于更新 Redux 状态
- 内部调用 `newMessagesActions.updateMessage`
- **`resendMessage(message: Message, assistant: Assistant)`**:
- 重新发送指定的用户消息 (`message`),这将触发其所有关联助手响应的重新生成。
- 内部调用 `resendMessageThunk`
- **`resendUserMessageWithEdit(message: Message, editedContent: string, assistant: Assistant)`**:
- 在用户消息的主要文本块被编辑后,重新发送该消息。
- 会先查找消息的 `MAIN_TEXT` 块 ID然后调用 `resendUserMessageWithEditThunk`
- **`clearTopicMessages(_topicId?: string)`**:
- 清除当前主题(或可选的指定 `_topicId`)下的所有消息。
- 内部调用 `clearTopicMessagesThunk`
- **`createNewContext()`**:
- 发出一个全局事件 (`EVENT_NAMES.NEW_CONTEXT`),通常用于通知 UI 清空显示,准备新的上下文。不直接修改 Redux 状态。
- **`displayCount`**:
- (非操作函数) 从 Redux store 中获取当前的 `displayCount` 值。
- **`pauseMessages()`**:
- 尝试中止当前主题中正在进行的消息生成(状态为 `processing` 或 `pending`)。
- 通过查找相关的 `askId` 并调用 `abortCompletion` 来实现。
- 同时会 dispatch `setTopicLoading` action 将加载状态设为 `false`
- **`resumeMessage(message: Message, assistant: Assistant)`**:
- 恢复/重新发送一个用户消息。目前实现为直接调用 `resendMessage`
- **`regenerateAssistantMessage(message: Message, assistant: Assistant)`**:
- 重新生成指定的**助手**消息 (`message`) 的响应。
- 内部调用 `regenerateAssistantResponseThunk`
- **`appendAssistantResponse(existingAssistantMessage: Message, newModel: Model, assistant: Assistant)`**:
- 针对 `existingAssistantMessage` 所回复的**同一用户提问**,使用 `newModel` 追加一个新的助手响应。
- 内部调用 `appendAssistantResponseThunk`
- **`getTranslationUpdater(messageId: string, targetLanguage: string, sourceBlockId?: string, sourceLanguage?: string)`**:
- **用途**: 获取一个用于逐步更新翻译块内容的函数。
- **流程**:
1. 内部调用 `initiateTranslationThunk` 来创建或获取一个 `TRANSLATION` 类型的 `MessageBlock`,并获取其 `blockId`
2. 返回一个**异步更新函数**。
- **返回的更新函数 `(accumulatedText: string, isComplete?: boolean) => void`**:
- 接收累积的翻译文本和完成状态。
- 调用 `updateOneBlock` 更新 Redux 中的翻译块内容和状态(`STREAMING` 或 `SUCCESS`)。
- 调用 `throttledBlockDbUpdate` 将更新(节流地)保存到数据库。
- 如果初始化失败Thunk 返回 `undefined`),则此函数返回 `null`
- **`createTopicBranch(sourceTopicId: string, branchPointIndex: number, newTopic: Topic)`**:
- 创建一个主题分支,将 `sourceTopicId` 主题中 `branchPointIndex` 索引之前的消息克隆到 `newTopic` 中。
- **注意**: `newTopic` 对象必须是调用此函数**之前**已经创建并添加到 Redux 和数据库中的。
- 内部调用 `cloneMessagesToNewTopicThunk`
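`getTranslationUpdater` 返回的更新函数本质上是一个持有 `blockId` 的闭包。下面是一个假设性的简化草图(用 `Map` 代替 Redux store 和数据库写入,仅示意累积文本与完成状态的推进):

```typescript
type BlockStatus = 'STREAMING' | 'SUCCESS'

interface TranslationBlock {
  id: string
  content: string
  status: BlockStatus
}

// 若块不存在(对应初始化失败的情形)返回 null,否则返回一个更新闭包
function createTranslationUpdater(store: Map<string, TranslationBlock>, blockId: string) {
  if (!store.has(blockId)) return null
  return (accumulatedText: string, isComplete = false) => {
    const block = store.get(blockId)!
    store.set(blockId, {
      ...block,
      content: accumulatedText, // 写入的是"累积"文本,而非增量
      status: isComplete ? 'SUCCESS' : 'STREAMING'
    })
  }
}

// 用法示意
const store = new Map<string, TranslationBlock>([['t1', { id: 't1', content: '', status: 'STREAMING' }]])
const update = createTranslationUpdater(store, 't1')!
update('Hello')
update('Hello, world', true) // 完成时标记 SUCCESS
```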
## 依赖
- **`topic: Topic`**: 必须传入当前操作上下文的主题对象。Hook 返回的操作函数将始终作用于这个主题的 `topic.id`
- **Redux `dispatch`**: Hook 内部使用 `useAppDispatch` 获取 `dispatch` 函数来调用 actions 和 thunks。
## 相关 Hooks
在同一文件中还定义了两个辅助 Hook
- **`useTopicMessages(topic: Topic)`**:
- 使用 `selectMessagesForTopic` selector 来获取并返回指定主题的消息列表。
- **`useTopicLoading(topic: Topic)`**:
- 使用 `selectNewTopicLoading` selector 来获取并返回指定主题的加载状态。
这些 Hook 可以与 `useMessageOperations` 结合使用,方便地在组件中获取消息数据、加载状态,并执行相关操作。


@@ -34,7 +34,7 @@
</a>
</h1>
<p align="center">
<a href="https://github.com/CherryHQ/cherry-studio">English</a> | 中文 | <a href="https://cherry-ai.com">官方网站</a> | <a href="https://docs.cherry-ai.com/cherry-studio-wen-dang/zh-cn">文档</a> | <a href="./dev.md">开发</a> | <a href="https://github.com/CherryHQ/cherry-studio/issues">反馈</a><br>
<a href="https://github.com/CherryHQ/cherry-studio">English</a> | 中文 | <a href="https://cherry-ai.com">官方网站</a> | <a href="https://docs.cherry-ai.com/cherry-studio-wen-dang/zh-cn">文档</a> | <a href="./guides/development.md">开发</a> | <a href="https://github.com/CherryHQ/cherry-studio/issues">反馈</a><br>
</p>
<!-- 题头徽章组合 -->
@@ -70,7 +70,7 @@ Cherry Studio 是一款支持多个大语言模型LLM服务商的桌面客
👏 欢迎加入 [Telegram 群组](https://t.me/CherryStudioAI)[Discord](https://discord.gg/wez8HtpxqQ) | [QQ群(575014769)](https://qm.qq.com/q/lo0D4qVZKi)
❤️ 喜欢 Cherry Studio? 点亮小星星 🌟 或 [赞助开发者](sponsor.md)! ❤️
❤️ 喜欢 Cherry Studio? 点亮小星星 🌟 或 [赞助开发者](./guides/sponsor.md)! ❤️
# 📖 使用教程
@@ -181,7 +181,7 @@ https://docs.cherry-ai.com
6. **社区参与**:加入讨论并帮助用户
7. **推广使用**:宣传 Cherry Studio
参考[分支策略](branching-strategy-zh.md)了解贡献指南
参考[分支策略](./guides/branching-strategy.md)了解贡献指南
## 入门
@@ -190,7 +190,7 @@ https://docs.cherry-ai.com
3. **提交更改**:提交并推送您的更改
4. **打开 Pull Request**:描述您的更改和原因
有关更详细的指南,请参阅我们的 [贡献指南](CONTRIBUTING.zh.md)
有关更详细的指南,请参阅我们的 [贡献指南](./guides/contributing.md)
感谢您的支持和贡献!


@@ -16,7 +16,7 @@ Cherry Studio 采用结构化的分支策略来维护代码质量并简化开发
- 只接受文档更新和 bug 修复
- 经过完整测试后可以发布到生产环境
关于测试计划所使用的`testplan`分支,请查阅[测试计划](testplan-zh.md)。
关于测试计划所使用的`testplan`分支,请查阅[测试计划](./test-plan.md)。
## 贡献分支


@@ -1,6 +1,6 @@
# Cherry Studio 贡献者指南
[**English**](../CONTRIBUTING.md) | [**中文**](CONTRIBUTING.zh.md)
[**English**](../../../CONTRIBUTING.md) | **中文**
欢迎来到 Cherry Studio 的贡献者社区!我们致力于将 Cherry Studio 打造成一个长期提供价值的项目,并希望邀请更多的开发者加入我们的行列。无论您是经验丰富的开发者还是刚刚起步的初学者,您的贡献都将帮助我们更好地服务用户,提升软件质量。
@@ -24,7 +24,7 @@
## 开始之前
请确保阅读了[行为准则](../CODE_OF_CONDUCT.md)和[LICENSE](../LICENSE)。
请确保阅读了[行为准则](../../../CODE_OF_CONDUCT.md)和[LICENSE](../../../LICENSE)。
## 开始贡献
@@ -32,7 +32,7 @@
### 测试
未经测试的功能等同于不存在。为确保代码真正有效,应通过单元测试和功能测试覆盖相关流程。因此,在考虑贡献时,也请考虑可测试性。所有测试均可本地运行,无需依赖 CI。请参阅[开发者指南](dev.md#test)中的Test部分。
未经测试的功能等同于不存在。为确保代码真正有效,应通过单元测试和功能测试覆盖相关流程。因此,在考虑贡献时,也请考虑可测试性。所有测试均可本地运行,无需依赖 CI。请参阅[开发者指南](./development.md#test)中的"Test"部分。
### 拉取请求的自动化测试
@@ -60,11 +60,11 @@ git commit --signoff -m "Your commit message"
### 获取代码审查/合并
维护者在此帮助您在合理时间内实现您的用例。他们会尽力在合理时间内审查您的代码并提供建设性反馈。但如果您在审查过程中受阻,或认为您的 Pull Request 未得到应有的关注,请通过 Issue 中的评论或者[社群](README.zh.md#-community)联系我们
维护者在此帮助您在合理时间内实现您的用例。他们会尽力在合理时间内审查您的代码并提供建设性反馈。但如果您在审查过程中受阻,或认为您的 Pull Request 未得到应有的关注,请通过 Issue 中的评论或者[社群](../README.md#-community)联系我们
### 参与测试计划
测试计划旨在为用户提供更稳定的应用体验和更快的迭代速度,详细情况请参阅[测试计划](testplan-zh.md)。
测试计划旨在为用户提供更稳定的应用体验和更快的迭代速度,详细情况请参阅[测试计划](./test-plan.md)。
### 其他建议


@@ -0,0 +1,73 @@
# 🖥️ Develop
## IDE Setup
- Editor: [Cursor](https://www.cursor.com/), etc. Any VS Code compatible editor.
- Linter: [ESLint](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint)
- Formatter: [Biome](https://marketplace.visualstudio.com/items?itemName=biomejs.biome)
## Project Setup
### Install
```bash
yarn
```
### Development
### Setup Node.js
Download and install [Node.js v22.x.x](https://nodejs.org/en/download)
### Setup Yarn
```bash
corepack enable
corepack prepare yarn@4.9.1 --activate
```
### Install Dependencies
```bash
yarn install
```
### ENV
```bash
# Windows
copy .env.example .env
# macOS / Linux
cp .env.example .env
```
### Start
```bash
yarn dev
```
### Debug
```bash
yarn debug
```
Then open chrome://inspect in your browser
### Test
```bash
yarn test
```
### Build
```bash
# For Windows
$ yarn build:win
# For macOS
$ yarn build:mac
# For Linux
$ yarn build:linux
```


@@ -15,11 +15,11 @@ i18n ally是一个强大的VSCode插件它能在开发阶段提供实时反
### 效果展示
![demo-1](./.assets.how-to-i18n/demo-1.png)
![demo-1](../../assets/images/i18n/demo-1.png)
![demo-2](./.assets.how-to-i18n/demo-2.png)
![demo-2](../../assets/images/i18n/demo-2.png)
![demo-3](./.assets.how-to-i18n/demo-3.png)
![demo-3](../../assets/images/i18n/demo-3.png)
## i18n 约定


@@ -19,7 +19,7 @@
### 参与测试计划
开发者按照[贡献者指南](CONTRIBUTING.zh.md)要求正常提交`PR`(并注意提交target为`main`)。仓库维护者会综合考虑(例如该功能对应用的影响程度,功能的重要性,是否需要更广泛的测试等),决定该`PR`是否应加入测试计划。
开发者按照[贡献者指南](./contributing.md)要求正常提交`PR`(并注意提交target为`main`)。仓库维护者会综合考虑(例如该功能对应用的影响程度,功能的重要性,是否需要更广泛的测试等),决定该`PR`是否应加入测试计划。
若该`PR`加入测试计划,仓库维护者会做如下操作:


@@ -85,7 +85,7 @@ graph TD
- **SvgPreview**: SVG 图像预览
- **GraphvizPreview**: Graphviz 图表预览
所有特殊视图组件共享通用架构,以确保一致的用户体验和功能。有关这些组件及其实现的详细信息,请参阅 [图像预览组件文档](./ImagePreview-zh.md)。
所有特殊视图组件共享通用架构,以确保一致的用户体验和功能。有关这些组件及其实现的详细信息,请参阅[图像预览组件文档](./image-preview.md)。
#### StatusBar 状态栏


@@ -192,4 +192,4 @@ const { containerRef, error, isLoading, triggerRender, cancelRender, clearError,
- 共享状态管理
- 响应式布局适应
有关整体 CodeBlockView 架构的更多信息,请参阅 [CodeBlockView 文档](./CodeBlockView-zh.md)。
有关整体 CodeBlockView 架构的更多信息,请参阅 [CodeBlockView 文档](./code-block-view.md)。


@@ -1,6 +1,24 @@
# `translate_languages` 表技术文档
# 数据库参考文档
## 📄 概述
本文档介绍 Cherry Studio 的数据库结构,包括设置字段和翻译语言表。
---
## 设置字段 (settings)
此部分包含设置相关字段的数据类型说明。
### 翻译相关字段
| 字段名 | 类型 | 说明 |
| ------------------------------ | ------------------------------ | ------------ |
| `translate:target:language` | `LanguageCode` | 翻译目标语言 |
| `translate:source:language` | `LanguageCode` | 翻译源语言 |
| `translate:bidirectional:pair` | `[LanguageCode, LanguageCode]` | 双向翻译对 |
---
## 翻译语言表 (translate_languages)
`translate_languages` 记录用户自定义的语言类型(`Language`)。
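上表中的设置字段可以用一个假设性的 TypeScript 类型草图来表达(`LanguageCode` 的实际定义以源码为准,这里仅以字符串示意):

```typescript
// 示意:语言代码,真实项目中可能是字面量联合类型
type LanguageCode = string // 例如 'zh-CN'、'en-US'

// 翻译相关设置字段的键与类型对应关系
interface TranslateSettings {
  'translate:target:language': LanguageCode
  'translate:source:language': LanguageCode
  'translate:bidirectional:pair': [LanguageCode, LanguageCode] // 双向翻译对
}

const settings: TranslateSettings = {
  'translate:target:language': 'en-US',
  'translate:source:language': 'zh-CN',
  'translate:bidirectional:pair': ['zh-CN', 'en-US']
}
```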


@@ -0,0 +1,404 @@
# 消息系统
本文档介绍 Cherry Studio 的消息系统架构,包括消息生命周期、状态管理和操作接口。
## 消息的生命周期
![消息生命周期](../../assets/images/message-lifecycle.png)
---
# messageBlock.ts 使用指南
该文件定义了用于管理应用程序中所有 `MessageBlock` 实体的 Redux Slice。它使用 Redux Toolkit 的 `createSlice` 和 `createEntityAdapter` 来高效地处理规范化的状态,并提供了一系列 actions 和 selectors 用于与消息块数据交互。
## 核心目标
- **状态管理**: 集中管理所有 `MessageBlock` 的状态。`MessageBlock` 代表消息中的不同内容单元(如文本、代码、图片、引用等)。
- **规范化**: 使用 `createEntityAdapter` 将 `MessageBlock` 数据存储在规范化的结构中(`{ ids: [], entities: {} }`),这有助于提高性能和简化更新逻辑。
- **可预测性**: 提供明确的 actions 来修改状态,并通过 selectors 安全地访问状态。
## 关键概念
- **Slice (`createSlice`)**: Redux Toolkit 的核心 API,用于创建包含 reducer 逻辑、action creators 和初始状态的 Redux 模块。
- **Entity Adapter (`createEntityAdapter`)**: Redux Toolkit 提供的工具,用于简化对规范化数据的 CRUD(创建、读取、更新、删除)操作。它会自动生成 reducer 函数和 selectors。
- **Selectors**: 用于从 Redux store 中派生和计算数据的函数。Selectors 可以被记忆化(memoized)以提高性能。
## State 结构
`messageBlocks` slice 的状态结构由 `createEntityAdapter` 定义,大致如下:
```typescript
{
ids: string[]; // 存储所有 MessageBlock ID 的有序列表
entities: { [id: string]: MessageBlock }; // 按 ID 存储 MessageBlock 对象的字典
loadingState: 'idle' | 'loading' | 'succeeded' | 'failed'; // (可选) 其他状态,如加载状态
error: string | null; // (可选) 错误信息
}
```
## Actions
该 slice 导出以下 actions(由 `createSlice` 和 `createEntityAdapter` 自动生成或自定义):
- **`upsertOneBlock(payload: MessageBlock)`**:
- 添加一个新的 `MessageBlock` 或更新一个已存在的 `MessageBlock`。如果 payload 中的 `id` 已存在,则执行更新;否则执行插入。
- **`upsertManyBlocks(payload: MessageBlock[])`**:
- 添加或更新多个 `MessageBlock`。常用于批量加载数据(例如,加载一个 Topic 的所有消息块)。
- **`removeOneBlock(payload: string)`**:
- 根据提供的 `id` (payload) 移除单个 `MessageBlock`。
- **`removeManyBlocks(payload: string[])`**:
- 根据提供的 `id` 数组 (payload) 移除多个 `MessageBlock`。常用于删除消息或清空 Topic 时清理相关的块。
- **`removeAllBlocks()`**:
- 移除 state 中的所有 `MessageBlock` 实体。
- **`updateOneBlock(payload: { id: string; changes: Partial<MessageBlock> })`**:
- 更新一个已存在的 `MessageBlock`。`payload` 需要包含块的 `id` 和一个包含要更改的字段的 `changes` 对象。
- **`setMessageBlocksLoading(payload: 'idle' | 'loading')`**:
- (自定义) 设置 `loadingState` 属性。
- **`setMessageBlocksError(payload: string)`**:
- (自定义) 设置 `loadingState` 为 `'failed'` 并记录错误信息。
**使用示例 (在 Thunk 或其他 Dispatch 的地方):**
```typescript
import { upsertOneBlock, removeManyBlocks, updateOneBlock } from './messageBlock'
import store from './store' // 假设这是你的 Redux store 实例
// 添加或更新一个块
const newBlock: MessageBlock = {
/* ... block data ... */
}
store.dispatch(upsertOneBlock(newBlock))
// 更新一个块的内容
store.dispatch(updateOneBlock({ id: blockId, changes: { content: 'New content' } }))
// 删除多个块
const blockIdsToRemove = ['id1', 'id2']
store.dispatch(removeManyBlocks(blockIdsToRemove))
```
## Selectors
该 slice 导出由 `createEntityAdapter` 生成的基础 selectors,并通过 `messageBlocksSelectors` 对象访问:
- **`messageBlocksSelectors.selectIds(state: RootState): string[]`**: 返回包含所有块 ID 的数组。
- **`messageBlocksSelectors.selectEntities(state: RootState): { [id: string]: MessageBlock }`**: 返回块 ID 到块对象的映射字典。
- **`messageBlocksSelectors.selectAll(state: RootState): MessageBlock[]`**: 返回包含所有块对象的数组。
- **`messageBlocksSelectors.selectTotal(state: RootState): number`**: 返回块的总数。
- **`messageBlocksSelectors.selectById(state: RootState, id: string): MessageBlock | undefined`**: 根据 ID 返回单个块对象,如果找不到则返回 `undefined`。
**此外,还提供了一个自定义的、记忆化的 selector**:
- **`selectFormattedCitationsByBlockId(state: RootState, blockId: string | undefined): Citation[]`**:
- 接收一个 `blockId`。
- 如果该 ID 对应的块是 `CITATION` 类型,则提取并格式化其包含的引用信息(来自网页搜索、知识库等),进行去重和重新编号,最后返回一个 `Citation[]` 数组,用于在 UI 中显示。
- 如果块不存在或类型不匹配,返回空数组 `[]`。
- 这个 selector 封装了处理不同引用来源(Gemini、OpenAI、OpenRouter、Zhipu 等)的复杂逻辑。
**使用示例 (在 React 组件或 `useSelector` 中):**
```typescript
import { useSelector } from 'react-redux'
import { messageBlocksSelectors, selectFormattedCitationsByBlockId } from './messageBlock'
import type { RootState } from './store'
// 获取所有块
const allBlocks = useSelector(messageBlocksSelectors.selectAll)
// 获取特定 ID 的块
const specificBlock = useSelector((state: RootState) => messageBlocksSelectors.selectById(state, someBlockId))
// 获取特定引用块格式化后的引用列表
const formattedCitations = useSelector((state: RootState) => selectFormattedCitationsByBlockId(state, citationBlockId))
// 在组件中使用引用数据
// {formattedCitations.map(citation => ...)}
```
## 集成
`messageBlock.ts` slice 通常与 `messageThunk.ts` 中的 Thunks 紧密协作。Thunks 负责处理异步逻辑(如 API 调用、数据库操作),并在需要时 dispatch `messageBlock` slice 的 actions 来更新状态。例如,当 `messageThunk` 接收到流式响应时,它会 dispatch `upsertOneBlock` 或 `updateOneBlock` 来实时更新对应的 `MessageBlock`。同样,删除消息的 Thunk 会 dispatch `removeManyBlocks`。
理解 `messageBlock.ts` 的职责是管理**状态本身**,而 `messageThunk.ts` 负责**触发状态变更**的异步流程,这对于维护清晰的应用架构至关重要。
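以 `selectFormattedCitationsByBlockId` 中"去重并重新编号"这一步为例,可以用如下假设性草图示意(实际 selector 还需处理 Gemini、OpenAI、OpenRouter 等不同来源的字段差异):

```typescript
interface Citation {
  number: number
  url: string
  title?: string
}

// 按 URL 去重,并按保留顺序重新连续编号
function dedupeAndRenumber(raw: Omit<Citation, 'number'>[]): Citation[] {
  const seen = new Set<string>()
  const result: Citation[] = []
  for (const c of raw) {
    if (seen.has(c.url)) continue // 重复来源被丢弃
    seen.add(c.url)
    result.push({ ...c, number: result.length + 1 })
  }
  return result
}

// 用法示意
const citations = dedupeAndRenumber([
  { url: 'https://a.example', title: 'A' },
  { url: 'https://b.example', title: 'B' },
  { url: 'https://a.example', title: 'A again' } // 与第一条重复
])
```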
---
# messageThunk.ts 使用指南
该文件包含用于管理应用程序中消息流、处理助手交互以及同步 Redux 状态与 IndexedDB 数据库的核心 Thunk Action Creators。主要围绕 `Message` 和 `MessageBlock` 对象进行操作。
## 核心功能
1. **发送/接收消息**: 处理用户消息的发送,触发助手响应,并流式处理返回的数据,将其解析为不同的 `MessageBlock`
2. **状态管理**: 确保 Redux store 中的消息和消息块状态与 IndexedDB 中的持久化数据保持一致。
3. **消息操作**: 提供删除、重发、重新生成、编辑后重发、追加响应、克隆等消息生命周期管理功能。
4. **Block 处理**: 动态创建、更新和保存各种类型的 `MessageBlock`(文本、思考过程、工具调用、引用、图片、错误、翻译等)。
## 主要 Thunks
以下是一些关键的 Thunk 函数及其用途:
1. **`sendMessage(userMessage, userMessageBlocks, assistant, topicId)`**
- **用途**: 发送一条新的用户消息。
- **流程**:
- 保存用户消息 (`userMessage`) 及其块 (`userMessageBlocks`) 到 Redux 和 DB。
- 检查 `@mentions` 以确定是单模型响应还是多模型响应。
- 创建助手消息(们)的存根 (Stub)。
- 将存根添加到 Redux 和 DB。
- 将核心处理逻辑 `fetchAndProcessAssistantResponseImpl` 添加到该 `topicId` 的队列中以获取实际响应。
- **Block 相关**: 主要处理用户消息的初始 `MessageBlock` 保存。
2. **`fetchAndProcessAssistantResponseImpl(dispatch, getState, topicId, assistant, assistantMessage)`**
- **用途**: (内部函数) 获取并处理单个助手响应的核心逻辑,被 `sendMessage`, `resend...`, `regenerate...`, `append...` 等调用。
- **流程**:
- 设置 Topic 加载状态。
- 准备上下文消息。
- 调用 `fetchChatCompletion` API 服务。
- 使用 `createStreamProcessor` 处理流式响应。
- 通过各种回调 (`onTextChunk`, `onThinkingChunk`, `onToolCallComplete`, `onImageGenerated`, `onError`, `onComplete` 等) 处理不同类型的事件。
- **Block 相关**:
- 根据流事件创建初始 `UNKNOWN` 块。
- 实时创建和更新 `MAIN_TEXT` 和 `THINKING` 块,使用 `throttledBlockUpdate` 和 `throttledBlockDbUpdate` 进行节流更新。
- 创建 `TOOL`, `CITATION`, `IMAGE`, `ERROR` 等类型的块。
- 在事件完成时(如 `onTextComplete`, `onToolCallComplete`)将块状态标记为 `SUCCESS` 或 `ERROR`,并使用 `saveUpdatedBlockToDB` 保存最终状态。
- 使用 `handleBlockTransition` 管理非流式块(如 `TOOL`, `CITATION`)的添加和状态更新。
3. **`loadTopicMessagesThunk(topicId, forceReload)`**
- **用途**: 从数据库加载指定主题的所有消息及其关联的 `MessageBlock`
- **流程**:
- 从 DB 获取 `Topic` 及其 `messages` 列表。
- 根据消息 ID 列表从 DB 获取所有相关的 `MessageBlock`
- 使用 `upsertManyBlocks` 将块更新到 Redux。
- 将消息更新到 Redux。
- **Block 相关**: 负责将持久化的 `MessageBlock` 加载到 Redux 状态。
4. **删除 Thunks**
- `deleteSingleMessageThunk(topicId, messageId)`: 删除单个消息及其所有 `MessageBlock`
- `deleteMessageGroupThunk(topicId, askId)`: 删除一个用户消息及其所有相关的助手响应消息和它们的所有 `MessageBlock`
- `clearTopicMessagesThunk(topicId)`: 清空主题下的所有消息及其所有 `MessageBlock`
- **Block 相关**: 从 Redux 和 DB 中移除指定的 `MessageBlock`
5. **重发/重新生成 Thunks**
- `resendMessageThunk(topicId, userMessageToResend, assistant)`: 重发用户消息。会重置(清空 Block 并标记为 PENDING)所有与该用户消息关联的助手响应,然后重新请求生成。
- `resendUserMessageWithEditThunk(topicId, originalMessage, mainTextBlockId, editedContent, assistant)`: 用户编辑消息内容后重发。先更新用户消息的 `MAIN_TEXT` 块内容,然后调用 `resendMessageThunk`。
- `regenerateAssistantResponseThunk(topicId, assistantMessageToRegenerate, assistant)`: 重新生成单个助手响应。重置该助手消息(清空 Block 并标记为 PENDING),然后重新请求生成。
- **Block 相关**: 删除旧的 `MessageBlock`,并在重新生成过程中创建新的 `MessageBlock`
6. **`appendAssistantResponseThunk(topicId, existingAssistantMessageId, newModel, assistant)`**
- **用途**: 在已有的对话上下文中,针对同一个用户问题,使用新选择的模型追加一个新的助手响应。
- **流程**:
- 找到现有助手消息以获取原始 `askId`
- 创建使用 `newModel` 的新助手消息存根(使用相同的 `askId`)。
- 添加新存根到 Redux 和 DB。
-`fetchAndProcessAssistantResponseImpl` 添加到队列以生成新响应。
- **Block 相关**: 为新的助手响应创建全新的 `MessageBlock`
7. **`cloneMessagesToNewTopicThunk(sourceTopicId, branchPointIndex, newTopic)`**
- **用途**: 将源主题的部分消息(及其 Block)克隆到一个**已存在**的新主题中。
- **流程**:
- 复制指定索引前的消息。
- 为所有克隆的消息和 Block 生成新的 UUID。
- 正确映射克隆消息之间的 `askId` 关系。
- 复制 `MessageBlock` 内容,更新其 `messageId` 指向新的消息 ID。
- 更新文件引用计数(如果 Block 是文件或图片)。
- 将克隆的消息和 Block 保存到新主题的 Redux 状态和 DB 中。
- **Block 相关**: 创建 `MessageBlock` 的副本,并更新其 ID 和 `messageId`
8. **`initiateTranslationThunk(messageId, topicId, targetLanguage, sourceBlockId?, sourceLanguage?)`**
- **用途**: 为指定消息启动翻译流程,创建一个初始的 `TRANSLATION` 类型的 `MessageBlock`
- **流程**:
- 创建一个状态为 `STREAMING` 的 `TranslationMessageBlock`。
- 将其添加到 Redux 和 DB。
- 更新原消息的 `blocks` 列表以包含新的翻译块 ID。
- **Block 相关**: 创建并保存一个占位的 `TranslationMessageBlock`。实际翻译内容的获取和填充需要后续步骤。
## 内部机制和注意事项
- **数据库交互**: 通过 `saveMessageAndBlocksToDB`, `updateExistingMessageAndBlocksInDB`, `saveUpdatesToDB`, `saveUpdatedBlockToDB`, `throttledBlockDbUpdate` 等辅助函数与 IndexedDB (`db`) 交互,确保数据持久化。
- **状态同步**: Thunks 负责协调 Redux Store 和 IndexedDB 之间的数据一致性。
- **队列 (`getTopicQueue`)**: 使用 `AsyncQueue` 确保对同一主题的操作(尤其是 API 请求)按顺序执行,避免竞态条件。
- **节流 (`throttle`)**: 对流式响应中频繁的 Block 更新(文本、思考)使用 `lodash.throttle` 优化性能,减少 Redux dispatch 和 DB 写入次数。
- **错误处理**: `fetchAndProcessAssistantResponseImpl` 内的回调函数(特别是 `onError`)处理流处理和 API 调用中可能出现的错误,并创建 `ERROR` 类型的 `MessageBlock`
开发者在使用这些 Thunks 时,通常需要提供 `dispatch`, `getState` (由 Redux Thunk 中间件注入),以及如 `topicId`, `assistant` 配置对象, 相关的 `Message` 或 `MessageBlock` 对象/ID 等参数。理解每个 Thunk 的职责和它如何影响消息及块的状态至关重要。
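流式回调如何驱动一个 `MAIN_TEXT` 块从创建、追加内容到标记 `SUCCESS`,可以用下面的假设性草图示意(纯函数形式,省略了 Redux dispatch、节流与 DB 写入):

```typescript
type Status = 'STREAMING' | 'SUCCESS'

interface TextBlock {
  type: 'MAIN_TEXT'
  content: string
  status: Status
}

// 简化后的两类流事件:文本增量与文本完成
type StreamEvent = { kind: 'text.chunk'; text: string } | { kind: 'text.complete' }

function applyStreamEvent(block: TextBlock | null, event: StreamEvent): TextBlock {
  if (event.kind === 'text.chunk') {
    // 块不存在则创建,存在则追加内容,保持 STREAMING 状态
    return {
      type: 'MAIN_TEXT',
      content: (block?.content ?? '') + event.text,
      status: 'STREAMING'
    }
  }
  // text.complete:标记为 SUCCESS(对应保存最终状态的时机)
  return { ...(block ?? { type: 'MAIN_TEXT', content: '', status: 'STREAMING' }), status: 'SUCCESS' }
}

// 用法示意
let block: TextBlock | null = null
block = applyStreamEvent(block, { kind: 'text.chunk', text: 'Hel' })
block = applyStreamEvent(block, { kind: 'text.chunk', text: 'lo' })
block = applyStreamEvent(block, { kind: 'text.complete' })
```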
---
# useMessageOperations.ts 使用指南
该文件定义了一个名为 `useMessageOperations` 的自定义 React Hook。这个 Hook 的主要目的是为 React 组件提供一个便捷的接口,用于执行与特定主题(Topic)相关的各种消息操作。它封装了调用 Redux Thunks (`messageThunk.ts`) 和 Actions (`newMessage.ts`, `messageBlock.ts`) 的逻辑,简化了组件与消息数据交互的代码。
## 核心目标
- **封装**: 将复杂的消息操作逻辑(如删除、重发、重新生成、编辑、翻译等)封装在易于使用的函数中。
- **简化**: 让组件可以直接调用这些操作函数,而无需直接与 Redux `dispatch` 或 Thunks 交互。
- **上下文关联**: 所有操作都与传入的 `topic` 对象相关联,确保操作作用于正确的主题。
## 如何使用
在你的 React 函数组件中,导入并调用 `useMessageOperations` Hook并传入当前活动的 `Topic` 对象。
```typescript
import React from 'react';
import { useMessageOperations } from '@renderer/hooks/useMessageOperations';
import type { Topic, Message, Assistant, Model } from '@renderer/types';
interface MyComponentProps {
currentTopic: Topic;
currentAssistant: Assistant;
}
function MyComponent({ currentTopic, currentAssistant }: MyComponentProps) {
const {
deleteMessage,
resendMessage,
regenerateAssistantMessage,
appendAssistantResponse,
getTranslationUpdater,
createTopicBranch,
// ... 其他操作函数
} = useMessageOperations(currentTopic);
const handleDelete = (messageId: string) => {
deleteMessage(messageId);
};
const handleResend = (message: Message) => {
resendMessage(message, currentAssistant);
};
const handleAppend = (existingMsg: Message, newModel: Model) => {
appendAssistantResponse(existingMsg, newModel, currentAssistant);
}
// ... 在组件中使用其他操作函数
return (
<div>
{/* Component UI */}
<button onClick={() => handleDelete('some-message-id')}>Delete Message</button>
{/* ... */}
</div>
);
}
```
## Return Value
The `useMessageOperations(topic)` Hook returns an object containing the following functions and values:
- **`deleteMessage(id: string)`**:
  - Deletes the single message with the given `id`.
  - Calls `deleteSingleMessageThunk` internally.
- **`deleteGroupMessages(askId: string)`**:
  - Deletes the group of messages associated with the given `askId` (typically a user question and all of its assistant answers).
  - Calls `deleteMessageGroupThunk` internally.
- **`editMessage(messageId: string, updates: Partial<Message>)`**:
  - Updates some of the properties of the message with the given `messageId`.
  - **Note**: currently used mainly to update Redux state.
  - Calls `newMessagesActions.updateMessage` internally.
- **`resendMessage(message: Message, assistant: Assistant)`**:
  - Resends the given user message (`message`), which triggers regeneration of all its associated assistant responses.
  - Calls `resendMessageThunk` internally.
- **`resendUserMessageWithEdit(message: Message, editedContent: string, assistant: Assistant)`**:
  - Resends a user message after its main text block has been edited.
  - Looks up the message's `MAIN_TEXT` block ID first, then calls `resendUserMessageWithEditThunk`.
- **`clearTopicMessages(_topicId?: string)`**:
  - Clears all messages under the current topic (or the optionally specified `_topicId`).
  - Calls `clearTopicMessagesThunk` internally.
- **`createNewContext()`**:
  - Emits a global event (`EVENT_NAMES.NEW_CONTEXT`), typically used to tell the UI to clear its display and prepare a new context. Does not modify Redux state directly.
- **`displayCount`**:
  - (Not an operation function.) The current `displayCount` value read from the Redux store.
- **`pauseMessages()`**:
  - Attempts to abort message generation in progress in the current topic (messages in the `processing` or `pending` state).
  - Implemented by finding the relevant `askId`s and calling `abortCompletion`.
  - Also dispatches the `setTopicLoading` action to set the loading state to `false`.
- **`resumeMessage(message: Message, assistant: Assistant)`**:
  - Resumes/resends a user message. Currently implemented as a direct call to `resendMessage`.
- **`regenerateAssistantMessage(message: Message, assistant: Assistant)`**:
  - Regenerates the response for the given **assistant** message (`message`).
  - Calls `regenerateAssistantResponseThunk` internally.
- **`appendAssistantResponse(existingAssistantMessage: Message, newModel: Model, assistant: Assistant)`**:
  - Appends a new assistant response, produced with `newModel`, to the **same user question** that `existingAssistantMessage` answered.
  - Calls `appendAssistantResponseThunk` internally.
- **`getTranslationUpdater(messageId: string, targetLanguage: string, sourceBlockId?: string, sourceLanguage?: string)`**:
  - **Purpose**: obtains a function for incrementally updating the content of a translation block.
  - **Flow**:
    1. Calls `initiateTranslationThunk` internally to create or fetch a `TRANSLATION`-type `MessageBlock` and obtain its `blockId`.
    2. Returns an **async update function**.
  - **The returned update function `(accumulatedText: string, isComplete?: boolean) => void`**:
    - Receives the accumulated translated text and a completion flag.
    - Calls `updateOneBlock` to update the translation block's content and status (`STREAMING` or `SUCCESS`) in Redux.
    - Calls `throttledBlockDbUpdate` to persist the update to the database (throttled).
  - If initialization fails (the Thunk returns `undefined`), this function returns `null`.
- **`createTopicBranch(sourceTopicId: string, branchPointIndex: number, newTopic: Topic)`**:
  - Creates a topic branch by cloning the messages before index `branchPointIndex` of topic `sourceTopicId` into `newTopic`.
  - **Note**: the `newTopic` object must already have been created and added to Redux and the database **before** this function is called.
  - Calls `cloneMessagesToNewTopicThunk` internally.
## Dependencies
- **`topic: Topic`**: the topic object for the current operation context must be passed in. The operation functions returned by the Hook always act on this topic's `topic.id`.
- **Redux `dispatch`**: the Hook obtains the `dispatch` function via `useAppDispatch` internally to invoke actions and thunks.
## Related Hooks
Two helper Hooks are defined in the same file:
- **`useTopicMessages(topic: Topic)`**:
  - Uses the `selectMessagesForTopic` selector to fetch and return the message list for the given topic.
- **`useTopicLoading(topic: Topic)`**:
  - Uses the `selectNewTopicLoading` selector to fetch and return the loading state for the given topic.
These Hooks can be combined with `useMessageOperations` to conveniently fetch message data and loading state in a component and perform the related operations.
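The translation-updater flow described above can be sketched as a plain factory function. The helper names `initTranslationBlock` and `applyUpdate` are stand-ins, not real APIs — the actual Hook wires these roles to `initiateTranslationThunk`, `updateOneBlock`, and `throttledBlockDbUpdate`:

```typescript
type BlockStatus = 'STREAMING' | 'SUCCESS'
type BlockUpdater = (accumulatedText: string, isComplete?: boolean) => void

// Factory mirroring getTranslationUpdater's shape: resolve a block ID first,
// then hand back a closure that pushes each accumulated text snapshot.
async function makeUpdater(
  initTranslationBlock: () => Promise<string | undefined>, // stand-in for the initiate thunk
  applyUpdate: (blockId: string, text: string, status: BlockStatus) => void
): Promise<BlockUpdater | null> {
  const blockId = await initTranslationBlock()
  if (blockId === undefined) return null // initialization failed, as in the Hook
  return (accumulatedText, isComplete = false) => {
    applyUpdate(blockId, accumulatedText, isComplete ? 'SUCCESS' : 'STREAMING')
  }
}
```

The caller checks for `null`, then feeds the accumulated text into the closure as streaming chunks arrive, passing `true` with the final snapshot.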
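To make the branching semantics of `createTopicBranch` above concrete, the core cloning step can be sketched like this (illustrative types and names only — the real `cloneMessagesToNewTopicThunk` also clones the message blocks and persists everything to the database):

```typescript
interface MessageLite {
  id: string
  topicId: string
  content: string
}

// Clone the messages before branchPointIndex into the new topic, giving each
// clone a fresh ID so the branch is independent of the source topic.
function cloneForBranch(messages: MessageLite[], branchPointIndex: number, newTopicId: string): MessageLite[] {
  return messages.slice(0, branchPointIndex).map((m, i) => ({
    ...m,
    id: `${newTopicId}-msg-${i}`, // hypothetical ID scheme for illustration
    topicId: newTopicId
  }))
}
```

Note that the source list is left untouched; only the clones are rebound to the new topic, which matches the requirement that `newTopic` already exists before the branch is created.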

View File

@@ -134,66 +134,56 @@ artifactBuildCompleted: scripts/artifact-build-completed.js
releaseInfo:
releaseNotes: |
<!--LANG:en-->
What's New in v1.7.0-rc.2
What's New in v1.7.0-rc.3
✨ New Features:
- AI Models: Added support for Gemini 3, Gemini 3 Pro with image preview, and GPT-5.1
- Import: ChatGPT conversation import feature
- Agent: Git Bash detection and requirement check for Windows agents
- Search: Native language emoji search with CLDR data format
- Provider: Endpoint type support for cherryin provider
- Debug: Local crash mini dump file for better diagnostics
- Provider: Added Silicon provider support for Anthropic API compatibility
- Provider: AIHubMix support for nano banana
🐛 Important Bug Fixes:
- Error Handling: Improved error display in AiSdkToChunkAdapter
- Database: Optimized DatabaseManager and fixed libsql crash issues
- Memory: Fixed EventEmitter memory leak in useApiServer hook
- Messages: Fixed adjacent user messages appearing when assistant message contains error only
- Tools: Fixed missing execution state for approved tool permissions
- File Processing: Fixed "no such file" error for non-English filenames in open-mineru
- PDF: Fixed mineru PDF validation and 403 errors
- Images: Fixed base64 image save issues
- Search: Fixed URL context and web search capability
- Models: Added verbosity parameter support for GPT-5 models
- UI: Improved todo tool status icon visibility and colors
- Providers: Fixed api-host for vercel ai-gateway and gitcode update config
🐛 Bug Fixes:
- i18n: Clean up translation tags and untranslated strings
- Provider: Fixed Silicon provider code list
- Provider: Fixed Poe API reasoning parameters for GPT-5 and reasoning models
- Provider: Fixed duplicate /v1 in Anthropic API endpoints
- Provider: Fixed Azure provider handling in AI SDK integration
- Models: Added Claude Opus 4.5 pattern to THINKING_TOKEN_MAP
- Models: Improved Gemini reasoning and message handling
- Models: Fixed custom parameters for Gemini models
- Models: Fixed qwen-mt-flash text delta support
- Models: Fixed Groq verbosity setting
- UI: Fixed quota display and quota tips
- UI: Fixed web search button condition
- Settings: Fixed updateAssistantPreset reducer to properly update preset
- Settings: Respect enableMaxTokens setting when maxTokens is not configured
- SDK: Fixed header merging logic in AI SDK
⚡ Improvements:
- SDK: Updated Google and OpenAI SDKs with new features
- UI: Simplified knowledge base creation modal and agent creation form
- Tools: Replaced renderToolContent function with ToolContent component
- Architecture: Namespace tool call IDs with session ID to prevent conflicts
- Config: AI SDK configuration refactoring
- SDK: Upgraded @anthropic-ai/claude-agent-sdk to 0.1.53
<!--LANG:zh-CN-->
v1.7.0-rc.2 新特性
v1.7.0-rc.3 更新内容
✨ 新功能:
- AI 模型:新增 Gemini 3、Gemini 3 Pro 图像预览支持,以及 GPT-5.1
- 导入ChatGPT 对话导入功能
- AgentWindows Agent 的 Git Bash 检测和要求检查
- 搜索:支持本地语言 emoji 搜索CLDR 数据格式)
- 提供商cherryin provider 的端点类型支持
- 调试:启用本地崩溃 mini dump 文件,方便诊断
- 提供商:新增 Silicon 提供商对 Anthropic API 的兼容性支持
- 提供商AIHubMix 支持 nano banana
🐛 重要修复:
- 错误处理:改进 AiSdkToChunkAdapter 的错误显示
- 数据库:优化 DatabaseManager 并修复 libsql 崩溃问题
- 内存:修复 useApiServer hook 中的 EventEmitter 内存泄漏
- 消息:修复当助手消息仅包含错误时相邻用户消息出现的问题
- 工具:修复批准工具权限缺少执行状态的问题
- 文件处理:修复 open-mineru 处理非英文文件名时的"无此文件"错误
- PDF修复 mineru PDF 验证和 403 错误
- 图片:修复 base64 图片保存问题
- 搜索:修复 URL 上下文和网络搜索功能
- 模型:为 GPT-5 模型添加 verbosity 参数支持
- UI改进 todo 工具状态图标可见性和颜色
- 提供商:修复 vercel ai-gateway 和 gitcode 更新配置的 api-host
🐛 问题修复:
- 国际化:清理翻译标签和未翻译字符串
- 提供商:修复 Silicon 提供商代码列表
- 提供商:修复 Poe API 对 GPT-5 和推理模型的推理参数
- 提供商:修复 Anthropic API 端点重复 /v1 问题
- 提供商:修复 Azure 提供商在 AI SDK 集成中的处理
- 模型Claude Opus 4.5 添加到 THINKING_TOKEN_MAP
- 模型:改进 Gemini 推理和消息处理
- 模型:修复 Gemini 模型自定义参数
- 模型:修复 qwen-mt-flash text delta 支持
- 模型:修复 Groq verbosity 设置
- 界面:修复配额显示和配额提示
- 界面:修复 Web 搜索按钮条件
- 设置:修复 updateAssistantPreset reducer 正确更新 preset
- 设置:尊重 enableMaxTokens 设置
- SDK修复 AI SDK 中 header 合并逻辑
⚡ 改进:
- SDK更新 Google 和 OpenAI SDK新增功能和修复
- UI简化知识库创建模态框和 agent 创建表单
- 工具:用 ToolContent 组件替换 renderToolContent 函数,提升可读性
- 架构:用会话 ID 命名工具调用 ID 以防止冲突
- 配置AI SDK 配置重构
- SDK升级 @anthropic-ai/claude-agent-sdk 到 0.1.53
<!--LANG:END-->

View File

@@ -1,6 +1,6 @@
{
"name": "CherryStudio",
"version": "1.7.0-rc.2",
"version": "1.7.0-rc.3",
"private": true,
"description": "A powerful AI assistant for producer.",
"main": "./out/main/index.js",
@@ -80,7 +80,7 @@
"release:ai-sdk-provider": "yarn workspace @cherrystudio/ai-sdk-provider version patch --immediate && yarn workspace @cherrystudio/ai-sdk-provider build && yarn workspace @cherrystudio/ai-sdk-provider npm publish --access public"
},
"dependencies": {
"@anthropic-ai/claude-agent-sdk": "patch:@anthropic-ai/claude-agent-sdk@npm%3A0.1.30#~/.yarn/patches/@anthropic-ai-claude-agent-sdk-npm-0.1.30-b50a299674.patch",
"@anthropic-ai/claude-agent-sdk": "patch:@anthropic-ai/claude-agent-sdk@npm%3A0.1.53#~/.yarn/patches/@anthropic-ai-claude-agent-sdk-npm-0.1.53-4b77f4cf29.patch",
"@libsql/client": "0.14.0",
"@libsql/win32-x64-msvc": "^0.4.7",
"@napi-rs/system-ocr": "patch:@napi-rs/system-ocr@npm%3A1.0.2#~/.yarn/patches/@napi-rs-system-ocr-npm-1.0.2-59e7a78e8b.patch",
@@ -109,15 +109,15 @@
"@agentic/exa": "^7.3.3",
"@agentic/searxng": "^7.3.3",
"@agentic/tavily": "^7.3.3",
"@ai-sdk/amazon-bedrock": "^3.0.56",
"@ai-sdk/anthropic": "^2.0.45",
"@ai-sdk/amazon-bedrock": "^3.0.61",
"@ai-sdk/anthropic": "^2.0.49",
"@ai-sdk/cerebras": "^1.0.31",
"@ai-sdk/gateway": "^2.0.13",
"@ai-sdk/google": "patch:@ai-sdk/google@npm%3A2.0.40#~/.yarn/patches/@ai-sdk-google-npm-2.0.40-47e0eeee83.patch",
"@ai-sdk/google-vertex": "^3.0.72",
"@ai-sdk/gateway": "^2.0.15",
"@ai-sdk/google": "patch:@ai-sdk/google@npm%3A2.0.43#~/.yarn/patches/@ai-sdk-google-npm-2.0.43-689ed559b3.patch",
"@ai-sdk/google-vertex": "^3.0.79",
"@ai-sdk/huggingface": "^0.0.10",
"@ai-sdk/mistral": "^2.0.24",
"@ai-sdk/openai": "patch:@ai-sdk/openai@npm%3A2.0.71#~/.yarn/patches/@ai-sdk-openai-npm-2.0.71-a88ef00525.patch",
"@ai-sdk/openai": "patch:@ai-sdk/openai@npm%3A2.0.72#~/.yarn/patches/@ai-sdk-openai-npm-2.0.72-234e68da87.patch",
"@ai-sdk/perplexity": "^2.0.20",
"@ai-sdk/test-server": "^0.0.1",
"@ant-design/v5-patch-for-react-19": "^1.0.3",
@@ -171,7 +171,7 @@
"@opentelemetry/sdk-trace-base": "^2.0.0",
"@opentelemetry/sdk-trace-node": "^2.0.0",
"@opentelemetry/sdk-trace-web": "^2.0.0",
"@opeoginni/github-copilot-openai-compatible": "0.1.21",
"@opeoginni/github-copilot-openai-compatible": "^0.1.21",
"@playwright/test": "^1.52.0",
"@radix-ui/react-context-menu": "^2.2.16",
"@reduxjs/toolkit": "^2.2.5",
@@ -217,8 +217,8 @@
"@types/mime-types": "^3",
"@types/node": "^22.17.1",
"@types/pako": "^1.0.2",
"@types/react": "^19.0.12",
"@types/react-dom": "^19.0.4",
"@types/react": "^19.2.7",
"@types/react-dom": "^19.2.3",
"@types/react-infinite-scroll-component": "^5.0.0",
"@types/react-transition-group": "^4.4.12",
"@types/react-window": "^1",
@@ -412,12 +412,9 @@
"@langchain/openai@npm:>=0.1.0 <0.6.0": "patch:@langchain/openai@npm%3A1.0.0#~/.yarn/patches/@langchain-openai-npm-1.0.0-474d0ad9d4.patch",
"@langchain/openai@npm:^0.3.16": "patch:@langchain/openai@npm%3A1.0.0#~/.yarn/patches/@langchain-openai-npm-1.0.0-474d0ad9d4.patch",
"@langchain/openai@npm:>=0.2.0 <0.7.0": "patch:@langchain/openai@npm%3A1.0.0#~/.yarn/patches/@langchain-openai-npm-1.0.0-474d0ad9d4.patch",
"@ai-sdk/openai@npm:2.0.64": "patch:@ai-sdk/openai@npm%3A2.0.64#~/.yarn/patches/@ai-sdk-openai-npm-2.0.64-48f99f5bf3.patch",
"@ai-sdk/openai@npm:^2.0.42": "patch:@ai-sdk/openai@npm%3A2.0.71#~/.yarn/patches/@ai-sdk-openai-npm-2.0.71-a88ef00525.patch",
"@ai-sdk/google@npm:2.0.40": "patch:@ai-sdk/google@npm%3A2.0.40#~/.yarn/patches/@ai-sdk-google-npm-2.0.40-47e0eeee83.patch",
"@ai-sdk/openai@npm:2.0.71": "patch:@ai-sdk/openai@npm%3A2.0.71#~/.yarn/patches/@ai-sdk-openai-npm-2.0.71-a88ef00525.patch",
"@ai-sdk/openai-compatible@npm:1.0.27": "patch:@ai-sdk/openai-compatible@npm%3A1.0.27#~/.yarn/patches/@ai-sdk-openai-compatible-npm-1.0.27-06f74278cf.patch",
"@ai-sdk/openai-compatible@npm:^1.0.19": "patch:@ai-sdk/openai-compatible@npm%3A1.0.27#~/.yarn/patches/@ai-sdk-openai-compatible-npm-1.0.27-06f74278cf.patch"
"@ai-sdk/openai@npm:^2.0.42": "patch:@ai-sdk/openai@npm%3A2.0.72#~/.yarn/patches/@ai-sdk-openai-npm-2.0.72-234e68da87.patch",
"@ai-sdk/google@npm:^2.0.40": "patch:@ai-sdk/google@npm%3A2.0.40#~/.yarn/patches/@ai-sdk-google-npm-2.0.40-47e0eeee83.patch",
"@ai-sdk/openai-compatible@npm:^1.0.27": "patch:@ai-sdk/openai-compatible@npm%3A1.0.27#~/.yarn/patches/@ai-sdk-openai-compatible-npm-1.0.27-06f74278cf.patch"
},
"packageManager": "yarn@4.9.1",
"lint-staged": {

View File

@@ -39,13 +39,13 @@
"ai": "^5.0.26"
},
"dependencies": {
"@ai-sdk/anthropic": "^2.0.45",
"@ai-sdk/azure": "^2.0.73",
"@ai-sdk/anthropic": "^2.0.49",
"@ai-sdk/azure": "^2.0.74",
"@ai-sdk/deepseek": "^1.0.29",
"@ai-sdk/openai-compatible": "patch:@ai-sdk/openai-compatible@npm%3A1.0.27#~/.yarn/patches/@ai-sdk-openai-compatible-npm-1.0.27-06f74278cf.patch",
"@ai-sdk/provider": "^2.0.0",
"@ai-sdk/provider-utils": "^3.0.17",
"@ai-sdk/xai": "^2.0.34",
"@ai-sdk/xai": "^2.0.36",
"zod": "^4.1.5"
},
"devDependencies": {

View File

@@ -374,5 +374,13 @@ export enum IpcChannel {
WebSocket_Stop = 'webSocket:stop',
WebSocket_Status = 'webSocket:status',
WebSocket_SendFile = 'webSocket:send-file',
WebSocket_GetAllCandidates = 'webSocket:get-all-candidates'
WebSocket_GetAllCandidates = 'webSocket:get-all-candidates',
// Volcengine
Volcengine_SaveCredentials = 'volcengine:save-credentials',
Volcengine_HasCredentials = 'volcengine:has-credentials',
Volcengine_ClearCredentials = 'volcengine:clear-credentials',
Volcengine_ListModels = 'volcengine:list-models',
Volcengine_GetAuthHeaders = 'volcengine:get-auth-headers',
Volcengine_MakeRequest = 'volcengine:make-request'
}

View File

@@ -88,11 +88,16 @@ export function getSdkClient(
}
})
}
const baseURL =
let baseURL =
provider.type === 'anthropic'
? provider.apiHost
: (provider.anthropicApiHost && provider.anthropicApiHost.trim()) || provider.apiHost
// Anthropic SDK automatically appends /v1 to all endpoints (like /v1/messages, /v1/models)
// We need to strip api version from baseURL to avoid duplication (e.g., /v3/v1/models)
// formatProviderApiHost adds /v1 for AI SDK compatibility, but Anthropic SDK needs it removed
baseURL = baseURL.replace(/\/v\d+(?:alpha|beta)?(?=\/|$)/i, '')
logger.debug('Anthropic API baseURL', { baseURL, providerId: provider.id })
if (provider.id === 'aihubmix') {

View File

@@ -0,0 +1,48 @@
/**
* @fileoverview Shared provider configuration for Claude Code and Anthropic API compatibility
*
* This module defines which models from specific providers support the Anthropic API endpoint.
* Used by both the Code Tools page and the Anthropic SDK client.
*/
/**
* Silicon provider models that support Anthropic API endpoint.
* These models can be used with Claude Code via the Anthropic-compatible API.
*
* @see https://docs.siliconflow.cn/cn/api-reference/chat-completions/messages
*/
export const SILICON_ANTHROPIC_COMPATIBLE_MODELS: readonly string[] = [
// DeepSeek V3.1 series
'Pro/deepseek-ai/DeepSeek-V3.1-Terminus',
'deepseek-ai/DeepSeek-V3.1',
'Pro/deepseek-ai/DeepSeek-V3.1',
// DeepSeek V3 series
'deepseek-ai/DeepSeek-V3',
'Pro/deepseek-ai/DeepSeek-V3',
// Moonshot/Kimi series
'moonshotai/Kimi-K2-Instruct-0905',
'Pro/moonshotai/Kimi-K2-Instruct-0905',
'moonshotai/Kimi-Dev-72B',
// Baidu ERNIE
'baidu/ERNIE-4.5-300B-A47B'
]
/**
* Creates a Set for efficient lookup of silicon Anthropic-compatible model IDs.
*/
const SILICON_ANTHROPIC_COMPATIBLE_MODEL_SET = new Set(SILICON_ANTHROPIC_COMPATIBLE_MODELS)
/**
* Checks if a model ID is compatible with Anthropic API on Silicon provider.
*
* @param modelId - The model ID to check
* @returns true if the model supports Anthropic API endpoint
*/
export function isSiliconAnthropicCompatibleModel(modelId: string): boolean {
return SILICON_ANTHROPIC_COMPATIBLE_MODEL_SET.has(modelId)
}
/**
* Silicon provider's Anthropic API host URL.
*/
export const SILICON_ANTHROPIC_API_HOST = 'https://api.siliconflow.cn'

View File

@@ -1,6 +1,7 @@
import { CacheService } from '@main/services/CacheService'
import { loggerService } from '@main/services/LoggerService'
import { reduxService } from '@main/services/ReduxService'
import { isSiliconAnthropicCompatibleModel } from '@shared/config/providers'
import type { ApiModel, Model, Provider } from '@types'
const logger = loggerService.withContext('ApiServerUtils')
@@ -287,6 +288,8 @@ export const getProviderAnthropicModelChecker = (providerId: string): ((m: Model
return (m: Model) => m.endpoint_type === 'anthropic'
case 'aihubmix':
return (m: Model) => m.id.includes('claude')
case 'silicon':
return (m: Model) => isSiliconAnthropicCompatibleModel(m.id)
default:
// allow all models when checker not configured
return () => true

View File

@@ -73,6 +73,7 @@ import {
import storeSyncService from './services/StoreSyncService'
import { themeService } from './services/ThemeService'
import VertexAIService from './services/VertexAIService'
import VolcengineService from './services/VolcengineService'
import WebSocketService from './services/WebSocketService'
import { setOpenLinkExternal } from './services/WebviewService'
import { windowService } from './services/WindowService'
@@ -1077,6 +1078,14 @@ export function registerIpc(mainWindow: BrowserWindow, app: Electron.App) {
ipcMain.handle(IpcChannel.WebSocket_SendFile, WebSocketService.sendFile)
ipcMain.handle(IpcChannel.WebSocket_GetAllCandidates, WebSocketService.getAllCandidates)
// Volcengine
ipcMain.handle(IpcChannel.Volcengine_SaveCredentials, VolcengineService.saveCredentials)
ipcMain.handle(IpcChannel.Volcengine_HasCredentials, VolcengineService.hasCredentials)
ipcMain.handle(IpcChannel.Volcengine_ClearCredentials, VolcengineService.clearCredentials)
ipcMain.handle(IpcChannel.Volcengine_ListModels, VolcengineService.listModels)
ipcMain.handle(IpcChannel.Volcengine_GetAuthHeaders, VolcengineService.getAuthHeaders)
ipcMain.handle(IpcChannel.Volcengine_MakeRequest, VolcengineService.makeRequest)
ipcMain.handle(IpcChannel.APP_CrashRenderProcess, () => {
mainWindow.webContents.forcefullyCrashRenderer()
})

View File

@@ -0,0 +1,732 @@
import { loggerService } from '@logger'
import crypto from 'crypto'
import { app, net, safeStorage } from 'electron'
import fs from 'fs'
import path from 'path'
import * as z from 'zod'
import { getConfigDir } from '../utils/file'
const logger = loggerService.withContext('VolcengineService')
// Configuration constants
const CONFIG = {
ALGORITHM: 'HMAC-SHA256',
REQUEST_TYPE: 'request',
DEFAULT_REGION: 'cn-beijing',
SERVICE_NAME: 'ark',
DEFAULT_HEADERS: {
'content-type': 'application/json',
accept: 'application/json'
},
API_URLS: {
ARK_HOST: 'open.volcengineapi.com'
},
CREDENTIALS_FILE_NAME: '.volcengine_credentials',
API_VERSION: '2024-01-01',
DEFAULT_PAGE_SIZE: 100
} as const
// Request schemas
const ListFoundationModelsRequestSchema = z.object({
PageNumber: z.optional(z.number()),
PageSize: z.optional(z.number())
})
const ListEndpointsRequestSchema = z.object({
ProjectName: z.optional(z.string()),
PageNumber: z.optional(z.number()),
PageSize: z.optional(z.number())
})
// Response schemas - only keep fields needed for model list
const FoundationModelItemSchema = z.object({
Name: z.string(),
DisplayName: z.optional(z.string()),
Description: z.optional(z.string())
})
const EndpointItemSchema = z.object({
Id: z.string(),
Name: z.optional(z.string()),
Description: z.optional(z.string()),
ModelReference: z.optional(
z.object({
FoundationModel: z.optional(
z.object({
Name: z.optional(z.string()),
ModelVersion: z.optional(z.string())
})
),
CustomModelId: z.optional(z.string())
})
)
})
const ListFoundationModelsResponseSchema = z.object({
Result: z.object({
TotalCount: z.number(),
Items: z.array(FoundationModelItemSchema)
})
})
const ListEndpointsResponseSchema = z.object({
Result: z.object({
TotalCount: z.number(),
Items: z.array(EndpointItemSchema)
})
})
// Infer types from schemas
type ListFoundationModelsRequest = z.infer<typeof ListFoundationModelsRequestSchema>
type ListEndpointsRequest = z.infer<typeof ListEndpointsRequestSchema>
type ListFoundationModelsResponse = z.infer<typeof ListFoundationModelsResponseSchema>
type ListEndpointsResponse = z.infer<typeof ListEndpointsResponseSchema>
// ============= Internal Type Definitions =============
interface VolcengineCredentials {
accessKeyId: string
secretAccessKey: string
}
interface SignedRequestParams {
method: 'GET' | 'POST'
host: string
path: string
query: Record<string, string>
headers: Record<string, string>
body?: string
service: string
region: string
}
interface SignedHeaders {
Authorization: string
'X-Date': string
'X-Content-Sha256': string
Host: string
}
interface ModelInfo {
id: string
name: string
description?: string
created?: number
}
interface ListModelsResult {
models: ModelInfo[]
total?: number
warnings?: string[]
}
// Custom error class
class VolcengineServiceError extends Error {
constructor(
message: string,
public readonly cause?: unknown
) {
super(message)
this.name = 'VolcengineServiceError'
}
}
/**
* Volcengine API Signing Service
*
* Implements HMAC-SHA256 signing algorithm for Volcengine API authentication.
* Securely stores credentials using Electron's safeStorage.
*/
class VolcengineService {
private readonly credentialsFilePath: string
constructor() {
this.credentialsFilePath = this.getCredentialsFilePath()
}
/**
* Get the path for storing encrypted credentials
*/
private getCredentialsFilePath(): string {
const oldPath = path.join(app.getPath('userData'), CONFIG.CREDENTIALS_FILE_NAME)
if (fs.existsSync(oldPath)) {
return oldPath
}
return path.join(getConfigDir(), CONFIG.CREDENTIALS_FILE_NAME)
}
// ============= Cryptographic Helper Methods =============
/**
* Calculate SHA256 hash of data and return hex encoded string
*/
private sha256Hash(data: string | Buffer): string {
return crypto.createHash('sha256').update(data).digest('hex')
}
/**
* Calculate HMAC-SHA256 and return buffer
*/
private hmacSha256(key: Buffer | string, data: string): Buffer {
return crypto.createHmac('sha256', key).update(data, 'utf8').digest()
}
/**
* Calculate HMAC-SHA256 and return hex encoded string
*/
private hmacSha256Hex(key: Buffer | string, data: string): string {
return crypto.createHmac('sha256', key).update(data, 'utf8').digest('hex')
}
/**
* URL encode according to RFC3986
*/
private uriEncode(str: string, encodeSlash: boolean = true): string {
if (!str) return ''
return str
.split('')
.map((char) => {
if (
(char >= 'A' && char <= 'Z') ||
(char >= 'a' && char <= 'z') ||
(char >= '0' && char <= '9') ||
char === '_' ||
char === '-' ||
char === '~' ||
char === '.'
) {
return char
}
if (char === '/' && !encodeSlash) {
return char
}
return encodeURIComponent(char)
})
.join('')
}
// ============= Signing Implementation =============
/**
* Get current UTC time in ISO8601 format (YYYYMMDD'T'HHMMSS'Z')
*/
private getIso8601DateTime(): string {
const now = new Date()
return now
.toISOString()
.replace(/[-:]/g, '')
.replace(/\.\d{3}/, '')
}
/**
* Get date portion from datetime (YYYYMMDD)
*/
private getDateFromDateTime(dateTime: string): string {
return dateTime.substring(0, 8)
}
/**
* Build canonical query string from query parameters
*/
private buildCanonicalQueryString(query: Record<string, string>): string {
if (!query || Object.keys(query).length === 0) {
return ''
}
return Object.keys(query)
.sort()
.map((key) => `${this.uriEncode(key)}=${this.uriEncode(query[key])}`)
.join('&')
}
/**
* Build canonical headers string
*/
private buildCanonicalHeaders(headers: Record<string, string>): {
canonicalHeaders: string
signedHeaders: string
} {
const sortedKeys = Object.keys(headers)
.map((k) => k.toLowerCase())
.sort()
const canonicalHeaders = sortedKeys.map((key) => `${key}:${headers[key]?.trim() || ''}`).join('\n') + '\n'
const signedHeaders = sortedKeys.join(';')
return { canonicalHeaders, signedHeaders }
}
/**
* Create the signing key through a series of HMAC operations
*
* kSecret = SecretAccessKey
* kDate = HMAC(kSecret, Date)
* kRegion = HMAC(kDate, Region)
* kService = HMAC(kRegion, Service)
* kSigning = HMAC(kService, "request")
*/
private deriveSigningKey(secretKey: string, date: string, region: string, service: string): Buffer {
const kDate = this.hmacSha256(secretKey, date)
const kRegion = this.hmacSha256(kDate, region)
const kService = this.hmacSha256(kRegion, service)
const kSigning = this.hmacSha256(kService, CONFIG.REQUEST_TYPE)
return kSigning
}
/**
* Create canonical request string
*
* CanonicalRequest =
* HTTPRequestMethod + '\n' +
* CanonicalURI + '\n' +
* CanonicalQueryString + '\n' +
* CanonicalHeaders + '\n' +
* SignedHeaders + '\n' +
* HexEncode(Hash(RequestPayload))
*/
private createCanonicalRequest(
method: string,
canonicalUri: string,
canonicalQueryString: string,
canonicalHeaders: string,
signedHeaders: string,
payloadHash: string
): string {
return [method, canonicalUri, canonicalQueryString, canonicalHeaders, signedHeaders, payloadHash].join('\n')
}
/**
* Create string to sign
*
* StringToSign =
* Algorithm + '\n' +
* RequestDateTime + '\n' +
* CredentialScope + '\n' +
* HexEncode(Hash(CanonicalRequest))
*/
private createStringToSign(dateTime: string, credentialScope: string, canonicalRequest: string): string {
const hashedCanonicalRequest = this.sha256Hash(canonicalRequest)
return [CONFIG.ALGORITHM, dateTime, credentialScope, hashedCanonicalRequest].join('\n')
}
/**
* Generate signature for the request
*/
private generateSignature(params: SignedRequestParams, credentials: VolcengineCredentials): SignedHeaders {
const { method, host, path: requestPath, query, body, service, region } = params
// Step 1: Prepare datetime
const dateTime = this.getIso8601DateTime()
const date = this.getDateFromDateTime(dateTime)
// Step 2: Calculate payload hash
const payloadHash = this.sha256Hash(body || '')
// Step 3: Prepare headers for signing
const headersToSign: Record<string, string> = {
host: host,
'x-date': dateTime,
'x-content-sha256': payloadHash,
'content-type': 'application/json'
}
// Step 4: Build canonical components
const canonicalUri = this.uriEncode(requestPath, false) || '/'
const canonicalQueryString = this.buildCanonicalQueryString(query)
const { canonicalHeaders, signedHeaders } = this.buildCanonicalHeaders(headersToSign)
// Step 5: Create canonical request
const canonicalRequest = this.createCanonicalRequest(
method.toUpperCase(),
canonicalUri,
canonicalQueryString,
canonicalHeaders,
signedHeaders,
payloadHash
)
// Step 6: Create credential scope and string to sign
const credentialScope = `${date}/${region}/${service}/${CONFIG.REQUEST_TYPE}`
const stringToSign = this.createStringToSign(dateTime, credentialScope, canonicalRequest)
// Step 7: Calculate signature
const signingKey = this.deriveSigningKey(credentials.secretAccessKey, date, region, service)
const signature = this.hmacSha256Hex(signingKey, stringToSign)
// Step 8: Build authorization header
const authorization = `${CONFIG.ALGORITHM} Credential=${credentials.accessKeyId}/${credentialScope}, SignedHeaders=${signedHeaders}, Signature=${signature}`
return {
Authorization: authorization,
'X-Date': dateTime,
'X-Content-Sha256': payloadHash,
Host: host
}
}
// ============= Credential Management =============
/**
* Save credentials securely using Electron's safeStorage
*/
public saveCredentials = async (
_: Electron.IpcMainInvokeEvent,
accessKeyId: string,
secretAccessKey: string
): Promise<void> => {
try {
if (!accessKeyId || !secretAccessKey) {
throw new VolcengineServiceError('Access Key ID and Secret Access Key are required')
}
const credentials: VolcengineCredentials = { accessKeyId, secretAccessKey }
const credentialsJson = JSON.stringify(credentials)
const encryptedData = safeStorage.encryptString(credentialsJson)
// Ensure directory exists
const dir = path.dirname(this.credentialsFilePath)
if (!fs.existsSync(dir)) {
await fs.promises.mkdir(dir, { recursive: true })
}
await fs.promises.writeFile(this.credentialsFilePath, encryptedData)
logger.info('Volcengine credentials saved successfully')
} catch (error) {
logger.error('Failed to save Volcengine credentials:', error as Error)
throw new VolcengineServiceError('Failed to save credentials', error)
}
}
/**
* Load credentials from encrypted storage
* @throws VolcengineServiceError if credentials file exists but is corrupted
*/
private async loadCredentials(): Promise<VolcengineCredentials | null> {
if (!fs.existsSync(this.credentialsFilePath)) {
return null
}
try {
const encryptedData = await fs.promises.readFile(this.credentialsFilePath)
const decryptedJson = safeStorage.decryptString(Buffer.from(encryptedData))
return JSON.parse(decryptedJson) as VolcengineCredentials
} catch (error) {
logger.error('Failed to load Volcengine credentials:', error as Error)
throw new VolcengineServiceError(
'Credentials file exists but could not be loaded. Please re-enter your credentials.',
error
)
}
}
/**
* Check if credentials exist
*/
public hasCredentials = async (): Promise<boolean> => {
return fs.existsSync(this.credentialsFilePath)
}
/**
* Clear stored credentials
*/
public clearCredentials = async (): Promise<void> => {
try {
if (fs.existsSync(this.credentialsFilePath)) {
await fs.promises.unlink(this.credentialsFilePath)
logger.info('Volcengine credentials cleared')
}
} catch (error) {
logger.error('Failed to clear Volcengine credentials:', error as Error)
throw new VolcengineServiceError('Failed to clear credentials', error)
}
}
// ============= API Methods =============
/**
* Make a signed request to Volcengine API
*/
private async makeSignedRequest<T>(
method: 'GET' | 'POST',
host: string,
path: string,
action: string,
version: string,
query?: Record<string, string>,
body?: Record<string, unknown>,
service: string = CONFIG.SERVICE_NAME,
region: string = CONFIG.DEFAULT_REGION
): Promise<T> {
const credentials = await this.loadCredentials()
if (!credentials) {
throw new VolcengineServiceError('No credentials found. Please save credentials first.')
}
const fullQuery: Record<string, string> = {
Action: action,
Version: version,
...query
}
const bodyString = body ? JSON.stringify(body) : ''
const signedHeaders = this.generateSignature(
{
method,
host,
path,
query: fullQuery,
headers: {},
body: bodyString,
service,
region
},
credentials
)
// Build URL with query string (use simple encoding for URL, canonical encoding is only for signature)
const urlParams = new URLSearchParams(fullQuery)
const url = `https://${host}${path}?${urlParams.toString()}`
const requestHeaders: Record<string, string> = {
...CONFIG.DEFAULT_HEADERS,
Authorization: signedHeaders.Authorization,
'X-Date': signedHeaders['X-Date'],
'X-Content-Sha256': signedHeaders['X-Content-Sha256']
}
logger.debug('Making Volcengine API request', { url, method, action })
try {
const response = await net.fetch(url, {
method,
headers: requestHeaders,
body: method === 'POST' && bodyString ? bodyString : undefined
})
if (!response.ok) {
const errorText = await response.text()
logger.error(`Volcengine API error: ${response.status}`, { errorText })
throw new VolcengineServiceError(`API request failed: ${response.status} - ${errorText}`)
}
return (await response.json()) as T
} catch (error) {
if (error instanceof VolcengineServiceError) {
throw error
}
logger.error('Volcengine API request failed:', error as Error)
throw new VolcengineServiceError('API request failed', error)
}
}
/**
* List foundation models from Volcengine ARK
*/
private async listFoundationModels(region: string = CONFIG.DEFAULT_REGION): Promise<ListFoundationModelsResponse> {
const requestBody: ListFoundationModelsRequest = {
PageNumber: 1,
PageSize: CONFIG.DEFAULT_PAGE_SIZE
}
const response = await this.makeSignedRequest<unknown>(
'POST',
CONFIG.API_URLS.ARK_HOST,
'/',
'ListFoundationModels',
CONFIG.API_VERSION,
{},
requestBody,
CONFIG.SERVICE_NAME,
region
)
return ListFoundationModelsResponseSchema.parse(response)
}
/**
* List user-created endpoints from Volcengine ARK
*/
private async listEndpoints(
projectName?: string,
region: string = CONFIG.DEFAULT_REGION
): Promise<ListEndpointsResponse> {
const requestBody: ListEndpointsRequest = {
ProjectName: projectName || 'default',
PageNumber: 1,
PageSize: CONFIG.DEFAULT_PAGE_SIZE
}
const response = await this.makeSignedRequest<unknown>(
'POST',
CONFIG.API_URLS.ARK_HOST,
'/',
'ListEndpoints',
CONFIG.API_VERSION,
{},
requestBody,
CONFIG.SERVICE_NAME,
region
)
return ListEndpointsResponseSchema.parse(response)
}
/**
* List all available models from Volcengine ARK
* Combines foundation models and user-created endpoints
*/
public listModels = async (
_?: Electron.IpcMainInvokeEvent,
projectName?: string,
region?: string
): Promise<ListModelsResult> => {
try {
const effectiveRegion = region || CONFIG.DEFAULT_REGION
const [foundationModelsResult, endpointsResult] = await Promise.allSettled([
this.listFoundationModels(effectiveRegion),
this.listEndpoints(projectName, effectiveRegion)
])
const models: ModelInfo[] = []
const warnings: string[] = []
if (foundationModelsResult.status === 'fulfilled') {
const foundationModels = foundationModelsResult.value
for (const item of foundationModels.Result.Items) {
models.push({
id: item.Name,
name: item.DisplayName || item.Name,
description: item.Description
})
}
logger.info(`Found ${foundationModels.Result.Items.length} foundation models`)
} else {
const errorMsg = `Failed to fetch foundation models: ${foundationModelsResult.reason}`
logger.warn(errorMsg)
warnings.push(errorMsg)
}
// Process endpoints
if (endpointsResult.status === 'fulfilled') {
const endpoints = endpointsResult.value
for (const item of endpoints.Result.Items) {
const modelRef = item.ModelReference
const foundationModelName = modelRef?.FoundationModel?.Name
const modelVersion = modelRef?.FoundationModel?.ModelVersion
const customModelId = modelRef?.CustomModelId
let displayName = item.Name || item.Id
if (foundationModelName) {
displayName = modelVersion ? `${foundationModelName} (${modelVersion})` : foundationModelName
} else if (customModelId) {
displayName = customModelId
}
models.push({
id: item.Id,
name: displayName,
description: item.Description
})
}
logger.info(`Found ${endpoints.Result.Items.length} endpoints`)
} else {
const errorMsg = `Failed to fetch endpoints: ${endpointsResult.reason}`
logger.warn(errorMsg)
warnings.push(errorMsg)
}
// If both failed, throw error
if (foundationModelsResult.status === 'rejected' && endpointsResult.status === 'rejected') {
throw new VolcengineServiceError('Failed to fetch both foundation models and endpoints')
}
const total =
(foundationModelsResult.status === 'fulfilled' ? foundationModelsResult.value.Result.TotalCount : 0) +
(endpointsResult.status === 'fulfilled' ? endpointsResult.value.Result.TotalCount : 0)
logger.info(`Total models found: ${models.length}`)
return {
models,
total,
warnings: warnings.length > 0 ? warnings : undefined
}
} catch (error) {
logger.error('Failed to list Volcengine models:', error as Error)
throw new VolcengineServiceError('Failed to list models', error)
}
}
/**
* Get authorization headers for external use
* This allows the renderer process to make direct API calls with proper authentication
*/
public getAuthHeaders = async (
_: Electron.IpcMainInvokeEvent,
params: {
method: 'GET' | 'POST'
host: string
path: string
query?: Record<string, string>
body?: string
service?: string
region?: string
}
): Promise<SignedHeaders> => {
const credentials = await this.loadCredentials()
if (!credentials) {
throw new VolcengineServiceError('No credentials found. Please save credentials first.')
}
return this.generateSignature(
{
method: params.method,
host: params.host,
path: params.path,
query: params.query || {},
headers: {},
body: params.body,
service: params.service || CONFIG.SERVICE_NAME,
region: params.region || CONFIG.DEFAULT_REGION
},
credentials
)
}
/**
* Make a generic signed API request
* This is a more flexible method that allows custom API calls
*/
public makeRequest = async (
_: Electron.IpcMainInvokeEvent,
params: {
method: 'GET' | 'POST'
host: string
path: string
action: string
version: string
query?: Record<string, string>
body?: Record<string, unknown>
service?: string
region?: string
}
): Promise<unknown> => {
return this.makeSignedRequest(
params.method,
params.host,
params.path,
params.action,
params.version,
params.query,
params.body,
params.service || CONFIG.SERVICE_NAME,
params.region || CONFIG.DEFAULT_REGION
)
}
}
export default new VolcengineService()
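The `listModels` flow above fires both requests, tolerates a single failure, surfaces partial failures as warnings, and throws only when both sources fail. That aggregation pattern can be sketched in isolation; the names below are illustrative, not the service's actual types:

```typescript
type FetchResult = { items: string[] }

// Aggregate two independent fetches: tolerate one failure, fail only if both fail.
async function aggregate(
  fetchA: () => Promise<FetchResult>,
  fetchB: () => Promise<FetchResult>
): Promise<{ models: string[]; warnings: string[] }> {
  const [a, b] = await Promise.allSettled([fetchA(), fetchB()])
  const models: string[] = []
  const warnings: string[] = []
  if (a.status === 'fulfilled') models.push(...a.value.items)
  else warnings.push(`source A failed: ${a.reason}`)
  if (b.status === 'fulfilled') models.push(...b.value.items)
  else warnings.push(`source B failed: ${b.reason}`)
  if (a.status === 'rejected' && b.status === 'rejected') {
    throw new Error('both sources failed')
  }
  return { models, warnings }
}
```

Unlike `Promise.all`, `Promise.allSettled` never short-circuits, so one failing API call cannot discard the other call's results.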

View File

@@ -572,6 +572,41 @@ const api = {
status: () => ipcRenderer.invoke(IpcChannel.WebSocket_Status),
sendFile: (filePath: string) => ipcRenderer.invoke(IpcChannel.WebSocket_SendFile, filePath),
getAllCandidates: () => ipcRenderer.invoke(IpcChannel.WebSocket_GetAllCandidates)
},
volcengine: {
saveCredentials: (accessKeyId: string, secretAccessKey: string): Promise<void> =>
ipcRenderer.invoke(IpcChannel.Volcengine_SaveCredentials, accessKeyId, secretAccessKey),
hasCredentials: (): Promise<boolean> => ipcRenderer.invoke(IpcChannel.Volcengine_HasCredentials),
clearCredentials: (): Promise<void> => ipcRenderer.invoke(IpcChannel.Volcengine_ClearCredentials),
listModels: (
projectName?: string,
region?: string
): Promise<{
models: Array<{ id: string; name: string; description?: string; created?: number }>
total?: number
warnings?: string[]
}> => ipcRenderer.invoke(IpcChannel.Volcengine_ListModels, projectName, region),
getAuthHeaders: (params: {
method: 'GET' | 'POST'
host: string
path: string
query?: Record<string, string>
body?: string
service?: string
region?: string
}): Promise<{ Authorization: string; 'X-Date': string; 'X-Content-Sha256': string; Host: string }> =>
ipcRenderer.invoke(IpcChannel.Volcengine_GetAuthHeaders, params),
makeRequest: (params: {
method: 'GET' | 'POST'
host: string
path: string
action: string
version: string
query?: Record<string, string>
body?: Record<string, unknown>
service?: string
region?: string
}): Promise<unknown> => ipcRenderer.invoke(IpcChannel.Volcengine_MakeRequest, params)
}
}

View File

@@ -50,7 +50,40 @@ export default class ModernAiProvider {
private model?: Model
private localProvider: Awaited<AiSdkProvider> | null = null
// Constructor overload signatures
/**
* Constructor for ModernAiProvider
*
* @param modelOrProvider - Model or Provider object
* @param provider - Optional Provider object (only used when first param is Model)
*
* @remarks
* **Important behavior notes**:
*
* 1. When called with `(model)`:
* - Calls `getActualProvider(model)` to retrieve and format the provider
* - URL will be automatically formatted via `formatProviderApiHost`, adding version suffixes like `/v1`
*
* 2. When called with `(model, provider)`:
* - **Directly uses the provided provider WITHOUT going through `getActualProvider`**
* - **URL will NOT be automatically formatted, `/v1` suffix will NOT be added**
* - This is legacy behavior kept for backward compatibility
*
* 3. When called with `(provider)`:
* - Directly uses the provider without requiring a model
* - Used for operations that don't need a model (e.g., fetchModels)
*
* @example
* ```typescript
* // Recommended: Auto-format URL
* const ai = new ModernAiProvider(model)
*
* // Not recommended: Skip URL formatting (only for special cases)
* const ai = new ModernAiProvider(model, customProvider)
*
* // For operations that don't need a model
* const ai = new ModernAiProvider(provider)
* ```
*/
constructor(model: Model, provider?: Provider)
constructor(provider: Provider)
constructor(modelOrProvider: Model | Provider, provider?: Provider)
@@ -156,7 +189,7 @@ export default class ModernAiProvider {
config: ModernAiProviderConfig
): Promise<CompletionsResult> {
// ai-gateway is not an image/generation endpoint, so don't go through the legacy path for now
if (config.isImageGenerationEndpoint && config.provider!.id !== SystemProviderIds['ai-gateway']) {
if (config.isImageGenerationEndpoint && this.getActualProvider().id !== SystemProviderIds['ai-gateway']) {
// Use the legacy implementation for image generation (supports advanced features such as image editing)
if (!config.uiMessages) {
throw new Error('uiMessages is required for image generation endpoint')
@@ -322,10 +355,10 @@ export default class ModernAiProvider {
}
}
/**
* Image generation implementation using the modern AI SDK, with streaming output support
* @deprecated Switched to the legacy implementation to support advanced features such as image editing
*/
// /**
// * Image generation implementation using the modern AI SDK, with streaming output support
// * @deprecated Switched to the legacy implementation to support advanced features such as image editing
// */
/*
private async modernImageGeneration(
model: ImageModel,

View File

@@ -14,6 +14,7 @@ import { OpenAIAPIClient } from './openai/OpenAIApiClient'
import { OpenAIResponseAPIClient } from './openai/OpenAIResponseAPIClient'
import { OVMSClient } from './ovms/OVMSClient'
import { PPIOAPIClient } from './ppio/PPIOAPIClient'
import { VolcengineAPIClient } from './volcengine/VolcengineAPIClient'
import { ZhipuAPIClient } from './zhipu/ZhipuAPIClient'
const logger = loggerService.withContext('ApiClientFactory')
@@ -64,6 +65,12 @@ export class ApiClientFactory {
return instance
}
if (provider.id === 'doubao') {
logger.debug(`Creating VolcengineAPIClient for provider: ${provider.id}`)
instance = new VolcengineAPIClient(provider) as BaseApiClient
return instance
}
if (provider.id === 'ovms') {
logger.debug(`Creating OVMSClient for provider: ${provider.id}`)
instance = new OVMSClient(provider) as BaseApiClient

View File

@@ -11,10 +11,8 @@ import {
findTokenLimit,
GEMINI_FLASH_MODEL_REGEX,
getThinkModelType,
isClaudeReasoningModel,
isDeepSeekHybridInferenceModel,
isDoubaoThinkingAutoModel,
isGeminiReasoningModel,
isGPT5SeriesModel,
isGrokReasoningModel,
isNotSupportSystemMessageModel,
@@ -651,7 +649,6 @@ export class OpenAIAPIClient extends OpenAIBaseClient<
logger.warn('No user message. Some providers may not support.')
}
// Poe needs reasoningEffort passed via the user message
const reasoningEffort = this.getReasoningEffort(assistant, model)
const lastUserMsg = userMessages.findLast((m) => m.role === 'user')
@@ -662,22 +659,6 @@ export class OpenAIAPIClient extends OpenAIBaseClient<
lastUserMsg.content = processPostsuffixQwen3Model(currentContent, qwenThinkModeEnabled)
}
if (this.provider.id === SystemProviderIds.poe) {
// If Poe ever supports the reasoning_effort parameter, this block can be removed
let suffix = ''
if (isGPT5SeriesModel(model) && reasoningEffort.reasoning_effort) {
suffix = ` --reasoning_effort ${reasoningEffort.reasoning_effort}`
} else if (isClaudeReasoningModel(model) && reasoningEffort.thinking?.budget_tokens) {
suffix = ` --thinking_budget ${reasoningEffort.thinking.budget_tokens}`
} else if (isGeminiReasoningModel(model) && reasoningEffort.extra_body?.google?.thinking_config) {
suffix = ` --thinking_budget ${reasoningEffort.extra_body.google.thinking_config.thinking_budget}`
}
// FIXME: Poe does not support multiple text parts; when uploading a text file it uses a text part instead of a file part, which causes problems
// Temporary workaround: force Poe to use string content, although Poe does partially support arrays
if (typeof lastUserMsg.content === 'string') {
lastUserMsg.content += suffix
}
}
}
// 4. Final request messages

View File

@@ -0,0 +1,74 @@
import type OpenAI from '@cherrystudio/openai'
import { loggerService } from '@logger'
import { getVolcengineProjectName, getVolcengineRegion } from '@renderer/hooks/useVolcengine'
import type { Provider } from '@renderer/types'
import { OpenAIAPIClient } from '../openai/OpenAIApiClient'
const logger = loggerService.withContext('VolcengineAPIClient')
/**
* Volcengine (Doubao) API Client
*
* Extends OpenAIAPIClient for standard chat completions (OpenAI-compatible),
* but overrides listModels to use Volcengine's signed API via IPC.
*/
export class VolcengineAPIClient extends OpenAIAPIClient {
constructor(provider: Provider) {
super(provider)
}
/**
* List models using Volcengine's signed API
* This calls the main process VolcengineService which handles HMAC-SHA256 signing
*/
override async listModels(): Promise<OpenAI.Models.Model[]> {
try {
const hasCredentials = await window.api.volcengine.hasCredentials()
if (!hasCredentials) {
logger.info('Volcengine credentials not configured, falling back to OpenAI-compatible list')
// Fall back to standard OpenAI-compatible API if no Volcengine credentials
return super.listModels()
}
logger.info('Fetching models from Volcengine API using signed request')
const projectName = getVolcengineProjectName()
const region = getVolcengineRegion()
const response = await window.api.volcengine.listModels(projectName, region)
if (!response || !response.models) {
logger.warn('Empty response from Volcengine listModels')
return []
}
// Notify user of any partial failures
if (response.warnings && response.warnings.length > 0) {
for (const warning of response.warnings) {
logger.warn(warning)
}
window.toast?.warning('Some Volcengine models could not be fetched. Check logs for details.')
}
const models: OpenAI.Models.Model[] = response.models.map((model) => ({
id: model.id,
object: 'model' as const,
created: model.created || Math.floor(Date.now() / 1000),
owned_by: 'volcengine',
// @ts-ignore - name is used by the UI to display the model name
name: model.name || model.id
}))
logger.info(`Found ${models.length} models from Volcengine API`)
return models
} catch (error) {
logger.error('Failed to list Volcengine models:', error as Error)
// Notify user before falling back
window.toast?.warning('Failed to fetch Volcengine models. Check credentials if this persists.')
// Fall back to standard OpenAI-compatible API on error
logger.info('Falling back to OpenAI-compatible model list')
return super.listModels()
}
}
}

View File

@@ -1,6 +1,6 @@
import type { WebSearchPluginConfig } from '@cherrystudio/ai-core/built-in/plugins'
import { loggerService } from '@logger'
import { isSupportedThinkingTokenQwenModel } from '@renderer/config/models'
import { isGemini3Model, isSupportedThinkingTokenQwenModel } from '@renderer/config/models'
import type { MCPTool } from '@renderer/types'
import { type Assistant, type Message, type Model, type Provider, SystemProviderIds } from '@renderer/types'
import type { Chunk } from '@renderer/types/chunk'
@@ -9,11 +9,13 @@ import type { LanguageModelMiddleware } from 'ai'
import { extractReasoningMiddleware, simulateStreamingMiddleware } from 'ai'
import { isEmpty } from 'lodash'
import { getAiSdkProviderId } from '../provider/factory'
import { isOpenRouterGeminiGenerateImageModel } from '../utils/image'
import { noThinkMiddleware } from './noThinkMiddleware'
import { openrouterGenerateImageMiddleware } from './openrouterGenerateImageMiddleware'
import { openrouterReasoningMiddleware } from './openrouterReasoningMiddleware'
import { qwenThinkingMiddleware } from './qwenThinkingMiddleware'
import { skipGeminiThoughtSignatureMiddleware } from './skipGeminiThoughtSignatureMiddleware'
import { toolChoiceMiddleware } from './toolChoiceMiddleware'
const logger = loggerService.withContext('AiSdkMiddlewareBuilder')
@@ -257,6 +259,15 @@ function addModelSpecificMiddlewares(builder: AiSdkMiddlewareBuilder, config: Ai
middleware: openrouterGenerateImageMiddleware()
})
}
if (isGemini3Model(config.model)) {
const aiSdkId = getAiSdkProviderId(config.provider)
builder.add({
name: 'skip-gemini3-thought-signature',
middleware: skipGeminiThoughtSignatureMiddleware(aiSdkId)
})
logger.debug('Added skip Gemini3 thought signature middleware')
}
}
/**

View File

@@ -0,0 +1,36 @@
import type { LanguageModelMiddleware } from 'ai'
/**
* skip Gemini Thought Signature Middleware
 * Due to the complexity of multi-model client requests (which can switch to other models mid-process),
 * it was decided to skip all Gemini3 thought signatures via this middleware.
* @param aiSdkId AI SDK Provider ID
* @returns LanguageModelMiddleware
*/
export function skipGeminiThoughtSignatureMiddleware(aiSdkId: string): LanguageModelMiddleware {
const MAGIC_STRING = 'skip_thought_signature_validator'
return {
middlewareVersion: 'v2',
transformParams: async ({ params }) => {
const transformedParams = { ...params }
// Process messages in prompt
if (transformedParams.prompt && Array.isArray(transformedParams.prompt)) {
transformedParams.prompt = transformedParams.prompt.map((message) => {
if (typeof message.content !== 'string') {
for (const part of message.content) {
const googleOptions = part?.providerOptions?.[aiSdkId]
if (googleOptions?.thoughtSignature) {
googleOptions.thoughtSignature = MAGIC_STRING
}
}
}
return message
})
}
return transformedParams
}
}
}
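On sample data, the replacement performed in `transformParams` reduces to the following; the prompt shape here is illustrative, not the exact AI SDK message type:

```typescript
const MAGIC_STRING = 'skip_thought_signature_validator'
const aiSdkId = 'google' // assumed provider id for illustration

type Part = {
  type: string
  text?: string
  providerOptions?: Record<string, { thoughtSignature?: string }>
}
type PromptMessage = { role: string; content: string | Part[] }

// Sample prompt carrying a stale thought signature from a previous Gemini turn.
const prompt: PromptMessage[] = [
  {
    role: 'assistant',
    content: [
      {
        type: 'text',
        text: 'partial answer',
        providerOptions: { [aiSdkId]: { thoughtSignature: 'stale-sig' } }
      }
    ]
  }
]

// Same in-place replacement the middleware applies to each non-string content part.
for (const message of prompt) {
  if (typeof message.content !== 'string') {
    for (const part of message.content) {
      const googleOptions = part.providerOptions?.[aiSdkId]
      if (googleOptions?.thoughtSignature) {
        googleOptions.thoughtSignature = MAGIC_STRING
      }
    }
  }
}
```

Replacing rather than deleting the signature keeps the part structure intact while neutralizing validation against a signature produced by a different model.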

View File

@@ -180,6 +180,10 @@ describe('messageConverter', () => {
const result = await convertMessagesToSdkMessages([initialUser, assistant, finalUser], model)
expect(result).toEqual([
{
role: 'user',
content: [{ type: 'text', text: 'Start editing' }]
},
{
role: 'assistant',
content: [{ type: 'text', text: 'Here is the current preview' }]
@@ -217,6 +221,7 @@ describe('messageConverter', () => {
expect(result).toEqual([
{ role: 'system', content: 'fileid://reference' },
{ role: 'user', content: [{ type: 'text', text: 'Use this document as inspiration' }] },
{
role: 'assistant',
content: [{ type: 'text', text: 'Generated previews ready' }]

View File

@@ -194,20 +194,20 @@ async function convertMessageToAssistantModelMessage(
* This function processes messages and transforms them into the format required by the SDK.
* It handles special cases for vision models and image enhancement models.
*
* @param messages - Array of messages to convert. Must contain at least 2 messages when using image enhancement models.
* @param messages - Array of messages to convert. Must contain at least 3 messages when using image enhancement models for special handling.
* @param model - The model configuration that determines conversion behavior
*
* @returns A promise that resolves to an array of SDK-compatible model messages
*
* @remarks
* For image enhancement models with 2+ messages:
* - Expects the second-to-last message (index length-2) to be an assistant message containing image blocks
* - Expects the last message (index length-1) to be a user message
* - Extracts images from the assistant message and appends them to the user message content
* - Returns only the last two processed messages [assistantSdkMessage, userSdkMessage]
* For image enhancement models with 3+ messages:
* - Examines the last 2 messages to find an assistant message containing image blocks
* - If found, extracts images from the assistant message and appends them to the last user message content
* - Returns all converted messages (not just the last two) with the images merged into the user message
* - Typical pattern: [system?, assistant(image), user] -> [system?, assistant, user(image)]
*
* For other models:
* - Returns all converted messages in order
* - Returns all converted messages in order without special image handling
*
* The function automatically detects vision model capabilities and adjusts conversion accordingly.
*/
@@ -220,29 +220,25 @@ export async function convertMessagesToSdkMessages(messages: Message[], model: M
sdkMessages.push(...(Array.isArray(sdkMessage) ? sdkMessage : [sdkMessage]))
}
// Special handling for image enhancement models
// Only keep the last two messages and merge images into the user message
// [system?, user, assistant, user]
// Only merge images into the user message
// [system?, assistant(image), user] -> [system?, assistant, user(image)]
if (isImageEnhancementModel(model) && messages.length >= 3) {
const needUpdatedMessages = messages.slice(-2)
const needUpdatedSdkMessages = sdkMessages.slice(-2)
const assistantMessage = needUpdatedMessages.filter((m) => m.role === 'assistant')[0]
const assistantSdkMessage = needUpdatedSdkMessages.filter((m) => m.role === 'assistant')[0]
const userSdkMessage = needUpdatedSdkMessages.filter((m) => m.role === 'user')[0]
const systemSdkMessages = sdkMessages.filter((m) => m.role === 'system')
const imageBlocks = findImageBlocks(assistantMessage)
const imageParts = await convertImageBlockToImagePart(imageBlocks)
const parts: Array<TextPart | ImagePart | FilePart> = []
if (typeof userSdkMessage.content === 'string') {
parts.push({ type: 'text', text: userSdkMessage.content })
parts.push(...imageParts)
userSdkMessage.content = parts
} else {
userSdkMessage.content.push(...imageParts)
const assistantMessage = needUpdatedMessages.find((m) => m.role === 'assistant')
const userSdkMessage = sdkMessages[sdkMessages.length - 1]
if (assistantMessage && userSdkMessage?.role === 'user') {
const imageBlocks = findImageBlocks(assistantMessage)
const imageParts = await convertImageBlockToImagePart(imageBlocks)
if (imageParts.length > 0) {
if (typeof userSdkMessage.content === 'string') {
userSdkMessage.content = [{ type: 'text', text: userSdkMessage.content }, ...imageParts]
} else if (Array.isArray(userSdkMessage.content)) {
userSdkMessage.content.push(...imageParts)
}
}
}
if (systemSdkMessages.length > 0) {
return [systemSdkMessages[0], assistantSdkMessage, userSdkMessage]
}
return [assistantSdkMessage, userSdkMessage]
}
return sdkMessages
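The merge above reduces to: take the image parts extracted from the trailing assistant message and append them to the last user message's content. A standalone sketch, with simplified part types rather than the real SDK types:

```typescript
type Part = { type: 'text'; text: string } | { type: 'image'; image: string }
type Msg = { role: 'system' | 'user' | 'assistant'; content: string | Part[] }

// Append the assistant's image parts to the last user message's content,
// promoting string content to a parts array when needed.
function mergeImagesIntoLastUser(messages: Msg[], imageParts: Part[]): Msg[] {
  const last = messages[messages.length - 1]
  if (last?.role === 'user' && imageParts.length > 0) {
    if (typeof last.content === 'string') {
      last.content = [{ type: 'text', text: last.content }, ...imageParts]
    } else {
      last.content.push(...imageParts)
    }
  }
  return messages
}
```

Note that all messages are returned in order, matching the new behavior described in the JSDoc (the old code returned only the last two).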

View File

@@ -3,7 +3,6 @@
* Handles retrieval of basic parameters such as temperature, TopP, and timeout
*/
import { DEFAULT_MAX_TOKENS } from '@renderer/config/constant'
import {
isClaude45ReasoningModel,
isClaudeReasoningModel,
@@ -73,11 +72,19 @@ export function getTimeout(model: Model): number {
export function getMaxTokens(assistant: Assistant, model: Model): number | undefined {
// NOTE: ai-sdk adds maxTokens and budgetTokens together
let { maxTokens = DEFAULT_MAX_TOKENS } = getAssistantSettings(assistant)
const assistantSettings = getAssistantSettings(assistant)
const enabledMaxTokens = assistantSettings.enableMaxTokens ?? false
let maxTokens = assistantSettings.maxTokens
// If user hasn't enabled enableMaxTokens, return undefined to let the API use its default value.
// Note: Anthropic API requires max_tokens, but that's handled by the Anthropic client with a fallback.
if (!enabledMaxTokens || maxTokens === undefined) {
return undefined
}
const provider = getProviderByModel(model)
if (isSupportedThinkingTokenClaudeModel(model) && ['anthropic', 'aws-bedrock'].includes(provider.type)) {
const { reasoning_effort: reasoningEffort } = getAssistantSettings(assistant)
const { reasoning_effort: reasoningEffort } = assistantSettings
const budget = getAnthropicThinkingBudget(maxTokens, reasoningEffort, model.id)
if (budget) {
maxTokens -= budget

View File

@@ -106,7 +106,7 @@ export async function buildStreamTextParams(
searchWithTime: store.getState().websearch.searchWithTime
}
const providerOptions = buildProviderOptions(assistant, model, provider, {
const { providerOptions, standardParams } = buildProviderOptions(assistant, model, provider, {
enableReasoning,
enableWebSearch,
enableGenerateImage
@@ -181,11 +181,16 @@ export async function buildStreamTextParams(
}
// Build base parameters
// Note: standardParams (topK, frequencyPenalty, presencePenalty, stopSequences, seed)
// are extracted from custom parameters and passed directly to streamText()
// instead of being placed in providerOptions
const params: StreamTextParams = {
messages: sdkMessages,
maxOutputTokens: getMaxTokens(assistant, model),
temperature: getTemperature(assistant, model),
topP: getTopP(assistant, model),
// Include AI SDK standard params extracted from custom parameters
...standardParams,
abortSignal: options.requestOptions?.signal,
headers,
providerOptions,

View File

@@ -60,8 +60,12 @@ function tryResolveProviderId(identifier: string): ProviderId | null {
export function getAiSdkProviderId(provider: Provider): string {
// 1. Try to resolve provider.id
const resolvedFromId = tryResolveProviderId(provider.id)
if (isAzureOpenAIProvider(provider) && isAzureResponsesEndpoint(provider)) {
return 'azure-responses'
if (isAzureOpenAIProvider(provider)) {
if (isAzureResponsesEndpoint(provider)) {
return 'azure-responses'
} else {
return 'azure'
}
}
if (resolvedFromId) {
return resolvedFromId

View File

@@ -90,6 +90,7 @@ function formatProviderApiHost(provider: Provider): Provider {
if (isAnthropicProvider(provider)) {
const baseHost = formatted.anthropicApiHost || formatted.apiHost
// AI SDK needs /v1 in baseURL, Anthropic SDK will strip it in getSdkClient
formatted.apiHost = formatApiHost(baseHost)
if (!formatted.anthropicApiHost) {
formatted.anthropicApiHost = formatted.apiHost

View File

@@ -0,0 +1,652 @@
/**
* extractAiSdkStandardParams Unit Tests
* Tests for extracting AI SDK standard parameters from custom parameters
*/
import { describe, expect, it, vi } from 'vitest'
import { extractAiSdkStandardParams } from '../options'
// Mock logger to prevent errors
vi.mock('@logger', () => ({
loggerService: {
withContext: () => ({
debug: vi.fn(),
error: vi.fn(),
warn: vi.fn(),
info: vi.fn()
})
}
}))
// Mock settings store
vi.mock('@renderer/store/settings', () => ({
default: (state = { settings: {} }) => state
}))
// Mock hooks to prevent uuid errors
vi.mock('@renderer/hooks/useSettings', () => ({
getStoreSetting: vi.fn(() => ({}))
}))
// Mock uuid to prevent errors
vi.mock('uuid', () => ({
v4: vi.fn(() => 'test-uuid')
}))
// Mock AssistantService to prevent uuid errors
vi.mock('@renderer/services/AssistantService', () => ({
getDefaultAssistant: vi.fn(() => ({
id: 'test-assistant',
name: 'Test Assistant',
settings: {}
})),
getDefaultTopic: vi.fn(() => ({
id: 'test-topic',
assistantId: 'test-assistant',
createdAt: new Date().toISOString()
}))
}))
// Mock provider service
vi.mock('@renderer/services/ProviderService', () => ({
getProviderById: vi.fn(() => ({
id: 'test-provider',
name: 'Test Provider'
}))
}))
// Mock config modules
vi.mock('@renderer/config/models', () => ({
isOpenAIModel: vi.fn(() => false),
isQwenMTModel: vi.fn(() => false),
isSupportFlexServiceTierModel: vi.fn(() => false),
isSupportVerbosityModel: vi.fn(() => false),
getModelSupportedVerbosity: vi.fn(() => [])
}))
vi.mock('@renderer/config/translate', () => ({
mapLanguageToQwenMTModel: vi.fn()
}))
vi.mock('@renderer/utils/provider', () => ({
isSupportServiceTierProvider: vi.fn(() => false),
isSupportVerbosityProvider: vi.fn(() => false)
}))
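The tests below pin down the expected split. A minimal sketch of that behavior, assuming a fixed list of AI SDK standard keys (the real implementation lives in `../options` and may differ):

```typescript
// The eight AI SDK standard parameter names exercised by the tests below.
const AI_SDK_STANDARD_KEYS = new Set([
  'maxOutputTokens', 'temperature', 'topP', 'topK',
  'presencePenalty', 'frequencyPenalty', 'stopSequences', 'seed'
])

// Split custom parameters into AI SDK standard params vs provider-specific params.
// Matching is case-sensitive, so keys like `Temperature` stay provider-specific.
function splitParams(customParams: Record<string, unknown>) {
  const standardParams: Record<string, unknown> = {}
  const providerParams: Record<string, unknown> = {}
  for (const [key, value] of Object.entries(customParams)) {
    if (AI_SDK_STANDARD_KEYS.has(key)) standardParams[key] = value
    else providerParams[key] = value
  }
  return { standardParams, providerParams }
}
```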
describe('extractAiSdkStandardParams', () => {
describe('Positive cases - Standard parameters extraction', () => {
it('should extract all AI SDK standard parameters', () => {
const customParams = {
maxOutputTokens: 1000,
temperature: 0.7,
topP: 0.9,
topK: 40,
presencePenalty: 0.5,
frequencyPenalty: 0.3,
stopSequences: ['STOP', 'END'],
seed: 42
}
const result = extractAiSdkStandardParams(customParams)
expect(result.standardParams).toStrictEqual({
maxOutputTokens: 1000,
temperature: 0.7,
topP: 0.9,
topK: 40,
presencePenalty: 0.5,
frequencyPenalty: 0.3,
stopSequences: ['STOP', 'END'],
seed: 42
})
expect(result.providerParams).toStrictEqual({})
})
it('should extract single standard parameter', () => {
const customParams = {
temperature: 0.8
}
const result = extractAiSdkStandardParams(customParams)
expect(result.standardParams).toStrictEqual({
temperature: 0.8
})
expect(result.providerParams).toStrictEqual({})
})
it('should extract topK parameter', () => {
const customParams = {
topK: 50
}
const result = extractAiSdkStandardParams(customParams)
expect(result.standardParams).toStrictEqual({
topK: 50
})
expect(result.providerParams).toStrictEqual({})
})
it('should extract frequencyPenalty parameter', () => {
const customParams = {
frequencyPenalty: 0.6
}
const result = extractAiSdkStandardParams(customParams)
expect(result.standardParams).toStrictEqual({
frequencyPenalty: 0.6
})
expect(result.providerParams).toStrictEqual({})
})
it('should extract presencePenalty parameter', () => {
const customParams = {
presencePenalty: 0.4
}
const result = extractAiSdkStandardParams(customParams)
expect(result.standardParams).toStrictEqual({
presencePenalty: 0.4
})
expect(result.providerParams).toStrictEqual({})
})
it('should extract stopSequences parameter', () => {
const customParams = {
stopSequences: ['HALT', 'TERMINATE']
}
const result = extractAiSdkStandardParams(customParams)
expect(result.standardParams).toStrictEqual({
stopSequences: ['HALT', 'TERMINATE']
})
expect(result.providerParams).toStrictEqual({})
})
it('should extract seed parameter', () => {
const customParams = {
seed: 12345
}
const result = extractAiSdkStandardParams(customParams)
expect(result.standardParams).toStrictEqual({
seed: 12345
})
expect(result.providerParams).toStrictEqual({})
})
it('should extract maxOutputTokens parameter', () => {
const customParams = {
maxOutputTokens: 2048
}
const result = extractAiSdkStandardParams(customParams)
expect(result.standardParams).toStrictEqual({
maxOutputTokens: 2048
})
expect(result.providerParams).toStrictEqual({})
})
it('should extract topP parameter', () => {
const customParams = {
topP: 0.95
}
const result = extractAiSdkStandardParams(customParams)
expect(result.standardParams).toStrictEqual({
topP: 0.95
})
expect(result.providerParams).toStrictEqual({})
})
})
describe('Negative cases - Provider-specific parameters', () => {
it('should place all non-standard parameters in providerParams', () => {
const customParams = {
customParam: 'value',
anotherParam: 123,
thirdParam: true
}
const result = extractAiSdkStandardParams(customParams)
expect(result.standardParams).toStrictEqual({})
expect(result.providerParams).toStrictEqual({
customParam: 'value',
anotherParam: 123,
thirdParam: true
})
})
it('should place single provider-specific parameter in providerParams', () => {
const customParams = {
reasoningEffort: 'high'
}
const result = extractAiSdkStandardParams(customParams)
expect(result.standardParams).toStrictEqual({})
expect(result.providerParams).toStrictEqual({
reasoningEffort: 'high'
})
})
it('should place model-specific parameter in providerParams', () => {
const customParams = {
thinking: { type: 'enabled', budgetTokens: 5000 }
}
const result = extractAiSdkStandardParams(customParams)
expect(result.standardParams).toStrictEqual({})
expect(result.providerParams).toStrictEqual({
thinking: { type: 'enabled', budgetTokens: 5000 }
})
})
it('should place serviceTier in providerParams', () => {
const customParams = {
serviceTier: 'auto'
}
const result = extractAiSdkStandardParams(customParams)
expect(result.standardParams).toStrictEqual({})
expect(result.providerParams).toStrictEqual({
serviceTier: 'auto'
})
})
it('should place textVerbosity in providerParams', () => {
const customParams = {
textVerbosity: 'high'
}
const result = extractAiSdkStandardParams(customParams)
expect(result.standardParams).toStrictEqual({})
expect(result.providerParams).toStrictEqual({
textVerbosity: 'high'
})
})
})
describe('Mixed parameters', () => {
it('should correctly separate mixed standard and provider-specific parameters', () => {
const customParams = {
temperature: 0.7,
topK: 40,
customParam: 'custom_value',
reasoningEffort: 'medium',
frequencyPenalty: 0.5,
seed: 999
}
const result = extractAiSdkStandardParams(customParams)
expect(result.standardParams).toStrictEqual({
temperature: 0.7,
topK: 40,
frequencyPenalty: 0.5,
seed: 999
})
expect(result.providerParams).toStrictEqual({
customParam: 'custom_value',
reasoningEffort: 'medium'
})
})
it('should handle complex mixed parameters with nested objects', () => {
const customParams = {
topP: 0.9,
presencePenalty: 0.3,
thinking: { type: 'enabled', budgetTokens: 5000 },
stopSequences: ['STOP'],
serviceTier: 'auto',
maxOutputTokens: 4096
}
const result = extractAiSdkStandardParams(customParams)
expect(result.standardParams).toStrictEqual({
topP: 0.9,
presencePenalty: 0.3,
stopSequences: ['STOP'],
maxOutputTokens: 4096
})
expect(result.providerParams).toStrictEqual({
thinking: { type: 'enabled', budgetTokens: 5000 },
serviceTier: 'auto'
})
})
it('should handle all standard params with some provider params', () => {
const customParams = {
maxOutputTokens: 2000,
temperature: 0.8,
topP: 0.95,
topK: 50,
presencePenalty: 0.6,
frequencyPenalty: 0.4,
stopSequences: ['END', 'DONE'],
seed: 777,
customApiParam: 'value',
anotherCustomParam: 123
}
const result = extractAiSdkStandardParams(customParams)
expect(result.standardParams).toStrictEqual({
maxOutputTokens: 2000,
temperature: 0.8,
topP: 0.95,
topK: 50,
presencePenalty: 0.6,
frequencyPenalty: 0.4,
stopSequences: ['END', 'DONE'],
seed: 777
})
expect(result.providerParams).toStrictEqual({
customApiParam: 'value',
anotherCustomParam: 123
})
})
})
describe('Edge cases', () => {
it('should handle empty object', () => {
const customParams = {}
const result = extractAiSdkStandardParams(customParams)
expect(result.standardParams).toStrictEqual({})
expect(result.providerParams).toStrictEqual({})
})
it('should handle zero values for numeric parameters', () => {
const customParams = {
temperature: 0,
topK: 0,
seed: 0
}
const result = extractAiSdkStandardParams(customParams)
expect(result.standardParams).toStrictEqual({
temperature: 0,
topK: 0,
seed: 0
})
expect(result.providerParams).toStrictEqual({})
})
it('should handle negative values for numeric parameters', () => {
const customParams = {
presencePenalty: -0.5,
frequencyPenalty: -0.3,
seed: -1
}
const result = extractAiSdkStandardParams(customParams)
expect(result.standardParams).toStrictEqual({
presencePenalty: -0.5,
frequencyPenalty: -0.3,
seed: -1
})
expect(result.providerParams).toStrictEqual({})
})
it('should handle empty arrays for stopSequences', () => {
const customParams = {
stopSequences: []
}
const result = extractAiSdkStandardParams(customParams)
expect(result.standardParams).toStrictEqual({
stopSequences: []
})
expect(result.providerParams).toStrictEqual({})
})
it('should handle null values in mixed parameters', () => {
const customParams = {
temperature: 0.7,
customNull: null,
topK: 40
}
const result = extractAiSdkStandardParams(customParams)
expect(result.standardParams).toStrictEqual({
temperature: 0.7,
topK: 40
})
expect(result.providerParams).toStrictEqual({
customNull: null
})
})
it('should handle undefined values in mixed parameters', () => {
const customParams = {
temperature: 0.7,
customUndefined: undefined,
topK: 40
}
const result = extractAiSdkStandardParams(customParams)
expect(result.standardParams).toStrictEqual({
temperature: 0.7,
topK: 40
})
expect(result.providerParams).toStrictEqual({
customUndefined: undefined
})
})
it('should handle boolean values for standard parameters', () => {
const customParams = {
temperature: 0.7,
customBoolean: false,
topK: 40
}
const result = extractAiSdkStandardParams(customParams)
expect(result.standardParams).toStrictEqual({
temperature: 0.7,
topK: 40
})
expect(result.providerParams).toStrictEqual({
customBoolean: false
})
})
it('should handle very large numeric values', () => {
const customParams = {
maxOutputTokens: 999999,
seed: 2147483647,
topK: 10000
}
const result = extractAiSdkStandardParams(customParams)
expect(result.standardParams).toStrictEqual({
maxOutputTokens: 999999,
seed: 2147483647,
topK: 10000
})
expect(result.providerParams).toStrictEqual({})
})
it('should handle decimal values with high precision', () => {
const customParams = {
temperature: 0.123456789,
topP: 0.987654321,
presencePenalty: 0.111111111
}
const result = extractAiSdkStandardParams(customParams)
expect(result.standardParams).toStrictEqual({
temperature: 0.123456789,
topP: 0.987654321,
presencePenalty: 0.111111111
})
expect(result.providerParams).toStrictEqual({})
})
})
describe('Case sensitivity', () => {
it('should NOT extract parameters with incorrect case - uppercase first letter', () => {
const customParams = {
Temperature: 0.7,
TopK: 40,
FrequencyPenalty: 0.5
}
const result = extractAiSdkStandardParams(customParams)
expect(result.standardParams).toStrictEqual({})
expect(result.providerParams).toStrictEqual({
Temperature: 0.7,
TopK: 40,
FrequencyPenalty: 0.5
})
})
it('should NOT extract parameters with incorrect case - all uppercase', () => {
const customParams = {
TEMPERATURE: 0.7,
TOPK: 40,
SEED: 42
}
const result = extractAiSdkStandardParams(customParams)
expect(result.standardParams).toStrictEqual({})
expect(result.providerParams).toStrictEqual({
TEMPERATURE: 0.7,
TOPK: 40,
SEED: 42
})
})
it('should NOT extract parameters with incorrect case - all lowercase', () => {
const customParams = {
maxoutputtokens: 1000,
frequencypenalty: 0.5,
stopsequences: ['STOP']
}
const result = extractAiSdkStandardParams(customParams)
expect(result.standardParams).toStrictEqual({})
expect(result.providerParams).toStrictEqual({
maxoutputtokens: 1000,
frequencypenalty: 0.5,
stopsequences: ['STOP']
})
})
it('should correctly extract exact case match while rejecting incorrect case', () => {
const customParams = {
temperature: 0.7,
Temperature: 0.8,
TEMPERATURE: 0.9,
topK: 40,
TopK: 50
}
const result = extractAiSdkStandardParams(customParams)
expect(result.standardParams).toStrictEqual({
temperature: 0.7,
topK: 40
})
expect(result.providerParams).toStrictEqual({
Temperature: 0.8,
TEMPERATURE: 0.9,
TopK: 50
})
})
})
describe('Parameter name variations', () => {
it('should NOT extract similar but incorrect parameter names', () => {
const customParams = {
temp: 0.7, // should not match temperature
top_k: 40, // should not match topK
max_tokens: 1000, // should not match maxOutputTokens
freq_penalty: 0.5 // should not match frequencyPenalty
}
const result = extractAiSdkStandardParams(customParams)
expect(result.standardParams).toStrictEqual({})
expect(result.providerParams).toStrictEqual({
temp: 0.7,
top_k: 40,
max_tokens: 1000,
freq_penalty: 0.5
})
})
it('should NOT extract snake_case versions of standard parameters', () => {
const customParams = {
top_k: 40,
top_p: 0.9,
presence_penalty: 0.5,
frequency_penalty: 0.3,
stop_sequences: ['STOP'],
max_output_tokens: 1000
}
const result = extractAiSdkStandardParams(customParams)
expect(result.standardParams).toStrictEqual({})
expect(result.providerParams).toStrictEqual({
top_k: 40,
top_p: 0.9,
presence_penalty: 0.5,
frequency_penalty: 0.3,
stop_sequences: ['STOP'],
max_output_tokens: 1000
})
})
it('should extract exact camelCase parameters only', () => {
const customParams = {
topK: 40, // correct
top_k: 50, // incorrect
topP: 0.9, // correct
top_p: 0.8, // incorrect
frequencyPenalty: 0.5, // correct
frequency_penalty: 0.4 // incorrect
}
const result = extractAiSdkStandardParams(customParams)
expect(result.standardParams).toStrictEqual({
topK: 40,
topP: 0.9,
frequencyPenalty: 0.5
})
expect(result.providerParams).toStrictEqual({
top_k: 50,
top_p: 0.8,
frequency_penalty: 0.4
})
})
})
})
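The suite above pins down the partitioning contract of `extractAiSdkStandardParams`: exact, case-sensitive key matches go to `standardParams`, everything else to `providerParams`. A minimal standalone sketch of that logic (the key list here mirrors the parameters the tests treat as standard; the real implementation derives it from the `AiSdkParam` type via `isAiSdkParam`):

```typescript
// Sketch of the partitioning logic exercised by the tests above.
// The allow-list is an assumption for illustration; the real code
// checks keys with isAiSdkParam from @renderer/types/aiCoreTypes.
const AI_SDK_STANDARD_PARAMS = [
  'maxOutputTokens', 'temperature', 'topP', 'topK',
  'presencePenalty', 'frequencyPenalty', 'stopSequences', 'seed'
] as const

function extractAiSdkStandardParams(customParams: Record<string, unknown>): {
  standardParams: Record<string, unknown>
  providerParams: Record<string, unknown>
} {
  const standardParams: Record<string, unknown> = {}
  const providerParams: Record<string, unknown> = {}
  for (const [key, value] of Object.entries(customParams)) {
    // Exact, case-sensitive match only: 'Temperature' or 'top_k' fall
    // through to providerParams, as the case-sensitivity tests assert.
    if ((AI_SDK_STANDARD_PARAMS as readonly string[]).includes(key)) {
      standardParams[key] = value
    } else {
      providerParams[key] = value
    }
  }
  return { standardParams, providerParams }
}

const { standardParams, providerParams } = extractAiSdkStandardParams({
  temperature: 0.7,
  top_k: 50, // snake_case: not a standard key
  custom_param: 'value'
})
```

Note that values are copied through untouched — `null`, `undefined`, `0`, and empty arrays are preserved rather than filtered, which is what the edge-case tests verify.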

View File

@@ -128,7 +128,20 @@ vi.mock('../reasoning', () => ({
reasoningConfig: { type: 'enabled', budgetTokens: 5000 }
})),
getReasoningEffort: vi.fn(() => ({ reasoningEffort: 'medium' })),
getCustomParameters: vi.fn(() => ({}))
getCustomParameters: vi.fn(() => ({})),
extractAiSdkStandardParams: vi.fn((customParams: Record<string, any>) => {
const AI_SDK_STANDARD_PARAMS = ['topK', 'frequencyPenalty', 'presencePenalty', 'stopSequences', 'seed']
const standardParams: Record<string, any> = {}
const providerParams: Record<string, any> = {}
for (const [key, value] of Object.entries(customParams)) {
if (AI_SDK_STANDARD_PARAMS.includes(key)) {
standardParams[key] = value
} else {
providerParams[key] = value
}
}
return { standardParams, providerParams }
})
}))
vi.mock('../image', () => ({
@@ -184,8 +197,9 @@ describe('options utils', () => {
enableGenerateImage: false
})
expect(result).toHaveProperty('openai')
expect(result.openai).toBeDefined()
expect(result.providerOptions).toHaveProperty('openai')
expect(result.providerOptions.openai).toBeDefined()
expect(result.standardParams).toBeDefined()
})
it('should include reasoning parameters when enabled', () => {
@@ -195,8 +209,8 @@ describe('options utils', () => {
enableGenerateImage: false
})
expect(result.openai).toHaveProperty('reasoningEffort')
expect(result.openai.reasoningEffort).toBe('medium')
expect(result.providerOptions.openai).toHaveProperty('reasoningEffort')
expect(result.providerOptions.openai.reasoningEffort).toBe('medium')
})
it('should include service tier when supported', () => {
@@ -211,8 +225,8 @@ describe('options utils', () => {
enableGenerateImage: false
})
expect(result.openai).toHaveProperty('serviceTier')
expect(result.openai.serviceTier).toBe(OpenAIServiceTiers.auto)
expect(result.providerOptions.openai).toHaveProperty('serviceTier')
expect(result.providerOptions.openai.serviceTier).toBe(OpenAIServiceTiers.auto)
})
})
@@ -239,8 +253,8 @@ describe('options utils', () => {
enableGenerateImage: false
})
expect(result).toHaveProperty('anthropic')
expect(result.anthropic).toBeDefined()
expect(result.providerOptions).toHaveProperty('anthropic')
expect(result.providerOptions.anthropic).toBeDefined()
})
it('should include reasoning parameters when enabled', () => {
@@ -250,8 +264,8 @@ describe('options utils', () => {
enableGenerateImage: false
})
expect(result.anthropic).toHaveProperty('thinking')
expect(result.anthropic.thinking).toEqual({
expect(result.providerOptions.anthropic).toHaveProperty('thinking')
expect(result.providerOptions.anthropic.thinking).toEqual({
type: 'enabled',
budgetTokens: 5000
})
@@ -282,8 +296,8 @@ describe('options utils', () => {
enableGenerateImage: false
})
expect(result).toHaveProperty('google')
expect(result.google).toBeDefined()
expect(result.providerOptions).toHaveProperty('google')
expect(result.providerOptions.google).toBeDefined()
})
it('should include reasoning parameters when enabled', () => {
@@ -293,8 +307,8 @@ describe('options utils', () => {
enableGenerateImage: false
})
expect(result.google).toHaveProperty('thinkingConfig')
expect(result.google.thinkingConfig).toEqual({
expect(result.providerOptions.google).toHaveProperty('thinkingConfig')
expect(result.providerOptions.google.thinkingConfig).toEqual({
include_thoughts: true
})
})
@@ -306,8 +320,8 @@ describe('options utils', () => {
enableGenerateImage: true
})
expect(result.google).toHaveProperty('responseModalities')
expect(result.google.responseModalities).toEqual(['TEXT', 'IMAGE'])
expect(result.providerOptions.google).toHaveProperty('responseModalities')
expect(result.providerOptions.google.responseModalities).toEqual(['TEXT', 'IMAGE'])
})
})
@@ -335,8 +349,8 @@ describe('options utils', () => {
enableGenerateImage: false
})
expect(result).toHaveProperty('xai')
expect(result.xai).toBeDefined()
expect(result.providerOptions).toHaveProperty('xai')
expect(result.providerOptions.xai).toBeDefined()
})
it('should include reasoning parameters when enabled', () => {
@@ -346,8 +360,8 @@ describe('options utils', () => {
enableGenerateImage: false
})
expect(result.xai).toHaveProperty('reasoningEffort')
expect(result.xai.reasoningEffort).toBe('high')
expect(result.providerOptions.xai).toHaveProperty('reasoningEffort')
expect(result.providerOptions.xai.reasoningEffort).toBe('high')
})
})
@@ -374,8 +388,8 @@ describe('options utils', () => {
enableGenerateImage: false
})
expect(result).toHaveProperty('deepseek')
expect(result.deepseek).toBeDefined()
expect(result.providerOptions).toHaveProperty('deepseek')
expect(result.providerOptions.deepseek).toBeDefined()
})
})
@@ -402,8 +416,8 @@ describe('options utils', () => {
enableGenerateImage: false
})
expect(result).toHaveProperty('openrouter')
expect(result.openrouter).toBeDefined()
expect(result.providerOptions).toHaveProperty('openrouter')
expect(result.providerOptions.openrouter).toBeDefined()
})
it('should include web search parameters when enabled', () => {
@@ -413,12 +427,12 @@ describe('options utils', () => {
enableGenerateImage: false
})
expect(result.openrouter).toHaveProperty('enable_search')
expect(result.providerOptions.openrouter).toHaveProperty('enable_search')
})
})
describe('Custom parameters', () => {
it('should merge custom parameters', async () => {
it('should merge custom provider-specific parameters', async () => {
const { getCustomParameters } = await import('../reasoning')
vi.mocked(getCustomParameters).mockReturnValue({
@@ -443,10 +457,88 @@ describe('options utils', () => {
}
)
expect(result.openai).toHaveProperty('custom_param')
expect(result.openai.custom_param).toBe('custom_value')
expect(result.openai).toHaveProperty('another_param')
expect(result.openai.another_param).toBe(123)
expect(result.providerOptions.openai).toHaveProperty('custom_param')
expect(result.providerOptions.openai.custom_param).toBe('custom_value')
expect(result.providerOptions.openai).toHaveProperty('another_param')
expect(result.providerOptions.openai.another_param).toBe(123)
})
it('should extract AI SDK standard params from custom parameters', async () => {
const { getCustomParameters } = await import('../reasoning')
vi.mocked(getCustomParameters).mockReturnValue({
topK: 5,
frequencyPenalty: 0.5,
presencePenalty: 0.3,
seed: 42,
custom_param: 'custom_value'
})
const result = buildProviderOptions(
mockAssistant,
mockModel,
{
id: SystemProviderIds.gemini,
name: 'Google',
type: 'gemini',
apiKey: 'test-key',
apiHost: 'https://generativelanguage.googleapis.com'
} as Provider,
{
enableReasoning: false,
enableWebSearch: false,
enableGenerateImage: false
}
)
// Standard params should be extracted and returned separately
expect(result.standardParams).toEqual({
topK: 5,
frequencyPenalty: 0.5,
presencePenalty: 0.3,
seed: 42
})
// Provider-specific params should still be in providerOptions
expect(result.providerOptions.google).toHaveProperty('custom_param')
expect(result.providerOptions.google.custom_param).toBe('custom_value')
// Standard params should NOT be in providerOptions
expect(result.providerOptions.google).not.toHaveProperty('topK')
expect(result.providerOptions.google).not.toHaveProperty('frequencyPenalty')
expect(result.providerOptions.google).not.toHaveProperty('presencePenalty')
expect(result.providerOptions.google).not.toHaveProperty('seed')
})
it('should handle stopSequences in custom parameters', async () => {
const { getCustomParameters } = await import('../reasoning')
vi.mocked(getCustomParameters).mockReturnValue({
stopSequences: ['STOP', 'END'],
custom_param: 'value'
})
const result = buildProviderOptions(
mockAssistant,
mockModel,
{
id: SystemProviderIds.gemini,
name: 'Google',
type: 'gemini',
apiKey: 'test-key',
apiHost: 'https://generativelanguage.googleapis.com'
} as Provider,
{
enableReasoning: false,
enableWebSearch: false,
enableGenerateImage: false
}
)
expect(result.standardParams).toEqual({
stopSequences: ['STOP', 'END']
})
expect(result.providerOptions.google).not.toHaveProperty('stopSequences')
})
})
@@ -474,8 +566,8 @@ describe('options utils', () => {
enableGenerateImage: true
})
expect(result.google).toHaveProperty('thinkingConfig')
expect(result.google).toHaveProperty('responseModalities')
expect(result.providerOptions.google).toHaveProperty('thinkingConfig')
expect(result.providerOptions.google).toHaveProperty('responseModalities')
})
it('should handle all capabilities enabled', () => {
@@ -485,8 +577,8 @@ describe('options utils', () => {
enableGenerateImage: true
})
expect(result.google).toBeDefined()
expect(Object.keys(result.google).length).toBeGreaterThan(0)
expect(result.providerOptions.google).toBeDefined()
expect(Object.keys(result.providerOptions.google).length).toBeGreaterThan(0)
})
})
@@ -513,7 +605,7 @@ describe('options utils', () => {
enableGenerateImage: false
})
expect(result).toHaveProperty('google')
expect(result.providerOptions).toHaveProperty('google')
})
it('should map google-vertex-anthropic to anthropic', () => {
@@ -538,7 +630,7 @@ describe('options utils', () => {
enableGenerateImage: false
})
expect(result).toHaveProperty('anthropic')
expect(result.providerOptions).toHaveProperty('anthropic')
})
})
})
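The assertion migration above (from `result.openai` to `result.providerOptions.openai`) reflects `buildProviderOptions` now returning a two-part result instead of a bare provider map. A minimal sketch of the new shape, with placeholder values chosen for illustration:

```typescript
// Sketch of the return shape the migrated assertions target.
// Before: buildProviderOptions(...) -> { openai: { ... } }
// After:  buildProviderOptions(...) -> { providerOptions: { openai: { ... } },
//                                        standardParams: { topK, seed, ... } }
type BuildResult = {
  providerOptions: Record<string, Record<string, unknown>>
  standardParams: Record<string, unknown>
}

// Hypothetical result illustrating how callers now consume it:
// provider-specific options stay nested under the provider key, while
// AI SDK standard params are surfaced separately for streamText().
const result: BuildResult = {
  providerOptions: { openai: { reasoningEffort: 'medium', serviceTier: 'auto' } },
  standardParams: { topK: 5, seed: 42 }
}

const reasoningEffort = result.providerOptions.openai.reasoningEffort
const seed = result.standardParams.seed
```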

View File

@@ -0,0 +1,288 @@
import type { Assistant, Model, ReasoningEffortOption } from '@renderer/types'
import { SystemProviderIds } from '@renderer/types'
import { describe, expect, it, vi } from 'vitest'
import { getReasoningEffort } from '../reasoning'
// Mock logger
vi.mock('@logger', () => ({
loggerService: {
withContext: () => ({
warn: vi.fn(),
info: vi.fn(),
error: vi.fn()
})
}
}))
vi.mock('@renderer/store/settings', () => ({
default: {},
settingsSlice: {
name: 'settings',
reducer: vi.fn(),
actions: {}
}
}))
vi.mock('@renderer/store/assistants', () => {
const mockAssistantsSlice = {
name: 'assistants',
reducer: vi.fn((state = { entities: {}, ids: [] }) => state),
actions: {
updateTopicUpdatedAt: vi.fn(() => ({ type: 'UPDATE_TOPIC_UPDATED_AT' }))
}
}
return {
default: mockAssistantsSlice.reducer,
updateTopicUpdatedAt: vi.fn(() => ({ type: 'UPDATE_TOPIC_UPDATED_AT' })),
assistantsSlice: mockAssistantsSlice
}
})
// Mock provider service
vi.mock('@renderer/services/AssistantService', () => ({
getProviderByModel: (model: Model) => ({
id: model.provider,
name: 'Poe',
type: 'openai'
}),
getAssistantSettings: (assistant: Assistant) => assistant.settings || {}
}))
describe('Poe Provider Reasoning Support', () => {
const createPoeModel = (id: string): Model => ({
id,
name: id,
provider: SystemProviderIds.poe,
group: 'poe'
})
const createAssistant = (reasoning_effort?: ReasoningEffortOption, maxTokens?: number): Assistant => ({
id: 'test-assistant',
name: 'Test Assistant',
emoji: '🤖',
prompt: '',
topics: [],
messages: [],
type: 'assistant',
regularPhrases: [],
settings: {
reasoning_effort,
maxTokens
}
})
describe('GPT-5 Series Models', () => {
it('should return reasoning_effort in extra_body for GPT-5 model with low effort', () => {
const model = createPoeModel('gpt-5')
const assistant = createAssistant('low')
const result = getReasoningEffort(assistant, model)
expect(result).toEqual({
extra_body: {
reasoning_effort: 'low'
}
})
})
it('should return reasoning_effort in extra_body for GPT-5 model with medium effort', () => {
const model = createPoeModel('gpt-5')
const assistant = createAssistant('medium')
const result = getReasoningEffort(assistant, model)
expect(result).toEqual({
extra_body: {
reasoning_effort: 'medium'
}
})
})
it('should return reasoning_effort in extra_body for GPT-5 model with high effort', () => {
const model = createPoeModel('gpt-5')
const assistant = createAssistant('high')
const result = getReasoningEffort(assistant, model)
expect(result).toEqual({
extra_body: {
reasoning_effort: 'high'
}
})
})
it('should convert auto to medium for GPT-5 model in extra_body', () => {
const model = createPoeModel('gpt-5')
const assistant = createAssistant('auto')
const result = getReasoningEffort(assistant, model)
expect(result).toEqual({
extra_body: {
reasoning_effort: 'medium'
}
})
})
it('should return reasoning_effort in extra_body for GPT-5.1 model', () => {
const model = createPoeModel('gpt-5.1')
const assistant = createAssistant('medium')
const result = getReasoningEffort(assistant, model)
expect(result).toEqual({
extra_body: {
reasoning_effort: 'medium'
}
})
})
})
describe('Claude Models', () => {
it('should return thinking_budget in extra_body for Claude 3.7 Sonnet', () => {
const model = createPoeModel('claude-3.7-sonnet')
const assistant = createAssistant('medium', 4096)
const result = getReasoningEffort(assistant, model)
expect(result).toHaveProperty('extra_body')
expect(result.extra_body).toHaveProperty('thinking_budget')
expect(typeof result.extra_body?.thinking_budget).toBe('number')
expect(result.extra_body?.thinking_budget).toBeGreaterThan(0)
})
it('should return thinking_budget in extra_body for Claude Sonnet 4', () => {
const model = createPoeModel('claude-sonnet-4')
const assistant = createAssistant('high', 8192)
const result = getReasoningEffort(assistant, model)
expect(result).toHaveProperty('extra_body')
expect(result.extra_body).toHaveProperty('thinking_budget')
expect(typeof result.extra_body?.thinking_budget).toBe('number')
})
it('should calculate thinking_budget based on effort ratio and maxTokens', () => {
const model = createPoeModel('claude-3.7-sonnet')
const assistant = createAssistant('low', 4096)
const result = getReasoningEffort(assistant, model)
expect(result.extra_body?.thinking_budget).toBeGreaterThanOrEqual(1024)
})
})
describe('Gemini Models', () => {
it('should return thinking_budget in extra_body for Gemini 2.5 Flash', () => {
const model = createPoeModel('gemini-2.5-flash')
const assistant = createAssistant('medium')
const result = getReasoningEffort(assistant, model)
expect(result).toHaveProperty('extra_body')
expect(result.extra_body).toHaveProperty('thinking_budget')
expect(typeof result.extra_body?.thinking_budget).toBe('number')
})
it('should return thinking_budget in extra_body for Gemini 2.5 Pro', () => {
const model = createPoeModel('gemini-2.5-pro')
const assistant = createAssistant('high')
const result = getReasoningEffort(assistant, model)
expect(result).toHaveProperty('extra_body')
expect(result.extra_body).toHaveProperty('thinking_budget')
})
it('should use -1 for auto effort', () => {
const model = createPoeModel('gemini-2.5-flash')
const assistant = createAssistant('auto')
const result = getReasoningEffort(assistant, model)
expect(result.extra_body?.thinking_budget).toBe(-1)
})
it('should calculate thinking_budget for non-auto effort', () => {
const model = createPoeModel('gemini-2.5-flash')
const assistant = createAssistant('low')
const result = getReasoningEffort(assistant, model)
expect(typeof result.extra_body?.thinking_budget).toBe('number')
})
})
describe('No Reasoning Effort', () => {
it('should return empty object when reasoning_effort is not set', () => {
const model = createPoeModel('gpt-5')
const assistant = createAssistant(undefined)
const result = getReasoningEffort(assistant, model)
expect(result).toEqual({})
})
it('should return empty object when reasoning_effort is "none"', () => {
const model = createPoeModel('gpt-5')
const assistant = createAssistant('none')
const result = getReasoningEffort(assistant, model)
expect(result).toEqual({})
})
})
describe('Non-Reasoning Models', () => {
it('should return empty object for non-reasoning models', () => {
const model = createPoeModel('gpt-4')
const assistant = createAssistant('medium')
const result = getReasoningEffort(assistant, model)
expect(result).toEqual({})
})
})
describe('Edge Cases: Models Without Token Limit Configuration', () => {
it('should return empty object for Claude models without token limit configuration', () => {
const model = createPoeModel('claude-unknown-variant')
const assistant = createAssistant('medium', 4096)
const result = getReasoningEffort(assistant, model)
// Should return empty object when token limit is not found
expect(result).toEqual({})
expect(result.extra_body?.thinking_budget).toBeUndefined()
})
it('should return empty object for unmatched Poe reasoning models', () => {
// A hypothetical reasoning model that doesn't match GPT-5, Claude, or Gemini
const model = createPoeModel('some-reasoning-model')
// Make it appear as a reasoning model by giving it a name that won't match known categories
const assistant = createAssistant('medium')
const result = getReasoningEffort(assistant, model)
// Should return empty object for unmatched models
expect(result).toEqual({})
})
it('should fallback to -1 for Gemini models without token limit', () => {
// Use a Gemini model variant that won't match any token limit pattern
// The current regex patterns cover gemini-.*-flash.*$ and gemini-.*-pro.*$
// so we need a model that matches isSupportedThinkingTokenGeminiModel but not THINKING_TOKEN_MAP
const model = createPoeModel('gemini-2.5-flash')
const assistant = createAssistant('auto')
const result = getReasoningEffort(assistant, model)
// For 'auto' effort, should use -1
expect(result.extra_body?.thinking_budget).toBe(-1)
})
it('should enforce minimum 1024 token floor for Claude models', () => {
const model = createPoeModel('claude-3.7-sonnet')
// Use very small maxTokens to test the minimum floor
const assistant = createAssistant('low', 100)
const result = getReasoningEffort(assistant, model)
expect(result.extra_body?.thinking_budget).toBeGreaterThanOrEqual(1024)
})
it('should handle undefined maxTokens for Claude models', () => {
const model = createPoeModel('claude-3.7-sonnet')
const assistant = createAssistant('medium', undefined)
const result = getReasoningEffort(assistant, model)
expect(result).toHaveProperty('extra_body')
expect(result.extra_body).toHaveProperty('thinking_budget')
expect(typeof result.extra_body?.thinking_budget).toBe('number')
expect(result.extra_body?.thinking_budget).toBeGreaterThanOrEqual(1024)
})
})
})
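The Claude edge cases above hinge on one clamping formula: interpolate a budget within the model's token-limit range by effort ratio, then clamp it between a 1024-token floor and `maxTokens * ratio`. A standalone sketch under assumed constants (the real `EFFORT_RATIO`, `DEFAULT_MAX_TOKENS`, and token limits come from `@renderer/types` and `findTokenLimit`):

```typescript
// Sketch of the Poe/Claude thinking_budget calculation the tests cover.
// EFFORT_RATIO values and the token limit below are assumptions for
// illustration only.
const EFFORT_RATIO: Record<string, number> = { low: 0.2, medium: 0.5, high: 0.8 }
const DEFAULT_MAX_TOKENS = 4096

function poeClaudeThinkingBudget(
  effort: 'low' | 'medium' | 'high',
  tokenLimit: { min: number; max: number },
  maxTokens?: number
): number {
  const ratio = EFFORT_RATIO[effort]
  // Interpolate within the model's configured budget range...
  let budget = Math.floor((tokenLimit.max - tokenLimit.min) * ratio + tokenLimit.min)
  // ...then clamp: never above maxTokens * ratio, never below the 1024 floor.
  budget = Math.floor(Math.max(1024, Math.min(budget, (maxTokens ?? DEFAULT_MAX_TOKENS) * ratio)))
  return budget
}

// With a tiny maxTokens of 100, the cap (100 * 0.2 = 20) is overridden
// by the 1024 floor — the behavior the floor test asserts.
const budget = poeClaudeThinkingBudget('low', { min: 1024, max: 64000 }, 100)
```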

View File

@@ -14,6 +14,7 @@ import {
} from '@renderer/config/models'
import { mapLanguageToQwenMTModel } from '@renderer/config/translate'
import { getStoreSetting } from '@renderer/hooks/useSettings'
import { getProviderById } from '@renderer/services/ProviderService'
import {
type Assistant,
type GroqServiceTier,
@@ -30,8 +31,8 @@ import {
type Provider,
type ServiceTier
} from '@renderer/types'
import type { OpenAIVerbosity } from '@renderer/types/aiCoreTypes'
import { isSupportServiceTierProvider } from '@renderer/utils/provider'
import { type AiSdkParam, isAiSdkParam, type OpenAIVerbosity } from '@renderer/types/aiCoreTypes'
import { isSupportServiceTierProvider, isSupportVerbosityProvider } from '@renderer/utils/provider'
import type { JSONValue } from 'ai'
import { t } from 'i18next'
@@ -90,15 +91,56 @@ function getServiceTier<T extends Provider>(model: Model, provider: T): OpenAISe
}
}
function getVerbosity(): OpenAIVerbosity {
function getVerbosity(model: Model): OpenAIVerbosity {
if (!isSupportVerbosityModel(model) || !isSupportVerbosityProvider(getProviderById(model.provider)!)) {
return undefined
}
const openAI = getStoreSetting('openAI')
return openAI.verbosity
const userVerbosity = openAI.verbosity
if (userVerbosity) {
const supportedVerbosity = getModelSupportedVerbosity(model)
// Use user's verbosity if supported, otherwise use the first supported option
const verbosity = supportedVerbosity.includes(userVerbosity) ? userVerbosity : supportedVerbosity[0]
return verbosity
}
return undefined
}
/**
* Extract AI SDK standard parameters from custom parameters
* These parameters should be passed directly to streamText() instead of providerOptions
*/
export function extractAiSdkStandardParams(customParams: Record<string, any>): {
standardParams: Partial<Record<AiSdkParam, any>>
providerParams: Record<string, any>
} {
const standardParams: Partial<Record<AiSdkParam, any>> = {}
const providerParams: Record<string, any> = {}
for (const [key, value] of Object.entries(customParams)) {
if (isAiSdkParam(key)) {
standardParams[key] = value
} else {
providerParams[key] = value
}
}
return { standardParams, providerParams }
}
/**
* Build providerOptions for the AI SDK
* Split by provider type to preserve type safety
* Return format: { 'providerId': providerOptions }
* Return format: {
* providerOptions: { 'providerId': providerOptions },
* standardParams: { topK, frequencyPenalty, presencePenalty, stopSequences, seed }
* }
*
* Custom parameters are split into two categories:
* 1. AI SDK standard parameters (topK, frequencyPenalty, etc.) - returned separately to be passed to streamText()
* 2. Provider-specific parameters - merged into providerOptions
*/
export function buildProviderOptions(
assistant: Assistant,
@@ -109,13 +151,16 @@ export function buildProviderOptions(
enableWebSearch: boolean
enableGenerateImage: boolean
}
): Record<string, Record<string, JSONValue>> {
): {
providerOptions: Record<string, Record<string, JSONValue>>
standardParams: Partial<Record<AiSdkParam, any>>
} {
logger.debug('buildProviderOptions', { assistant, model, actualProvider, capabilities })
const rawProviderId = getAiSdkProviderId(actualProvider)
// Build provider-specific options
let providerSpecificOptions: Record<string, any> = {}
const serviceTier = getServiceTier(model, actualProvider)
const textVerbosity = getVerbosity()
const textVerbosity = getVerbosity(model)
// Branch the build logic by provider type
const { data: baseProviderId, success } = baseProviderIdSchema.safeParse(rawProviderId)
if (success) {
@@ -130,7 +175,8 @@ export function buildProviderOptions(
assistant,
model,
capabilities,
serviceTier
serviceTier,
textVerbosity
)
providerSpecificOptions = options
}
@@ -163,7 +209,8 @@ export function buildProviderOptions(
model,
capabilities,
actualProvider,
serviceTier
serviceTier,
textVerbosity
)
break
default:
@@ -201,10 +248,14 @@ export function buildProviderOptions(
}
}
// Merge custom parameters into the provider-specific options
// Fetch custom parameters and split them into standard and provider-specific params
const customParams = getCustomParameters(assistant)
const { standardParams, providerParams } = extractAiSdkStandardParams(customParams)
// Merge provider-specific custom parameters into providerSpecificOptions
providerSpecificOptions = {
...providerSpecificOptions,
...getCustomParameters(assistant)
...providerParams
}
let rawProviderKey =
@@ -212,16 +263,21 @@ export function buildProviderOptions(
'google-vertex': 'google',
'google-vertex-anthropic': 'anthropic',
'azure-anthropic': 'anthropic',
'ai-gateway': 'gateway'
'ai-gateway': 'gateway',
azure: 'openai',
'azure-responses': 'openai'
}[rawProviderId] || rawProviderId
if (rawProviderKey === 'cherryin') {
rawProviderKey = { gemini: 'google' }[actualProvider.type] || actualProvider.type
rawProviderKey = { gemini: 'google', ['openai-response']: 'openai' }[actualProvider.type] || actualProvider.type
}
// Return the format required by the AI Core SDK: { 'providerId': providerOptions }
// Return the format required by the AI Core SDK: { 'providerId': providerOptions }, plus the extracted standard params
return {
[rawProviderKey]: providerSpecificOptions
providerOptions: {
[rawProviderKey]: providerSpecificOptions
},
standardParams
}
}
@@ -236,7 +292,8 @@ function buildOpenAIProviderOptions(
enableWebSearch: boolean
enableGenerateImage: boolean
},
serviceTier: OpenAIServiceTier
serviceTier: OpenAIServiceTier,
textVerbosity?: OpenAIVerbosity
): OpenAIResponsesProviderOptions {
const { enableReasoning } = capabilities
let providerOptions: OpenAIResponsesProviderOptions = {}
@@ -248,8 +305,13 @@ function buildOpenAIProviderOptions(
...reasoningParams
}
}
const provider = getProviderById(model.provider)
if (isSupportVerbosityModel(model)) {
if (!provider) {
throw new Error(`Provider ${model.provider} not found`)
}
if (isSupportVerbosityModel(model) && isSupportVerbosityProvider(provider)) {
const openAI = getStoreSetting<'openAI'>('openAI')
const userVerbosity = openAI?.verbosity
@@ -267,7 +329,8 @@ function buildOpenAIProviderOptions(
providerOptions = {
...providerOptions,
serviceTier
serviceTier,
textVerbosity
}
return providerOptions
@@ -366,11 +429,13 @@ function buildCherryInProviderOptions(
enableGenerateImage: boolean
},
actualProvider: Provider,
serviceTier: OpenAIServiceTier
serviceTier: OpenAIServiceTier,
textVerbosity: OpenAIVerbosity
): OpenAIResponsesProviderOptions | AnthropicProviderOptions | GoogleGenerativeAIProviderOptions {
switch (actualProvider.type) {
case 'openai':
return buildOpenAIProviderOptions(assistant, model, capabilities, serviceTier)
case 'openai-response':
return buildOpenAIProviderOptions(assistant, model, capabilities, serviceTier, textVerbosity)
case 'anthropic':
return buildAnthropicProviderOptions(assistant, model, capabilities)

View File

@@ -12,7 +12,8 @@ import {
isDeepSeekHybridInferenceModel,
isDoubaoSeedAfter251015,
isDoubaoThinkingAutoModel,
isGemini3Model,
isGemini3ThinkingTokenModel,
isGPT5SeriesModel,
isGPT51SeriesModel,
isGrok4FastReasoningModel,
isGrokReasoningModel,
@@ -36,7 +37,7 @@ import {
} from '@renderer/config/models'
import { getStoreSetting } from '@renderer/hooks/useSettings'
import { getAssistantSettings, getProviderByModel } from '@renderer/services/AssistantService'
import type { Assistant, Model, ReasoningEffortOption } from '@renderer/types'
import type { Assistant, Model } from '@renderer/types'
import { EFFORT_RATIO, isSystemProvider, SystemProviderIds } from '@renderer/types'
import type { OpenAISummaryText } from '@renderer/types/aiCoreTypes'
import type { ReasoningEffortOptionalParams } from '@renderer/types/sdk'
@@ -142,6 +143,69 @@ export function getReasoningEffort(assistant: Assistant, model: Model): Reasonin
}
// Case where reasoningEffort is set and valid
// https://creator.poe.com/docs/external-applications/openai-compatible-api#additional-considerations
// Poe provider - supports custom bot parameters via extra_body
if (provider.id === SystemProviderIds.poe) {
// GPT-5 series models use reasoning_effort parameter in extra_body
if (isGPT5SeriesModel(model) || isGPT51SeriesModel(model)) {
return {
extra_body: {
reasoning_effort: reasoningEffort === 'auto' ? 'medium' : reasoningEffort
}
}
}
// Claude models use thinking_budget parameter in extra_body
if (isSupportedThinkingTokenClaudeModel(model)) {
const effortRatio = EFFORT_RATIO[reasoningEffort]
const tokenLimit = findTokenLimit(model.id)
const maxTokens = assistant.settings?.maxTokens
if (!tokenLimit) {
logger.warn(
`No token limit configuration found for Claude model "${model.id}" on Poe provider. ` +
`Reasoning effort setting "${reasoningEffort}" will not be applied.`
)
return {}
}
let budgetTokens = Math.floor((tokenLimit.max - tokenLimit.min) * effortRatio + tokenLimit.min)
budgetTokens = Math.floor(Math.max(1024, Math.min(budgetTokens, (maxTokens || DEFAULT_MAX_TOKENS) * effortRatio)))
return {
extra_body: {
thinking_budget: budgetTokens
}
}
}
// Gemini models use thinking_budget parameter in extra_body
if (isSupportedThinkingTokenGeminiModel(model)) {
const effortRatio = EFFORT_RATIO[reasoningEffort]
const tokenLimit = findTokenLimit(model.id)
let budgetTokens: number | undefined
if (tokenLimit && reasoningEffort !== 'auto') {
budgetTokens = Math.floor((tokenLimit.max - tokenLimit.min) * effortRatio + tokenLimit.min)
} else if (!tokenLimit && reasoningEffort !== 'auto') {
logger.warn(
`No token limit configuration found for Gemini model "${model.id}" on Poe provider. ` +
`Using auto (-1) instead of requested effort "${reasoningEffort}".`
)
}
return {
extra_body: {
thinking_budget: budgetTokens ?? -1
}
}
}
// Poe reasoning model not in known categories (GPT-5, Claude, Gemini)
logger.warn(
`Poe provider reasoning model "${model.id}" does not match known categories ` +
`(GPT-5, Claude, Gemini). Reasoning effort setting "${reasoningEffort}" will not be applied.`
)
return {}
}
// OpenRouter models
if (model.provider === SystemProviderIds.openrouter) {
@@ -281,7 +345,7 @@ export function getReasoningEffort(assistant: Assistant, model: Model): Reasonin
// gemini series, openai compatible api
if (isSupportedThinkingTokenGeminiModel(model)) {
// https://ai.google.dev/gemini-api/docs/gemini-3?thinking=high#openai_compatibility
if (isGemini3Model(model)) {
if (isGemini3ThinkingTokenModel(model)) {
return {
reasoning_effort: reasoningEffort
}
@@ -465,20 +529,20 @@ export function getAnthropicReasoningParams(
return {}
}
type GoogelThinkingLevel = NonNullable<GoogleGenerativeAIProviderOptions['thinkingConfig']>['thinkingLevel']
// type GoogleThinkingLevel = NonNullable<GoogleGenerativeAIProviderOptions['thinkingConfig']>['thinkingLevel']
function mapToGeminiThinkingLevel(reasoningEffort: ReasoningEffortOption): GoogelThinkingLevel {
switch (reasoningEffort) {
case 'low':
return 'low'
case 'medium':
return 'medium'
case 'high':
return 'high'
default:
return 'medium'
}
}
// function mapToGeminiThinkingLevel(reasoningEffort: ReasoningEffortOption): GoogelThinkingLevel {
// switch (reasoningEffort) {
// case 'low':
// return 'low'
// case 'medium':
// return 'medium'
// case 'high':
// return 'high'
// default:
// return 'medium'
// }
// }
/**
* Get Gemini reasoning parameters
@@ -507,14 +571,15 @@ export function getGeminiReasoningParams(
}
}
// TODO: many relay/proxy providers don't support this yet
// https://ai.google.dev/gemini-api/docs/gemini-3?thinking=high#new_api_features_in_gemini_3
if (isGemini3Model(model)) {
return {
thinkingConfig: {
thinkingLevel: mapToGeminiThinkingLevel(reasoningEffort)
}
}
}
// if (isGemini3ThinkingTokenModel(model)) {
// return {
// thinkingConfig: {
// thinkingLevel: mapToGeminiThinkingLevel(reasoningEffort)
// }
// }
// }
const effortRatio = EFFORT_RATIO[reasoningEffort]

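The thinking-budget arithmetic used in the Poe branch above (interpolate between a model's min/max budget by effort ratio, then clamp) can be sketched in isolation. This is a minimal sketch: the `EFFORT_RATIO` values and `DEFAULT_MAX_TOKENS` below are illustrative assumptions, not the app's actual constants.

```typescript
// Sketch of the Poe thinking-budget calculation. Interpolates between the
// model's min/max budget by effort ratio, then clamps to at least 1024 and
// at most maxTokens * ratio. Constants are assumed for illustration.
const EFFORT_RATIO: Record<string, number> = { low: 0.2, medium: 0.5, high: 0.8 }
const DEFAULT_MAX_TOKENS = 4096

function computeBudgetTokens(
  effort: 'low' | 'medium' | 'high',
  limit: { min: number; max: number },
  maxTokens?: number
): number {
  const ratio = EFFORT_RATIO[effort]
  // Linear interpolation between the configured min and max
  let budget = Math.floor((limit.max - limit.min) * ratio + limit.min)
  // Clamp: never below 1024, never above the effort-scaled max token budget
  budget = Math.floor(Math.max(1024, Math.min(budget, (maxTokens ?? DEFAULT_MAX_TOKENS) * ratio)))
  return budget
}

// e.g. a Claude-style limit of { min: 1024, max: 64000 } at medium effort
console.log(computeBudgetTokens('medium', { min: 1024, max: 64_000 }))
```

Note how the second clamp dominates when the assistant's `maxTokens` is small: the interpolated budget is cut down to `maxTokens * ratio`, which is why a large configured `maxTokens` is needed to reach the upper end of the model's thinking-budget range.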
View File

@@ -6,7 +6,7 @@ import { useEffect, useMemo, useRef, useState } from 'react'
import { useTranslation } from 'react-i18next'
import styled, { css } from 'styled-components'
interface SelectorOption<V = string | number | undefined | null> {
interface SelectorOption<V = string | number> {
label: string | ReactNode
value: V
type?: 'group'
@@ -14,7 +14,7 @@ interface SelectorOption<V = string | number | undefined | null> {
disabled?: boolean
}
interface BaseSelectorProps<V = string | number | undefined | null> {
interface BaseSelectorProps<V = string | number> {
options: SelectorOption<V>[]
placeholder?: string
placement?: 'topLeft' | 'topCenter' | 'topRight' | 'bottomLeft' | 'bottomCenter' | 'bottomRight' | 'top' | 'bottom'
@@ -39,7 +39,7 @@ interface MultipleSelectorProps<V> extends BaseSelectorProps<V> {
export type SelectorProps<V> = SingleSelectorProps<V> | MultipleSelectorProps<V>
const Selector = <V extends string | number | undefined | null>({
const Selector = <V extends string | number>({
options,
value,
onChange = () => {},

View File

@@ -33,6 +33,7 @@ import {
MODEL_SUPPORTED_OPTIONS,
MODEL_SUPPORTED_REASONING_EFFORT
} from '../reasoning'
import { isGemini3ThinkingTokenModel } from '../utils'
import { isTextToImageModel } from '../vision'
vi.mock('@renderer/store', () => ({
@@ -799,20 +800,6 @@ describe('getThinkModelType - Comprehensive Coverage', () => {
})
})
describe('Token limit lookup', () => {
it.each([
['gemini-2.5-flash-lite-latest', { min: 512, max: 24576 }],
['qwen-plus-2025-07-14', { min: 0, max: 38912 }],
['claude-haiku-4', { min: 1024, max: 64000 }]
])('returns configured min/max pairs for %s', (id, expected) => {
expect(findTokenLimit(id)).toEqual(expected)
})
it('returns undefined when regex misses', () => {
expect(findTokenLimit('unknown-model')).toBeUndefined()
})
})
describe('Gemini Models', () => {
describe('isSupportedThinkingTokenGeminiModel', () => {
it('should return true for gemini 2.5 models', () => {
@@ -955,7 +942,7 @@ describe('Gemini Models', () => {
provider: '',
group: ''
})
).toBe(true)
).toBe(false)
expect(
isSupportedThinkingTokenGeminiModel({
id: 'gemini-3.0-flash-image-preview',
@@ -963,7 +950,7 @@ describe('Gemini Models', () => {
provider: '',
group: ''
})
).toBe(true)
).toBe(false)
expect(
isSupportedThinkingTokenGeminiModel({
id: 'gemini-3.5-pro-image-preview',
@@ -971,7 +958,7 @@ describe('Gemini Models', () => {
provider: '',
group: ''
})
).toBe(true)
).toBe(false)
})
it('should return false for gemini-2.x image models', () => {
@@ -1163,7 +1150,7 @@ describe('Gemini Models', () => {
provider: '',
group: ''
})
).toBe(true)
).toBe(false)
expect(
isGeminiReasoningModel({
id: 'gemini-3.5-flash-image-preview',
@@ -1171,7 +1158,7 @@ describe('Gemini Models', () => {
provider: '',
group: ''
})
).toBe(true)
).toBe(false)
})
it('should return false for older gemini models without thinking', () => {
@@ -1200,6 +1187,19 @@ describe('Gemini Models', () => {
})
describe('findTokenLimit', () => {
describe('General token limit lookup', () => {
it.each([
['gemini-2.5-flash-lite-latest', { min: 512, max: 24576 }],
['qwen-plus-2025-07-14', { min: 0, max: 38912 }]
])('returns configured min/max pairs for %s', (id, expected) => {
expect(findTokenLimit(id)).toEqual(expected)
})
it('returns undefined when regex misses', () => {
expect(findTokenLimit('unknown-model')).toBeUndefined()
})
})
const cases: Array<{ modelId: string; expected: { min: number; max: number } }> = [
{ modelId: 'gemini-2.5-flash-lite-exp', expected: { min: 512, max: 24_576 } },
{ modelId: 'gemini-1.5-flash', expected: { min: 0, max: 24_576 } },
@@ -1215,11 +1215,7 @@ describe('findTokenLimit', () => {
{ modelId: 'qwen-plus-ultra', expected: { min: 0, max: 81_920 } },
{ modelId: 'qwen-turbo-pro', expected: { min: 0, max: 38_912 } },
{ modelId: 'qwen-flash-lite', expected: { min: 0, max: 81_920 } },
{ modelId: 'qwen3-7b', expected: { min: 1_024, max: 38_912 } },
{ modelId: 'claude-3.7-sonnet-extended', expected: { min: 1_024, max: 64_000 } },
{ modelId: 'claude-sonnet-4.1', expected: { min: 1_024, max: 64_000 } },
{ modelId: 'claude-sonnet-4-5-20250929', expected: { min: 1_024, max: 64_000 } },
{ modelId: 'claude-opus-4-1-extended', expected: { min: 1_024, max: 32_000 } }
{ modelId: 'qwen3-7b', expected: { min: 1_024, max: 38_912 } }
]
it.each(cases)('returns correct limits for $modelId', ({ modelId, expected }) => {
@@ -1229,4 +1225,355 @@ describe('findTokenLimit', () => {
it('returns undefined for unknown models', () => {
expect(findTokenLimit('unknown-model')).toBeUndefined()
})
describe('Claude models', () => {
describe('Claude 3.7 Sonnet models', () => {
it.each([
'claude-3.7-sonnet',
'claude-3-7-sonnet',
'claude-3.7-sonnet-latest',
'claude-3-7-sonnet-latest',
'claude-3.7-sonnet-20250201',
'claude-3-7-sonnet-20250201',
// Official Claude API IDs
'claude-3-7-sonnet-20250219',
// AWS Bedrock format
'anthropic.claude-3-7-sonnet-20250219-v1:0',
// GCP Vertex AI format
'claude-3-7-sonnet@20250219'
])('should return { min: 1024, max: 64000 } for %s', (modelId) => {
expect(findTokenLimit(modelId)).toEqual({ min: 1024, max: 64_000 })
})
it.each(['CLAUDE-3.7-SONNET', 'Claude-3-7-Sonnet-Latest'])('should be case insensitive for %s', (modelId) => {
expect(findTokenLimit(modelId)).toEqual({ min: 1024, max: 64_000 })
})
})
describe('Claude 4.0 series models', () => {
it.each([
'claude-sonnet-4',
'claude-sonnet-4.0',
'claude-sonnet-4-0',
'claude-sonnet-4-preview',
'claude-sonnet-4.0-preview',
'claude-sonnet-4-20250101',
// Official Claude API IDs
'claude-sonnet-4-20250514',
// AWS Bedrock format
'anthropic.claude-sonnet-4-20250514-v1:0',
// GCP Vertex AI format
'claude-sonnet-4@20250514'
])('should return { min: 1024, max: 64000 } for Sonnet variant %s', (modelId) => {
expect(findTokenLimit(modelId)).toEqual({ min: 1024, max: 64_000 })
})
it.each([
'claude-opus-4',
'claude-opus-4.0',
'claude-opus-4-0',
'claude-opus-4-preview',
'claude-opus-4.0-preview',
'claude-opus-4-20250101',
// Official Claude API IDs
'claude-opus-4-20250514',
// AWS Bedrock format
'anthropic.claude-opus-4-20250514-v1:0',
// GCP Vertex AI format
'claude-opus-4@20250514'
])('should return { min: 1024, max: 32000 } for Opus variant %s', (modelId) => {
expect(findTokenLimit(modelId)).toEqual({ min: 1024, max: 32_000 })
})
it.each(['CLAUDE-SONNET-4', 'Claude-Opus-4-Preview'])('should be case insensitive for %s', (modelId) => {
const expectedSonnet = { min: 1024, max: 64_000 }
const expectedOpus = { min: 1024, max: 32_000 }
const result = findTokenLimit(modelId)
expect(result).toBeDefined()
expect([expectedSonnet, expectedOpus]).toContainEqual(result)
})
})
describe('Claude Opus 4.1 models', () => {
it.each([
'claude-opus-4.1',
'claude-opus-4-1',
'claude-opus-4.1-preview',
'claude-opus-4-1-preview',
'claude-opus-4.1-20250120',
'claude-opus-4-1-20250120',
// Official Claude API IDs
'claude-opus-4-1-20250805',
// AWS Bedrock format
'anthropic.claude-opus-4-1-20250805-v1:0',
// GCP Vertex AI format
'claude-opus-4-1@20250805'
])('should return { min: 1024, max: 32000 } for %s', (modelId) => {
expect(findTokenLimit(modelId)).toEqual({ min: 1024, max: 32_000 })
})
it.each(['CLAUDE-OPUS-4.1', 'Claude-Opus-4-1-Preview'])('should be case insensitive for %s', (modelId) => {
expect(findTokenLimit(modelId)).toEqual({ min: 1024, max: 32_000 })
})
})
describe('Claude 4.5 series models (Haiku, Sonnet, Opus)', () => {
it.each([
'claude-haiku-4.5',
'claude-haiku-4-5',
'claude-haiku-4.5-preview',
'claude-haiku-4-5-preview',
'claude-haiku-4.5-20250929',
'claude-haiku-4-5-20250929',
// Official Claude API IDs
'claude-haiku-4-5-20251001',
// AWS Bedrock format
'anthropic.claude-haiku-4-5-20251001-v1:0',
// GCP Vertex AI format
'claude-haiku-4-5@20251001'
])('should return { min: 1024, max: 64000 } for Haiku variant %s', (modelId) => {
expect(findTokenLimit(modelId)).toEqual({ min: 1024, max: 64_000 })
})
it.each([
'claude-sonnet-4.5',
'claude-sonnet-4-5',
'claude-sonnet-4.5-preview',
'claude-sonnet-4-5-preview',
'claude-sonnet-4.5-20250929',
'claude-sonnet-4-5-20250929',
// Official Claude API IDs
'claude-sonnet-4-5-20250929',
// AWS Bedrock format
'anthropic.claude-sonnet-4-5-20250929-v1:0',
// GCP Vertex AI format
'claude-sonnet-4-5@20250929'
])('should return { min: 1024, max: 64000 } for Sonnet variant %s', (modelId) => {
expect(findTokenLimit(modelId)).toEqual({ min: 1024, max: 64_000 })
})
it.each([
'claude-opus-4.5',
'claude-opus-4-5',
'claude-opus-4.5-preview',
'claude-opus-4-5-preview',
'claude-opus-4.5-20250929',
'claude-opus-4-5-20250929',
// Official Claude API IDs
'claude-opus-4-5-20251101',
// AWS Bedrock format
'anthropic.claude-opus-4-5-20251101-v1:0',
// GCP Vertex AI format
'claude-opus-4-5@20251101'
])('should return { min: 1024, max: 64000 } for Opus variant %s', (modelId) => {
expect(findTokenLimit(modelId)).toEqual({ min: 1024, max: 64_000 })
})
it.each(['CLAUDE-HAIKU-4.5', 'Claude-Sonnet-4-5-Preview', 'CLAUDE-OPUS-4.5-20250929'])(
'should be case insensitive for %s',
(modelId) => {
expect(findTokenLimit(modelId)).toEqual({ min: 1024, max: 64_000 })
}
)
})
describe('Claude models that should NOT match', () => {
it.each([
'claude-3-opus',
'claude-3-sonnet',
'claude-3-haiku',
'claude-3.5-sonnet',
'claude-3-5-sonnet',
'claude-2.1',
'claude-instant',
'claude-haiku-4',
'claude-haiku-4.0',
'claude-haiku-4-0',
'claude-opus-4.2',
'claude-opus-4-2',
'claude-sonnet-4.2',
'claude-sonnet-4-2',
// Old Haiku models (no Extended thinking support)
'claude-3-5-haiku-20241022',
'claude-3-5-haiku-latest',
'anthropic.claude-3-5-haiku-20241022-v1:0',
'claude-3-5-haiku@20241022',
'claude-3-haiku-20240307',
'anthropic.claude-3-haiku-20240307-v1:0',
'claude-3-haiku@20240307'
])('should return undefined for older/unsupported model %s', (modelId) => {
expect(findTokenLimit(modelId)).toBeUndefined()
})
})
describe('Edge cases', () => {
it('should handle models with custom suffixes', () => {
expect(findTokenLimit('claude-3.7-sonnet-custom-variant')).toEqual({ min: 1024, max: 64_000 })
expect(findTokenLimit('claude-opus-4.1-custom')).toEqual({ min: 1024, max: 32_000 })
expect(findTokenLimit('claude-sonnet-4.5-custom-variant')).toEqual({ min: 1024, max: 64_000 })
})
it('should NOT match non-existent Claude 4.1 variants (only Opus 4.1 exists)', () => {
// Claude Sonnet 4.1 and Haiku 4.1 do not exist
expect(findTokenLimit('claude-sonnet-4.1')).toBeUndefined()
expect(findTokenLimit('claude-haiku-4.1')).toBeUndefined()
})
it('should not match partial model names', () => {
expect(findTokenLimit('claude-3.7')).toBeUndefined()
expect(findTokenLimit('claude-opus')).toBeUndefined()
expect(findTokenLimit('claude-4.5')).toBeUndefined()
})
})
})
})
describe('isGemini3ThinkingTokenModel', () => {
it('should return true for Gemini 3 non-image models', () => {
expect(
isGemini3ThinkingTokenModel({
id: 'gemini-3-flash',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isGemini3ThinkingTokenModel({
id: 'gemini-3-pro',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isGemini3ThinkingTokenModel({
id: 'gemini-3-pro-preview',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isGemini3ThinkingTokenModel({
id: 'google/gemini-3-flash',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isGemini3ThinkingTokenModel({
id: 'gemini-3.0-flash',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isGemini3ThinkingTokenModel({
id: 'gemini-3.5-pro-preview',
name: '',
provider: '',
group: ''
})
).toBe(true)
})
it('should return false for Gemini 3 image models', () => {
expect(
isGemini3ThinkingTokenModel({
id: 'gemini-3-flash-image',
name: '',
provider: '',
group: ''
})
).toBe(false)
expect(
isGemini3ThinkingTokenModel({
id: 'gemini-3-pro-image-preview',
name: '',
provider: '',
group: ''
})
).toBe(false)
expect(
isGemini3ThinkingTokenModel({
id: 'gemini-3.0-flash-image-preview',
name: '',
provider: '',
group: ''
})
).toBe(false)
expect(
isGemini3ThinkingTokenModel({
id: 'gemini-3.5-pro-image-preview',
name: '',
provider: '',
group: ''
})
).toBe(false)
})
it('should return false for non-Gemini 3 models', () => {
expect(
isGemini3ThinkingTokenModel({
id: 'gemini-2.5-flash',
name: '',
provider: '',
group: ''
})
).toBe(false)
expect(
isGemini3ThinkingTokenModel({
id: 'gemini-1.5-pro',
name: '',
provider: '',
group: ''
})
).toBe(false)
expect(
isGemini3ThinkingTokenModel({
id: 'gpt-4',
name: '',
provider: '',
group: ''
})
).toBe(false)
expect(
isGemini3ThinkingTokenModel({
id: 'claude-3-opus',
name: '',
provider: '',
group: ''
})
).toBe(false)
})
it('should handle case insensitivity', () => {
expect(
isGemini3ThinkingTokenModel({
id: 'Gemini-3-Flash',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isGemini3ThinkingTokenModel({
id: 'GEMINI-3-PRO',
name: '',
provider: '',
group: ''
})
).toBe(true)
expect(
isGemini3ThinkingTokenModel({
id: 'Gemini-3-Pro-Image',
name: '',
provider: '',
group: ''
})
).toBe(false)
})
})

View File

@@ -24,9 +24,9 @@ import {
isGemmaModel,
isGenerateImageModels,
isMaxTemperatureOneModel,
isNotSupportedTextDelta,
isNotSupportSystemMessageModel,
isNotSupportTemperatureAndTopP,
isNotSupportTextDeltaModel,
isSupportedFlexServiceTier,
isSupportedModel,
isSupportFlexServiceTierModel,
@@ -215,12 +215,51 @@ describe('model utils', () => {
it('aggregates boolean helpers based on regex rules', () => {
expect(isAnthropicModel(createModel({ id: 'claude-3.5' }))).toBe(true)
expect(isQwenMTModel(createModel({ id: 'qwen-mt-large' }))).toBe(true)
expect(isNotSupportedTextDelta(createModel({ id: 'qwen-mt-large' }))).toBe(true)
expect(isQwenMTModel(createModel({ id: 'qwen-mt-plus' }))).toBe(true)
expect(isNotSupportSystemMessageModel(createModel({ id: 'gemma-moe' }))).toBe(true)
expect(isOpenAIOpenWeightModel(createModel({ id: 'gpt-oss-free' }))).toBe(true)
})
describe('isNotSupportedTextDelta', () => {
it('returns true for qwen-mt-turbo and qwen-mt-plus models', () => {
// qwen-mt series that don't support text delta
expect(isNotSupportTextDeltaModel(createModel({ id: 'qwen-mt-turbo' }))).toBe(true)
expect(isNotSupportTextDeltaModel(createModel({ id: 'qwen-mt-plus' }))).toBe(true)
expect(isNotSupportTextDeltaModel(createModel({ id: 'Qwen-MT-Turbo' }))).toBe(true)
expect(isNotSupportTextDeltaModel(createModel({ id: 'QWEN-MT-PLUS' }))).toBe(true)
})
it('returns false for qwen-mt-flash and other models', () => {
// qwen-mt-flash supports text delta
expect(isNotSupportTextDeltaModel(createModel({ id: 'qwen-mt-flash' }))).toBe(false)
expect(isNotSupportTextDeltaModel(createModel({ id: 'Qwen-MT-Flash' }))).toBe(false)
// Legacy qwen models without mt prefix (support text delta)
expect(isNotSupportTextDeltaModel(createModel({ id: 'qwen-turbo' }))).toBe(false)
expect(isNotSupportTextDeltaModel(createModel({ id: 'qwen-plus' }))).toBe(false)
// Other qwen models
expect(isNotSupportTextDeltaModel(createModel({ id: 'qwen-max' }))).toBe(false)
expect(isNotSupportTextDeltaModel(createModel({ id: 'qwen2.5-72b' }))).toBe(false)
expect(isNotSupportTextDeltaModel(createModel({ id: 'qwen-vl-plus' }))).toBe(false)
// Non-qwen models
expect(isNotSupportTextDeltaModel(createModel({ id: 'gpt-4o' }))).toBe(false)
expect(isNotSupportTextDeltaModel(createModel({ id: 'claude-3.5' }))).toBe(false)
expect(isNotSupportTextDeltaModel(createModel({ id: 'glm-4-plus' }))).toBe(false)
})
it('handles models with version suffixes', () => {
// qwen-mt models with version suffixes
expect(isNotSupportTextDeltaModel(createModel({ id: 'qwen-mt-turbo-1201' }))).toBe(true)
expect(isNotSupportTextDeltaModel(createModel({ id: 'qwen-mt-plus-0828' }))).toBe(true)
// Legacy qwen models with version suffixes (support text delta)
expect(isNotSupportTextDeltaModel(createModel({ id: 'qwen-turbo-0828' }))).toBe(false)
expect(isNotSupportTextDeltaModel(createModel({ id: 'qwen-plus-latest' }))).toBe(false)
})
})
it('evaluates GPT-5 family helpers', () => {
expect(isGPT5SeriesModel(createModel({ id: 'gpt-5-preview' }))).toBe(true)
expect(isGPT5SeriesModel(createModel({ id: 'gpt-5.1-preview' }))).toBe(false)

View File

@@ -16,7 +16,7 @@ import {
isOpenAIReasoningModel,
isSupportedReasoningEffortOpenAIModel
} from './openai'
import { GEMINI_FLASH_MODEL_REGEX, isGemini3Model } from './utils'
import { GEMINI_FLASH_MODEL_REGEX, isGemini3ThinkingTokenModel } from './utils'
import { isTextToImageModel } from './vision'
// Reasoning models
@@ -115,7 +115,7 @@ const _getThinkModelType = (model: Model): ThinkingModelType => {
} else {
thinkingModelType = 'gemini_pro'
}
if (isGemini3Model(model)) {
if (isGemini3ThinkingTokenModel(model)) {
thinkingModelType = 'gemini3'
}
} else if (isSupportedReasoningEffortGrokModel(model)) thinkingModelType = 'grok'
@@ -201,7 +201,7 @@ export function isSupportedReasoningEffortGrokModel(model?: Model): boolean {
}
const modelId = getLowerBaseModelName(model.id)
const providerId = model.provider.toLowerCase()
const providerId = model?.provider?.toLowerCase()
if (modelId.includes('grok-3-mini')) {
return true
}
@@ -271,14 +271,6 @@ export const GEMINI_THINKING_MODEL_REGEX =
export const isSupportedThinkingTokenGeminiModel = (model: Model): boolean => {
const modelId = getLowerBaseModelName(model.id, '/')
if (GEMINI_THINKING_MODEL_REGEX.test(modelId)) {
// gemini-3.x image models support thinking mode
if (isGemini3Model(model)) {
if (modelId.includes('tts')) {
return false
}
return true
}
// gemini-2.x image/tts models do not support it
if (modelId.includes('image') || modelId.includes('tts')) {
return false
}
@@ -555,7 +547,7 @@ export function isReasoningModel(model?: Model): boolean {
return REASONING_REGEX.test(modelId) || false
}
export const THINKING_TOKEN_MAP: Record<string, { min: number; max: number }> = {
const THINKING_TOKEN_MAP: Record<string, { min: number; max: number }> = {
// Gemini models
'gemini-2\\.5-flash-lite.*$': { min: 512, max: 24576 },
'gemini-.*-flash.*$': { min: 0, max: 24576 },
@@ -576,10 +568,18 @@ export const THINKING_TOKEN_MAP: Record<string, { min: number; max: number }> =
'qwen-flash.*$': { min: 0, max: 81_920 },
'qwen3-(?!max).*$': { min: 1024, max: 38_912 },
// Claude models
'claude-3[.-]7.*sonnet.*$': { min: 1024, max: 64_000 },
'claude-(:?haiku|sonnet)-4.*$': { min: 1024, max: 64_000 },
'claude-opus-4-1.*$': { min: 1024, max: 32_000 }
// Claude models (supports AWS Bedrock 'anthropic.' prefix, GCP Vertex AI '@' separator, and '-v1:0' suffix)
'(?:anthropic\\.)?claude-3[.-]7.*sonnet.*(?:-v\\d+:\\d+)?$': { min: 1024, max: 64_000 },
'(?:anthropic\\.)?claude-(?:haiku|sonnet|opus)-4[.-]5.*(?:-v\\d+:\\d+)?$': { min: 1024, max: 64_000 },
'(?:anthropic\\.)?claude-opus-4[.-]1.*(?:-v\\d+:\\d+)?$': { min: 1024, max: 32_000 },
'(?:anthropic\\.)?claude-sonnet-4(?:[.-]0)?(?:[@-](?:\\d{4,}|[a-z][\\w-]*))?(?:-v\\d+:\\d+)?$': {
min: 1024,
max: 64_000
},
'(?:anthropic\\.)?claude-opus-4(?:[.-]0)?(?:[@-](?:\\d{4,}|[a-z][\\w-]*))?(?:-v\\d+:\\d+)?$': {
min: 1024,
max: 32_000
}
}
export const findTokenLimit = (modelId: string): { min: number; max: number } | undefined => {

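The lookup over `THINKING_TOKEN_MAP` above is a first-match scan of a regex-keyed table. A self-contained sketch of that pattern, with a trimmed-down map (the two entries and the case-insensitive flag here are illustrative; the real code lowercases ids via `getLowerBaseModelName` before matching):

```typescript
// First-match lookup over a regex-keyed limit table, mirroring the
// THINKING_TOKEN_MAP / findTokenLimit pattern (map trimmed for illustration).
const LIMITS: Record<string, { min: number; max: number }> = {
  '(?:anthropic\\.)?claude-3[.-]7.*sonnet.*(?:-v\\d+:\\d+)?$': { min: 1024, max: 64_000 },
  '(?:anthropic\\.)?claude-opus-4[.-]1.*(?:-v\\d+:\\d+)?$': { min: 1024, max: 32_000 }
}

function findLimit(modelId: string): { min: number; max: number } | undefined {
  // Entries are tried in insertion order; the first matching pattern wins,
  // so more specific patterns must come before broader ones.
  for (const [pattern, limit] of Object.entries(LIMITS)) {
    if (new RegExp(pattern, 'i').test(modelId)) return limit
  }
  return undefined
}

// Bedrock-style id matches the same entry as the plain Claude id
console.log(findLimit('anthropic.claude-3-7-sonnet-20250219-v1:0'))
console.log(findLimit('unknown-model'))
```

This also explains the test expectations in the suite above: Bedrock (`anthropic.` prefix, `-v1:0` suffix) and Vertex (`@date`) ids fall through to the same `{ min, max }` pair because the patterns tolerate those affixes.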
View File

@@ -43,7 +43,8 @@ const FUNCTION_CALLING_EXCLUDED_MODELS = [
'gpt-5-chat(?:-[\\w-]+)?',
'glm-4\\.5v',
'gemini-2.5-flash-image(?:-[\\w-]+)?',
'gemini-2.0-flash-preview-image-generation'
'gemini-2.0-flash-preview-image-generation',
'gemini-3(?:\\.\\d+)?-pro-image(?:-[\\w-]+)?'
]
export const FUNCTION_CALLING_REGEX = new RegExp(

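The exclusion list above is joined into a single `FUNCTION_CALLING_REGEX`. A sketch of that construction, with a trimmed fragment list — the anchoring and `i` flag are assumptions about how the combined regex is built, not a copy of the real constructor:

```typescript
// Joining per-model pattern fragments into one exclusion regex, as the
// FUNCTION_CALLING_EXCLUDED_MODELS list is used above (fragments trimmed).
const EXCLUDED = [
  'gemini-2\\.0-flash-preview-image-generation',
  'gemini-3(?:\\.\\d+)?-pro-image(?:-[\\w-]+)?'
]
// Alternation of all fragments, anchored so a fragment must match the whole id
const EXCLUDED_REGEX = new RegExp(`^(?:${EXCLUDED.join('|')})$`, 'i')

console.log(EXCLUDED_REGEX.test('gemini-3-pro-image-preview')) // true: excluded
console.log(EXCLUDED_REGEX.test('gemini-3-pro'))               // false: not excluded
```

The optional `(?:\.\d+)?` segment is what lets the new entry cover `gemini-3-pro-image`, `gemini-3.0-pro-image`, and dated `-preview` variants with one fragment.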
View File

@@ -19,6 +19,7 @@ export function isSupportFlexServiceTierModel(model: Model): boolean {
(modelId.includes('o3') && !modelId.includes('o3-mini')) || modelId.includes('o4-mini') || modelId.includes('gpt-5')
)
}
export function isSupportedFlexServiceTier(model: Model): boolean {
return isSupportFlexServiceTierModel(model)
}
@@ -111,8 +112,11 @@ export const isAnthropicModel = (model?: Model): boolean => {
return modelId.startsWith('claude')
}
export const isNotSupportedTextDelta = (model: Model): boolean => {
return isQwenMTModel(model)
const NOT_SUPPORT_TEXT_DELTA_MODEL_REGEX = new RegExp('qwen-mt-(?:turbo|plus)')
export const isNotSupportTextDeltaModel = (model: Model): boolean => {
const modelId = getLowerBaseModelName(model.id)
return NOT_SUPPORT_TEXT_DELTA_MODEL_REGEX.test(modelId)
}
export const isNotSupportSystemMessageModel = (model: Model): boolean => {
@@ -160,3 +164,8 @@ export const isGemini3Model = (model: Model) => {
const modelId = getLowerBaseModelName(model.id)
return modelId.includes('gemini-3')
}
export const isGemini3ThinkingTokenModel = (model: Model) => {
const modelId = getLowerBaseModelName(model.id)
return isGemini3Model(model) && !modelId.includes('image')
}
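The new predicate composes the existing `isGemini3Model` check with an image-model exclusion. A minimal sketch of that composition (here `getLowerBaseModelName` is approximated by `toLowerCase()`, which is an assumption):

```typescript
// Sketch of the predicate composition above: a Gemini 3 model counts as a
// "thinking token" model only when its id does not contain "image".
interface Model { id: string }

const isGemini3Model = (m: Model) => m.id.toLowerCase().includes('gemini-3')

const isGemini3ThinkingTokenModel = (m: Model) =>
  isGemini3Model(m) && !m.id.toLowerCase().includes('image')

console.log(isGemini3ThinkingTokenModel({ id: 'gemini-3-pro' }))       // true
console.log(isGemini3ThinkingTokenModel({ id: 'gemini-3-pro-image' })) // false
console.log(isGemini3ThinkingTokenModel({ id: 'gemini-2.5-flash' }))   // false
```

Splitting the check this way is what the rename in `reasoning.ts` relies on: `isGemini3Model` still gates generic Gemini 3 handling, while thinking-level parameters are only emitted for the non-image subset.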

View File

@@ -95,6 +95,7 @@ export const SYSTEM_PROVIDERS_CONFIG: Record<SystemProviderId, SystemProvider> =
type: 'openai',
apiKey: '',
apiHost: 'https://api.siliconflow.cn',
anthropicApiHost: 'https://api.siliconflow.cn',
models: SYSTEM_MODELS.silicon,
isSystem: true,
enabled: false
@@ -168,6 +169,7 @@ export const SYSTEM_PROVIDERS_CONFIG: Record<SystemProviderId, SystemProvider> =
type: 'openai',
apiKey: '',
apiHost: 'https://www.dmxapi.cn',
anthropicApiHost: 'https://www.dmxapi.cn',
models: SYSTEM_MODELS.dmxapi,
isSystem: true,
enabled: false

View File

@@ -0,0 +1,26 @@
import store, { useAppSelector } from '@renderer/store'
import { setVolcengineProjectName, setVolcengineRegion } from '@renderer/store/llm'
import { useDispatch } from 'react-redux'
export function useVolcengineSettings() {
const settings = useAppSelector((state) => state.llm.settings.volcengine)
const dispatch = useDispatch()
return {
...settings,
setRegion: (region: string) => dispatch(setVolcengineRegion(region)),
setProjectName: (projectName: string) => dispatch(setVolcengineProjectName(projectName))
}
}
export function getVolcengineSettings() {
return store.getState().llm.settings.volcengine
}
export function getVolcengineRegion() {
return store.getState().llm.settings.volcengine.region
}
export function getVolcengineProjectName() {
return store.getState().llm.settings.volcengine.projectName
}
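The hooks file above exposes the same settings twice: a React hook for components and plain getters for non-React code paths. A store-free sketch of that dual-access pattern (the store shape below is a stand-in, not the app's Redux store):

```typescript
// Sketch of the dual-access pattern: plain getters read the store directly
// so services outside React can use them; components use the hook instead.
interface VolcengineSettings { region: string; projectName: string }

// Stand-in for store.getState() — the real state lives in Redux.
const state = {
  llm: { settings: { volcengine: { region: 'cn-beijing', projectName: 'default' } as VolcengineSettings } }
}
const store = { getState: () => state }

function getVolcengineRegion(): string {
  return store.getState().llm.settings.volcengine.region
}

function getVolcengineProjectName(): string {
  return store.getState().llm.settings.volcengine.projectName
}

console.log(getVolcengineRegion())      // "cn-beijing"
console.log(getVolcengineProjectName()) // "default"
```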

View File

@@ -1148,6 +1148,7 @@
"fullscreen": "Entered fullscreen mode. Press F11 to exit",
"go_to_settings": "Go to settings",
"i_know": "I know",
"ignore": "Ignore",
"inspect": "Inspect",
"invalid_value": "Invalid Value",
"knowledge_base": "Knowledge Base",
@@ -1504,6 +1505,7 @@
"notes_placeholder": "Enter additional information or context for this knowledge base...",
"provider_not_found": "Provider not found",
"quota": "{{name}} Left Quota: {{quota}}",
"quota_empty": "Today's {{name}} quota exhausted, please apply on the official website",
"quota_infinity": "{{name}} Quota: Unlimited",
"rename": "Rename",
"search": "Search knowledge base",
@@ -3771,6 +3773,9 @@
},
"view_webdav_settings": "View WebDAV settings"
},
"groq": {
"title": "Groq Settings"
},
"hardware_acceleration": {
"confirm": {
"content": "Disabling hardware acceleration requires restarting the app to take effect. Do you want to restart now?",
@@ -4357,6 +4362,10 @@
"stream_options": {
"help": "Does the provider support the stream_options parameter?",
"label": "Support stream_options"
},
"verbosity": {
"help": "Whether the provider supports the verbosity parameter",
"label": "Support verbosity"
}
},
"url": {
@@ -4501,6 +4510,23 @@
"private_key_placeholder": "Enter Service Account private key",
"title": "Service Account Configuration"
}
},
"volcengine": {
"access_key_id": "Access Key ID",
"access_key_id_help": "Your Volcengine Access Key ID",
"clear_credentials": "Clear Credentials",
"credentials_cleared": "Credentials cleared",
"credentials_required": "Please fill in Access Key ID and Secret Access Key",
"credentials_saved": "Credentials saved",
"description": "Volcengine is ByteDance's cloud service platform, providing Doubao and other large language model services. Please use IAM user's Access Key for authentication, do not use the root user credentials.",
"project_name": "Project Name",
"project_name_help": "Project name for endpoint filtering, default is 'default'",
"region": "Region",
"region_help": "Service region, e.g., cn-beijing",
"save_credentials": "Save Credentials",
"secret_access_key": "Secret Access Key",
"secret_access_key_help": "Your Volcengine Secret Access Key, please keep it secure",
"title": "Volcengine Configuration"
}
},
"proxy": {

View File

@@ -1148,6 +1148,7 @@
"fullscreen": "已进入全屏模式,按 F11 退出",
"go_to_settings": "前往设置",
"i_know": "我知道了",
"ignore": "忽略",
"inspect": "检查",
"invalid_value": "无效值",
"knowledge_base": "知识库",
@@ -1504,6 +1505,7 @@
"notes_placeholder": "输入此知识库的附加信息或上下文...",
"provider_not_found": "未找到服务商",
"quota": "{{name}} 剩余额度:{{quota}}",
"quota_empty": "今日{{name}}额度不足,请前往官网申请",
"quota_infinity": "{{name}} 剩余额度:无限制",
"rename": "重命名",
"search": "搜索知识库",
@@ -3771,6 +3773,9 @@
},
"view_webdav_settings": "查看 WebDAV 设置"
},
"groq": {
"title": "Groq 设置"
},
"hardware_acceleration": {
"confirm": {
"content": "禁用硬件加速需要重启应用才能生效,是否现在重启?",
@@ -4357,6 +4362,10 @@
"stream_options": {
"help": "该提供商是否支持 stream_options 参数",
"label": "支持 stream_options"
},
"verbosity": {
"help": "该提供商是否支持 verbosity 参数",
"label": "支持 verbosity"
}
},
"url": {
@@ -4501,6 +4510,23 @@
"private_key_placeholder": "请输入 Service Account 私钥",
"title": "Service Account 配置"
}
},
"volcengine": {
"access_key_id": "Access Key ID",
"access_key_id_help": "您的火山引擎 Access Key ID",
"clear_credentials": "清除凭证",
"credentials_cleared": "凭证已清除",
"credentials_required": "请填写 Access Key ID 和 Secret Access Key",
"credentials_saved": "凭证已保存",
"description": "火山引擎是字节跳动旗下的云服务平台,提供豆包等大语言模型服务。请使用 IAM 子用户的 Access Key 进行身份验证,不要使用主账号的根用户密钥。",
"project_name": "项目名称",
"project_name_help": "用于筛选推理接入点的项目名称,默认为 'default'",
"region": "地域",
"region_help": "服务地域,例如 cn-beijing",
"save_credentials": "保存凭证",
"secret_access_key": "Secret Access Key",
"secret_access_key_help": "您的火山引擎 Secret Access Key请妥善保管",
"title": "火山引擎配置"
}
},
"proxy": {

View File

@@ -1148,6 +1148,7 @@
"fullscreen": "已進入全螢幕模式,按 F11 結束",
"go_to_settings": "前往設定",
"i_know": "我知道了",
"ignore": "忽略",
"inspect": "檢查",
"invalid_value": "無效值",
"knowledge_base": "知識庫",
@@ -1504,6 +1505,7 @@
"notes_placeholder": "輸入此知識庫的附加資訊或上下文...",
"provider_not_found": "未找到服務商",
"quota": "{{name}} 剩餘配額:{{quota}}",
"quota_empty": "今日{{name}}額度不足,請前往官網申請",
"quota_infinity": "{{name}} 配額:無限制",
"rename": "重新命名",
"search": "搜尋知識庫",
@@ -3771,6 +3773,9 @@
},
"view_webdav_settings": "檢視 WebDAV 設定"
},
"groq": {
"title": "Groq 設定"
},
"hardware_acceleration": {
"confirm": {
"content": "禁用硬件加速需要重新啟動應用程序才能生效。是否立即重新啟動?",
@@ -4357,6 +4362,10 @@
"stream_options": {
"help": "該提供商是否支援 stream_options 參數",
"label": "支援 stream_options"
},
"verbosity": {
"help": "提供者是否支援詳細程度參數",
"label": "支援詳細資訊"
}
},
"url": {
@@ -4501,6 +4510,23 @@
"private_key_placeholder": "輸入服務帳戶私密金鑰",
"title": "服務帳戶設定"
}
},
"volcengine": {
"access_key_id": "Access Key ID",
"access_key_id_help": "您的火山引擎 Access Key ID",
"clear_credentials": "清除憑證",
"credentials_cleared": "憑證已清除",
"credentials_required": "請填寫 Access Key ID 和 Secret Access Key",
"credentials_saved": "憑證已儲存",
"description": "火山引擎是字節跳動旗下的雲端服務平台,提供豆包等大型語言模型服務。請使用 IAM 子用戶的 Access Key 進行身份驗證,不要使用主帳號的根用戶密鑰。",
"project_name": "專案名稱",
"project_name_help": "用於篩選推論接入點的專案名稱,預設為 'default'",
"region": "地區",
"region_help": "服務地區,例如 cn-beijing",
"save_credentials": "儲存憑證",
"secret_access_key": "Secret Access Key",
"secret_access_key_help": "您的火山引擎 Secret Access Key請妥善保管",
"title": "火山引擎設定"
}
},
"proxy": {

View File

@@ -1148,6 +1148,7 @@
"fullscreen": "Vollbildmodus aktiviert, F11 zum Beenden",
"go_to_settings": "Zu Einstellungen",
"i_know": "Verstanden",
"ignore": "Ignorieren",
"inspect": "Prüfen",
"invalid_value": "Ungültiger Wert",
"knowledge_base": "Wissensdatenbank",
@@ -1504,6 +1505,7 @@
"notes_placeholder": "Zusätzliche Informationen oder Kontext für diese Wissensdatenbank eingeben...",
"provider_not_found": "Anbieter nicht gefunden",
"quota": "{{name}} verbleibendes Kontingent: {{quota}}",
"quota_empty": "Das heutige {{name}}-Kontingent ist erschöpft. Bitte beantragen Sie es auf der offiziellen Website",
"quota_infinity": "{{name}} verbleibendes Kontingent: unbegrenzt",
"rename": "Umbenennen",
"search": "Wissensdatenbank durchsuchen",
@@ -3771,6 +3773,9 @@
},
"view_webdav_settings": "WebDAV-Einstellungen anzeigen"
},
"groq": {
"title": "Groq Einstellungen"
},
"hardware_acceleration": {
"confirm": {
"content": "Deaktivierung der Hardwarebeschleunigung erfordert Neustart. Jetzt neu starten?",
@@ -4357,6 +4362,10 @@
"stream_options": {
"help": "Unterstützt stream_options",
"label": "Unterstützt stream_options"
},
"verbosity": {
"help": "Ob der Anbieter den Ausführlichkeitsparameter unterstützt",
"label": "Unterstützung der Ausführlichkeit"
}
},
"url": {
@@ -4501,6 +4510,23 @@
"private_key_placeholder": "Service Account-Privat-Schlüssel eingeben",
"title": "Service Account-Konfiguration"
}
},
"volcengine": {
"access_key_id": "[to be translated]:Access Key ID",
"access_key_id_help": "[to be translated]:Your Volcengine Access Key ID",
"clear_credentials": "[to be translated]:Clear Credentials",
"credentials_cleared": "[to be translated]:Credentials cleared",
"credentials_required": "[to be translated]:Please fill in Access Key ID and Secret Access Key",
"credentials_saved": "[to be translated]:Credentials saved",
"description": "[to be translated]:Volcengine is ByteDance's cloud service platform, providing Doubao and other large language model services. Use Access Key for authentication to fetch model list.",
"project_name": "[to be translated]:Project Name",
"project_name_help": "[to be translated]:Project name for endpoint filtering, default is 'default'",
"region": "[to be translated]:Region",
"region_help": "[to be translated]:Service region, e.g., cn-beijing",
"save_credentials": "[to be translated]:Save Credentials",
"secret_access_key": "[to be translated]:Secret Access Key",
"secret_access_key_help": "[to be translated]:Your Volcengine Secret Access Key, please keep it secure",
"title": "[to be translated]:Volcengine Configuration"
}
},
"proxy": {

View File

@@ -1148,6 +1148,7 @@
"fullscreen": "Εισήχθη σε πλήρη οθόνη, πατήστε F11 για έξοδο",
"go_to_settings": "Πηγαίνετε στις ρυθμίσεις",
"i_know": "Το έχω καταλάβει",
"ignore": "Αγνόησε",
"inspect": "Επιθεώρηση",
"invalid_value": "Μη έγκυρη τιμή",
"knowledge_base": "Βάση Γνώσεων",
@@ -1504,6 +1505,7 @@
"notes_placeholder": "Εισάγετε πρόσθετες πληροφορίες ή πληροφορίες προσδιορισμού για αυτή τη βάση γνώσεων...",
"provider_not_found": "Η παροχή υπηρεσιών μοντέλου βάσης γνώσεων χάθηκε, αυτή η βάση γνώσεων δεν θα υποστηρίζεται πλέον, παρακαλείστε να δημιουργήσετε ξανά μια βάση γνώσεων",
"quota": "Διαθέσιμο όριο για {{name}}: {{quota}}",
"quota_empty": "Το σημερινό όριο {{name}} εξαντλήθηκε, παρακαλούμε υποβάλετε αίτηση στην επίσημη ιστοσελίδα",
"quota_infinity": "Διαθέσιμο όριο για {{name}}: Απεριόριστο",
"rename": "Μετονομασία",
"search": "Αναζήτηση βάσης γνώσεων",
@@ -3771,6 +3773,9 @@
},
"view_webdav_settings": "Προβολή ρυθμίσεων WebDAV"
},
"groq": {
"title": "Ρυθμίσεις Groq"
},
"hardware_acceleration": {
"confirm": {
"content": "Η απενεργοποίηση της υλικοποιημένης επιτάχυνσης απαιτεί επανεκκίνηση της εφαρμογής για να τεθεί σε ισχύ. Θέλετε να επανεκκινήσετε τώρα;",
@@ -4357,6 +4362,10 @@
"stream_options": {
"help": "Υποστηρίζει ο πάροχος την παράμετρο stream_options;",
"label": "Υποστήριξη stream_options"
},
"verbosity": {
"help": "Αν ο πάροχος υποστηρίζει την παράμετρο αναλυτικότητας",
"label": "Υποστήριξη αναλυτικότητας"
}
},
"url": {
@@ -4501,6 +4510,23 @@
"private_key_placeholder": "Παρακαλώ εισάγετε το ιδιωτικό κλειδί του λογαριασμού υπηρεσίας",
"title": "Διαμόρφωση λογαριασμού υπηρεσίας"
}
},
"volcengine": {
"access_key_id": "[to be translated]:Access Key ID",
"access_key_id_help": "[to be translated]:Your Volcengine Access Key ID",
"clear_credentials": "[to be translated]:Clear Credentials",
"credentials_cleared": "[to be translated]:Credentials cleared",
"credentials_required": "[to be translated]:Please fill in Access Key ID and Secret Access Key",
"credentials_saved": "[to be translated]:Credentials saved",
"description": "[to be translated]:Volcengine is ByteDance's cloud service platform, providing Doubao and other large language model services. Use Access Key for authentication to fetch model list.",
"project_name": "[to be translated]:Project Name",
"project_name_help": "[to be translated]:Project name for endpoint filtering, default is 'default'",
"region": "[to be translated]:Region",
"region_help": "[to be translated]:Service region, e.g., cn-beijing",
"save_credentials": "[to be translated]:Save Credentials",
"secret_access_key": "[to be translated]:Secret Access Key",
"secret_access_key_help": "[to be translated]:Your Volcengine Secret Access Key, please keep it secure",
"title": "[to be translated]:Volcengine Configuration"
}
},
"proxy": {

View File

@@ -1148,6 +1148,7 @@
"fullscreen": "En modo pantalla completa, presione F11 para salir",
"go_to_settings": "Ir a la configuración",
"i_know": "Entendido",
"ignore": "Ignorar",
"inspect": "Inspeccionar",
"invalid_value": "Valor inválido",
"knowledge_base": "Base de conocimiento",
@@ -1504,6 +1505,7 @@
"notes_placeholder": "Ingrese información adicional o contexto para esta base de conocimientos...",
"provider_not_found": "El proveedor del modelo de la base de conocimientos ha sido perdido, esta base de conocimientos ya no es compatible, por favor cree una nueva base de conocimientos",
"quota": "Cupo restante de {{name}}: {{quota}}",
"quota_empty": "La cuota de {{name}} de hoy está agotada, por favor solicítela en el sitio web oficial",
"quota_infinity": "Cupo restante de {{name}}: ilimitado",
"rename": "Renombrar",
"search": "Buscar en la base de conocimientos",
@@ -3771,6 +3773,9 @@
},
"view_webdav_settings": "Ver configuración WebDAV"
},
"groq": {
"title": "Configuración de Groq"
},
"hardware_acceleration": {
"confirm": {
"content": "La desactivación de la aceleración por hardware requiere reiniciar la aplicación para que surta efecto, ¿desea reiniciar ahora?",
@@ -4357,6 +4362,10 @@
"stream_options": {
"help": "¿Admite el proveedor el parámetro stream_options?",
"label": "Admite stream_options"
},
"verbosity": {
"help": "Si el proveedor admite el parámetro de verbosidad",
"label": "Soporte de verbosidad"
}
},
"url": {
@@ -4501,6 +4510,23 @@
"private_key_placeholder": "Ingrese la clave privada de Service Account",
"title": "Configuración de Service Account"
}
},
"volcengine": {
"access_key_id": "[to be translated]:Access Key ID",
"access_key_id_help": "[to be translated]:Your Volcengine Access Key ID",
"clear_credentials": "[to be translated]:Clear Credentials",
"credentials_cleared": "[to be translated]:Credentials cleared",
"credentials_required": "[to be translated]:Please fill in Access Key ID and Secret Access Key",
"credentials_saved": "[to be translated]:Credentials saved",
"description": "[to be translated]:Volcengine is ByteDance's cloud service platform, providing Doubao and other large language model services. Use Access Key for authentication to fetch model list.",
"project_name": "[to be translated]:Project Name",
"project_name_help": "[to be translated]:Project name for endpoint filtering, default is 'default'",
"region": "[to be translated]:Region",
"region_help": "[to be translated]:Service region, e.g., cn-beijing",
"save_credentials": "[to be translated]:Save Credentials",
"secret_access_key": "[to be translated]:Secret Access Key",
"secret_access_key_help": "[to be translated]:Your Volcengine Secret Access Key, please keep it secure",
"title": "[to be translated]:Volcengine Configuration"
}
},
"proxy": {

View File

@@ -1148,6 +1148,7 @@
"fullscreen": "Mode plein écran, appuyez sur F11 pour quitter",
"go_to_settings": "Aller aux paramètres",
"i_know": "J'ai compris",
"ignore": "Ignorer",
"inspect": "Vérifier",
"invalid_value": "valeur invalide",
"knowledge_base": "Base de connaissances",
@@ -1504,6 +1505,7 @@
"notes_placeholder": "Entrez des informations supplémentaires ou un contexte pour cette base de connaissances...",
"provider_not_found": "Le fournisseur du modèle de la base de connaissances a été perdu, cette base de connaissances ne sera plus supportée, veuillez en créer une nouvelle",
"quota": "Quota restant pour {{name}} : {{quota}}",
"quota_empty": "Le quota {{name}} d'aujourd'hui est épuisé, veuillez en faire la demande sur le site officiel",
"quota_infinity": "Quota restant pour {{name}} : illimité",
"rename": "Renommer",
"search": "Rechercher dans la base de connaissances",
@@ -3771,6 +3773,9 @@
},
"view_webdav_settings": "Voir les paramètres WebDAV"
},
"groq": {
"title": "Paramètres Groq"
},
"hardware_acceleration": {
"confirm": {
"content": "La désactivation de l'accélération matérielle nécessite un redémarrage de l'application pour prendre effet. Voulez-vous redémarrer maintenant ?",
@@ -4357,6 +4362,10 @@
"stream_options": {
"help": "Le fournisseur prend-il en charge le paramètre stream_options ?",
"label": "Prise en charge des options de flux"
},
"verbosity": {
"help": "Si le fournisseur prend en charge le paramètre de verbosité",
"label": "Prend en charge la verbosité"
}
},
"url": {
@@ -4501,6 +4510,23 @@
"private_key_placeholder": "Veuillez saisir la clé privée du compte de service",
"title": "Configuration du compte de service"
}
},
"volcengine": {
"access_key_id": "[to be translated]:Access Key ID",
"access_key_id_help": "[to be translated]:Your Volcengine Access Key ID",
"clear_credentials": "[to be translated]:Clear Credentials",
"credentials_cleared": "[to be translated]:Credentials cleared",
"credentials_required": "[to be translated]:Please fill in Access Key ID and Secret Access Key",
"credentials_saved": "[to be translated]:Credentials saved",
"description": "[to be translated]:Volcengine is ByteDance's cloud service platform, providing Doubao and other large language model services. Use Access Key for authentication to fetch model list.",
"project_name": "[to be translated]:Project Name",
"project_name_help": "[to be translated]:Project name for endpoint filtering, default is 'default'",
"region": "[to be translated]:Region",
"region_help": "[to be translated]:Service region, e.g., cn-beijing",
"save_credentials": "[to be translated]:Save Credentials",
"secret_access_key": "[to be translated]:Secret Access Key",
"secret_access_key_help": "[to be translated]:Your Volcengine Secret Access Key, please keep it secure",
"title": "[to be translated]:Volcengine Configuration"
}
},
"proxy": {

View File

@@ -893,7 +893,7 @@
"title": "コード実行"
},
"code_fancy_block": {
"label": "<translate_input>\n装飾的なコードブロック\n</translate_input>",
"label": "装飾的なコードブロック",
"tip": "より見栄えの良いコードブロックスタイルを使用する、例えばHTMLカード"
},
"code_image_tools": {
@@ -1148,6 +1148,7 @@
"fullscreen": "全画面モードに入りました。F11キーで終了します",
"go_to_settings": "設定に移動",
"i_know": "わかりました",
"ignore": "無視",
"inspect": "検査",
"invalid_value": "無効な値",
"knowledge_base": "ナレッジベース",
@@ -1296,7 +1297,7 @@
"statusCode": "ステータスコード",
"statusText": "状態テキスト",
"text": "テキスト",
"toolInput": "<translate_input>\nツール入力\n</translate_input>",
"toolInput": "ツール入力",
"toolName": "ツール名",
"unknown": "不明なエラー",
"usage": "使用量",
@@ -1504,6 +1505,7 @@
"notes_placeholder": "このナレッジベースの追加情報やコンテキストを入力...",
"provider_not_found": "プロバイダーが見つかりません",
"quota": "{{name}} 残りクォータ: {{quota}}",
"quota_empty": "本日の{{name}}クォータが不足しています。公式サイトで申請してください",
"quota_infinity": "{{name}} クォータ: 無制限",
"rename": "名前を変更",
"search": "ナレッジベースを検索",
@@ -3771,6 +3773,9 @@
},
"view_webdav_settings": "WebDAV設定を表示"
},
"groq": {
"title": "Groq設定"
},
"hardware_acceleration": {
"confirm": {
"content": "ハードウェアアクセラレーションを無効にするには、アプリを再起動する必要があります。再起動しますか?",
@@ -4357,6 +4362,10 @@
"stream_options": {
"help": "このプロバイダーは stream_options パラメータをサポートしていますか",
"label": "stream_options をサポート"
},
"verbosity": {
"help": "プロバイダーが冗長度パラメータをサポートしているかどうか",
"label": "冗長性のサポート"
}
},
"url": {
@@ -4501,6 +4510,23 @@
"private_key_placeholder": "サービスアカウントの秘密鍵を入力してください",
"title": "サービスアカウント設定"
}
},
"volcengine": {
"access_key_id": "[to be translated]:Access Key ID",
"access_key_id_help": "[to be translated]:Your Volcengine Access Key ID",
"clear_credentials": "[to be translated]:Clear Credentials",
"credentials_cleared": "[to be translated]:Credentials cleared",
"credentials_required": "[to be translated]:Please fill in Access Key ID and Secret Access Key",
"credentials_saved": "[to be translated]:Credentials saved",
"description": "[to be translated]:Volcengine is ByteDance's cloud service platform, providing Doubao and other large language model services. Use Access Key for authentication to fetch model list.",
"project_name": "[to be translated]:Project Name",
"project_name_help": "[to be translated]:Project name for endpoint filtering, default is 'default'",
"region": "[to be translated]:Region",
"region_help": "[to be translated]:Service region, e.g., cn-beijing",
"save_credentials": "[to be translated]:Save Credentials",
"secret_access_key": "[to be translated]:Secret Access Key",
"secret_access_key_help": "[to be translated]:Your Volcengine Secret Access Key, please keep it secure",
"title": "[to be translated]:Volcengine Configuration"
}
},
"proxy": {

View File

@@ -1148,6 +1148,7 @@
"fullscreen": "Entrou no modo de tela cheia, pressione F11 para sair",
"go_to_settings": "Ir para configurações",
"i_know": "Entendi",
"ignore": "Ignorar",
"inspect": "Verificar",
"invalid_value": "Valor inválido",
"knowledge_base": "Base de Conhecimento",
@@ -1504,6 +1505,7 @@
"notes_placeholder": "Digite informações adicionais ou contexto para este repositório de conhecimento...",
"provider_not_found": "O provedor do modelo do repositório de conhecimento foi perdido, este repositório de conhecimento não será mais suportado, por favor, crie um novo repositório de conhecimento",
"quota": "Cota restante de {{name}}: {{quota}}",
"quota_empty": "A cota de {{name}} de hoje está esgotada, por favor solicite no site oficial",
"quota_infinity": "Cota restante de {{name}}: ilimitada",
"rename": "Renomear",
"search": "Pesquisar repositório de conhecimento",
@@ -3771,6 +3773,9 @@
},
"view_webdav_settings": "Ver configurações WebDAV"
},
"groq": {
"title": "Configurações do Groq"
},
"hardware_acceleration": {
"confirm": {
"content": "A desativação da aceleração de hardware requer a reinicialização do aplicativo para entrar em vigor. Deseja reiniciar agora?",
@@ -4357,6 +4362,10 @@
"stream_options": {
"help": "O fornecedor suporta o parâmetro stream_options?",
"label": "suporta stream_options"
},
"verbosity": {
"help": "Se o provedor suporta o parâmetro de verbosidade",
"label": "Suportar verbosidade"
}
},
"url": {
@@ -4501,6 +4510,23 @@
"private_key_placeholder": "Por favor, insira a chave privada da Conta de Serviço",
"title": "Configuração da Conta de Serviço"
}
},
"volcengine": {
"access_key_id": "[to be translated]:Access Key ID",
"access_key_id_help": "[to be translated]:Your Volcengine Access Key ID",
"clear_credentials": "[to be translated]:Clear Credentials",
"credentials_cleared": "[to be translated]:Credentials cleared",
"credentials_required": "[to be translated]:Please fill in Access Key ID and Secret Access Key",
"credentials_saved": "[to be translated]:Credentials saved",
"description": "[to be translated]:Volcengine is ByteDance's cloud service platform, providing Doubao and other large language model services. Use Access Key for authentication to fetch model list.",
"project_name": "[to be translated]:Project Name",
"project_name_help": "[to be translated]:Project name for endpoint filtering, default is 'default'",
"region": "[to be translated]:Region",
"region_help": "[to be translated]:Service region, e.g., cn-beijing",
"save_credentials": "[to be translated]:Save Credentials",
"secret_access_key": "[to be translated]:Secret Access Key",
"secret_access_key_help": "[to be translated]:Your Volcengine Secret Access Key, please keep it secure",
"title": "[to be translated]:Volcengine Configuration"
}
},
"proxy": {

View File

@@ -1148,6 +1148,7 @@
"fullscreen": "Вы вошли в полноэкранный режим. Нажмите F11 для выхода",
"go_to_settings": "Перейти в настройки",
"i_know": "Я понял",
"ignore": "Игнорировать",
"inspect": "Осмотреть",
"invalid_value": "недопустимое значение",
"knowledge_base": "База знаний",
@@ -1504,6 +1505,7 @@
"notes_placeholder": "Введите дополнительную информацию или контекст для этой базы знаний...",
"provider_not_found": "Поставщик не найден",
"quota": "{{name}} Остаток квоты: {{quota}}",
"quota_empty": "Сегодняшняя квота {{name}} исчерпана, пожалуйста, подайте заявку на официальном сайте",
"quota_infinity": "{{name}} Квота: Не ограничена",
"rename": "Переименовать",
"search": "Поиск в базе знаний",
@@ -1513,7 +1515,7 @@
"preprocessing_tooltip": "Предварительная обработка документов",
"title": "Настройки базы знаний"
},
"sitemap_added": "添加成功",
"sitemap_added": "Карта сайта добавлена",
"sitemap_placeholder": "Введите URL карты сайта",
"sitemaps": "Сайты",
"source": "Источник",
@@ -2173,12 +2175,12 @@
"font_size": "Размер шрифта",
"font_size_description": "Отрегулируйте размер шрифта для лучшего чтения (10-30 пикселей)",
"font_size_large": "Большой",
"font_size_medium": "",
"font_size_small": "<translate_input>\nмаленький\n</translate_input>",
"font_size_medium": "Средний",
"font_size_small": "маленький",
"font_title": "Настройки шрифта",
"serif_font": "Serif Font",
"show_table_of_contents": "Показать оглавление",
"show_table_of_contents_description": "显示目录大纲侧边栏,方便文档内导航",
"show_table_of_contents_description": "Показать боковую панель оглавления для удобной навигации по документу",
"title": "Отображение"
},
"editor": {
@@ -3771,6 +3773,9 @@
},
"view_webdav_settings": "Просмотр настроек WebDAV"
},
"groq": {
"title": "Настройки Groq"
},
"hardware_acceleration": {
"confirm": {
"content": "Отключение аппаратного ускорения требует перезапуска приложения для вступления в силу. Перезапустить приложение?",
@@ -4357,6 +4362,10 @@
"stream_options": {
"help": "Поддерживает ли этот провайдер параметр stream_options",
"label": "Поддержка stream_options"
},
"verbosity": {
"help": "Поддерживает ли провайдер параметр verbosity",
"label": "Поддержка многословности"
}
},
"url": {
@@ -4501,6 +4510,23 @@
"private_key_placeholder": "Введите приватный ключ Service Account",
"title": "Конфигурация Service Account"
}
},
"volcengine": {
"access_key_id": "[to be translated]:Access Key ID",
"access_key_id_help": "[to be translated]:Your Volcengine Access Key ID",
"clear_credentials": "[to be translated]:Clear Credentials",
"credentials_cleared": "[to be translated]:Credentials cleared",
"credentials_required": "[to be translated]:Please fill in Access Key ID and Secret Access Key",
"credentials_saved": "[to be translated]:Credentials saved",
"description": "[to be translated]:Volcengine is ByteDance's cloud service platform, providing Doubao and other large language model services. Use Access Key for authentication to fetch model list.",
"project_name": "[to be translated]:Project Name",
"project_name_help": "[to be translated]:Project name for endpoint filtering, default is 'default'",
"region": "[to be translated]:Region",
"region_help": "[to be translated]:Service region, e.g., cn-beijing",
"save_credentials": "[to be translated]:Save Credentials",
"secret_access_key": "[to be translated]:Secret Access Key",
"secret_access_key_help": "[to be translated]:Your Volcengine Secret Access Key, please keep it secure",
"title": "[to be translated]:Volcengine Configuration"
}
},
"proxy": {
@@ -4767,7 +4793,7 @@
}
},
"prompt": "Следуйте системному запросу",
"title": "翻译设置"
"title": "Настройки перевода"
},
"tray": {
"onclose": "Свернуть в трей при закрытии",

View File

@@ -17,6 +17,7 @@ import type { EndpointType, Model } from '@renderer/types'
import { getClaudeSupportedProviders } from '@renderer/utils/provider'
import type { TerminalConfig } from '@shared/config/constant'
import { codeTools, terminalApps } from '@shared/config/constant'
import { isSiliconAnthropicCompatibleModel } from '@shared/config/providers'
import { Alert, Avatar, Button, Checkbox, Input, Popover, Select, Space, Tooltip } from 'antd'
import { ArrowUpRight, Download, FolderOpen, HelpCircle, Terminal, X } from 'lucide-react'
import type { FC } from 'react'
@@ -81,6 +82,10 @@ const CodeToolsPage: FC = () => {
if (m.supported_endpoint_types) {
return m.supported_endpoint_types.includes('anthropic')
}
// Special handling for silicon provider: only specific models support Anthropic API
if (m.provider === 'silicon') {
return isSiliconAnthropicCompatibleModel(m.id)
}
return m.id.includes('claude') || CLAUDE_OFFICIAL_SUPPORTED_PROVIDERS.includes(m.provider)
}

View File

@@ -1,4 +1,4 @@
import type { EndpointType, Model, Provider } from '@renderer/types'
import { type EndpointType, type Model, type Provider, SystemProviderIds } from '@renderer/types'
import { codeTools } from '@shared/config/constant'
export interface LaunchValidationResult {
@@ -25,7 +25,17 @@ export const CLI_TOOLS = [
]
export const GEMINI_SUPPORTED_PROVIDERS = ['aihubmix', 'dmxapi', 'new-api', 'cherryin']
export const CLAUDE_OFFICIAL_SUPPORTED_PROVIDERS = ['deepseek', 'moonshot', 'zhipu', 'dashscope', 'modelscope']
export const CLAUDE_OFFICIAL_SUPPORTED_PROVIDERS = [
'deepseek',
'moonshot',
'zhipu',
'dashscope',
'modelscope',
'minimax',
'longcat',
SystemProviderIds.qiniu,
SystemProviderIds.silicon
]
export const CLAUDE_SUPPORTED_PROVIDERS = [
'aihubmix',
'dmxapi',
@@ -79,6 +89,11 @@ export const getCodeToolsApiBaseUrl = (model: Model, type: EndpointType) => {
anthropic: {
api_base_url: 'https://api-inference.modelscope.cn'
}
},
minimax: {
anthropic: {
api_base_url: 'https://api.minimaxi.com/anthropic'
}
}
}
@@ -125,7 +140,8 @@ export const generateToolEnvironment = ({
switch (tool) {
case codeTools.claudeCode:
env.ANTHROPIC_BASE_URL = getCodeToolsApiBaseUrl(model, 'anthropic') || modelProvider.apiHost
env.ANTHROPIC_BASE_URL =
getCodeToolsApiBaseUrl(model, 'anthropic') || modelProvider.anthropicApiHost || modelProvider.apiHost
env.ANTHROPIC_MODEL = model.id
if (modelProvider.type === 'anthropic') {
env.ANTHROPIC_API_KEY = apiKey
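The hunk above changes how `ANTHROPIC_BASE_URL` is resolved: a per-provider Anthropic override comes first, then the new `anthropicApiHost` field, then the generic `apiHost`. A minimal standalone sketch of that fallback chain (type and function names are hypothetical; only the field names and override URLs come from the diff):

```typescript
interface ProviderLike {
  id: string
  apiHost: string
  anthropicApiHost?: string
}

// Per-provider Anthropic endpoints, mirroring the getCodeToolsApiBaseUrl entries in the diff
const ANTHROPIC_OVERRIDES: Record<string, string> = {
  modelscope: 'https://api-inference.modelscope.cn',
  minimax: 'https://api.minimaxi.com/anthropic'
}

// Override → provider.anthropicApiHost → provider.apiHost, in that order
function resolveAnthropicBaseUrl(provider: ProviderLike): string {
  return ANTHROPIC_OVERRIDES[provider.id] ?? provider.anthropicApiHost ?? provider.apiHost
}
```

Before this change, a provider with only `anthropicApiHost` configured would have fallen through to `apiHost`; the extra middle step is what the hunk adds.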

View File

@@ -4,6 +4,7 @@ import { BingLogo, BochaLogo, ExaLogo, SearXNGLogo, TavilyLogo, ZhipuLogo } from
import type { QuickPanelListItem } from '@renderer/components/QuickPanel'
import { QuickPanelReservedSymbol } from '@renderer/components/QuickPanel'
import {
isFunctionCallingModel,
isGeminiModel,
isGPT5SeriesReasoningModel,
isOpenAIWebSearchModel,
@@ -18,6 +19,7 @@ import WebSearchService from '@renderer/services/WebSearchService'
import type { WebSearchProvider, WebSearchProviderId } from '@renderer/types'
import { hasObjectKey } from '@renderer/utils'
import { isToolUseModeFunction } from '@renderer/utils/assistant'
import { isPromptToolUse } from '@renderer/utils/mcp-tools'
import { isGeminiWebSearchProvider } from '@renderer/utils/provider'
import { Globe } from 'lucide-react'
import { useCallback, useEffect, useMemo } from 'react'
@@ -126,20 +128,25 @@ export const useWebSearchPanelController = (assistantId: string, quickPanelContr
const providerItems = useMemo<QuickPanelListItem[]>(() => {
const isWebSearchModelEnabled = assistant.model && isWebSearchModel(assistant.model)
const items: QuickPanelListItem[] = providers
.map((p) => ({
label: p.name,
description: WebSearchService.isWebSearchEnabled(p.id)
? hasObjectKey(p, 'apiKey')
? t('settings.tool.websearch.apikey')
: t('settings.tool.websearch.free')
: t('chat.input.web_search.enable_content'),
icon: <WebSearchProviderIcon size={13} pid={p.id} />,
isSelected: p.id === assistant?.webSearchProviderId,
disabled: !WebSearchService.isWebSearchEnabled(p.id),
action: () => updateQuickPanelItem(p.id)
}))
.filter((item) => !item.disabled)
const items: QuickPanelListItem[] = []
if (isFunctionCallingModel(assistant.model) || isPromptToolUse(assistant)) {
items.push(
...providers
.map((p) => ({
label: p.name,
description: WebSearchService.isWebSearchEnabled(p.id)
? hasObjectKey(p, 'apiKey')
? t('settings.tool.websearch.apikey')
: t('settings.tool.websearch.free')
: t('chat.input.web_search.enable_content'),
icon: <WebSearchProviderIcon size={13} pid={p.id} />,
isSelected: p.id === assistant?.webSearchProviderId,
disabled: !WebSearchService.isWebSearchEnabled(p.id),
action: () => updateQuickPanelItem(p.id)
}))
.filter((item) => !item.disabled)
)
}
if (isWebSearchModelEnabled) {
items.unshift({
@@ -155,15 +162,7 @@ export const useWebSearchPanelController = (assistantId: string, quickPanelContr
}
return items
}, [
assistant.enableWebSearch,
assistant.model,
assistant?.webSearchProviderId,
providers,
t,
updateQuickPanelItem,
updateToModelBuiltinWebSearch
])
}, [assistant, providers, t, updateQuickPanelItem, updateToModelBuiltinWebSearch])
const openQuickPanel = useCallback(() => {
quickPanelController.open({

View File

@@ -1,4 +1,4 @@
import { isMandatoryWebSearchModel, isWebSearchModel } from '@renderer/config/models'
import { isMandatoryWebSearchModel } from '@renderer/config/models'
import { defineTool, registerTool, TopicType } from '@renderer/pages/home/Inputbar/types'
import WebSearchButton from './components/WebSearchButton'
@@ -15,7 +15,7 @@ const webSearchTool = defineTool({
label: (t) => t('chat.input.web_search.label'),
visibleInScopes: [TopicType.Chat],
condition: ({ model }) => isWebSearchModel(model) && !isMandatoryWebSearchModel(model),
condition: ({ model }) => !isMandatoryWebSearchModel(model),
render: function WebSearchToolRender(context) {
const { assistant, quickPanelController } = context

View File

@@ -53,9 +53,10 @@ import {
setThoughtAutoCollapse
} from '@renderer/store/settings'
import type { Assistant, AssistantSettings, CodeStyleVarious, MathEngine } from '@renderer/types'
import { ThemeMode } from '@renderer/types'
import { isGroqSystemProvider, ThemeMode } from '@renderer/types'
import { modalConfirm } from '@renderer/utils'
import { getSendMessageShortcutLabel } from '@renderer/utils/input'
import { isSupportServiceTierProvider } from '@renderer/utils/provider'
import { Button, Col, InputNumber, Row, Slider, Switch } from 'antd'
import { Settings2 } from 'lucide-react'
import type { FC } from 'react'
@@ -63,6 +64,7 @@ import { useCallback, useEffect, useMemo, useState } from 'react'
import { useTranslation } from 'react-i18next'
import styled from 'styled-components'
import GroqSettingsGroup from './components/GroqSettingsGroup'
import OpenAISettingsGroup from './components/OpenAISettingsGroup'
interface Props {
@@ -181,7 +183,7 @@ const SettingsTab: FC<Props> = (props) => {
const model = assistant.model || getDefaultModel()
const isOpenAI = isOpenAIModel(model)
const showOpenAiSettings = isOpenAIModel(model) || isSupportServiceTierProvider(provider)
return (
<Container className="settings-tab">
@@ -332,7 +334,7 @@ const SettingsTab: FC<Props> = (props) => {
</SettingGroup>
</CollapsibleSettingGroup>
)}
{isOpenAI && (
{showOpenAiSettings && (
<OpenAISettingsGroup
model={model}
providerId={provider.id}
@@ -340,6 +342,9 @@ const SettingsTab: FC<Props> = (props) => {
SettingRowTitleSmall={SettingRowTitleSmall}
/>
)}
{isGroqSystemProvider(provider) && (
<GroqSettingsGroup SettingGroup={SettingGroup} SettingRowTitleSmall={SettingRowTitleSmall} />
)}
<CollapsibleSettingGroup title={t('settings.messages.title')} defaultExpanded={true}>
<SettingGroup>
<SettingRow>

View File

@@ -0,0 +1,79 @@
import Selector from '@renderer/components/Selector'
import { useProvider } from '@renderer/hooks/useProvider'
import { SettingDivider, SettingRow } from '@renderer/pages/settings'
import { CollapsibleSettingGroup } from '@renderer/pages/settings/SettingGroup'
import type { GroqServiceTier, ServiceTier } from '@renderer/types'
import { SystemProviderIds } from '@renderer/types'
import { toOptionValue, toRealValue } from '@renderer/utils/select'
import { Tooltip } from 'antd'
import { CircleHelp } from 'lucide-react'
import type { FC } from 'react'
import { useCallback, useMemo } from 'react'
import { useTranslation } from 'react-i18next'
type ServiceTierOptions = { value: NonNullable<GroqServiceTier> | 'undefined'; label: string }
interface Props {
SettingGroup: FC<{ children: React.ReactNode }>
SettingRowTitleSmall: FC<{ children: React.ReactNode }>
}
const GroqSettingsGroup: FC<Props> = ({ SettingGroup, SettingRowTitleSmall }) => {
const { t } = useTranslation()
const { provider, updateProvider } = useProvider(SystemProviderIds.groq)
const serviceTierMode = provider.serviceTier
const setServiceTierMode = useCallback(
(value: ServiceTier) => {
updateProvider({ serviceTier: value })
},
[updateProvider]
)
const serviceTierOptions = useMemo(() => {
const options = [
{
value: 'undefined',
label: t('common.ignore')
},
{
value: 'auto',
label: t('settings.openai.service_tier.auto')
},
{
value: 'on_demand',
label: t('settings.openai.service_tier.on_demand')
},
{
value: 'flex',
label: t('settings.openai.service_tier.flex')
}
] as const satisfies ServiceTierOptions[]
return options
}, [t])
return (
<CollapsibleSettingGroup title={t('settings.groq.title')} defaultExpanded={true}>
<SettingGroup>
<SettingRow>
<SettingRowTitleSmall>
{t('settings.openai.service_tier.title')}{' '}
<Tooltip title={t('settings.openai.service_tier.tip')}>
<CircleHelp size={14} style={{ marginLeft: 4 }} color="var(--color-text-2)" />
</Tooltip>
</SettingRowTitleSmall>
<Selector
value={toOptionValue(serviceTierMode)}
onChange={(value) => {
setServiceTierMode(toRealValue(value))
}}
options={serviceTierOptions}
/>
</SettingRow>
</SettingGroup>
<SettingDivider />
</CollapsibleSettingGroup>
)
}
export default GroqSettingsGroup
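The component above round-trips the service tier through `toOptionValue`/`toRealValue` because antd's Selector cannot carry `undefined` or `null` as option values, so they are encoded as the sentinel strings `'undefined'` and `'null'` (which is why the option types now use `NonNullable<...> | 'undefined'`). A hypothetical re-implementation of those helpers, assuming this sentinel scheme is all they do:

```typescript
type OptionValue<T extends string> = T | 'undefined' | 'null'

// Encode undefined/null as sentinel strings so a Selector can display them
function toOptionValue<T extends string>(value: T | null | undefined): OptionValue<T> {
  if (value === undefined) return 'undefined'
  if (value === null) return 'null'
  return value
}

// Decode the sentinels back to the real undefined/null values before storing
function toRealValue<T extends string>(value: OptionValue<T>): T | null | undefined {
  if (value === 'undefined') return undefined
  if (value === 'null') return null
  return value as T
}
```

Under this scheme `'undefined'` means "omit the parameter from the request" while `'null'` means "send an explicit null", which matches the `common.ignore` vs `common.off` labels used in the options.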

View File

@@ -11,10 +11,11 @@ import { CollapsibleSettingGroup } from '@renderer/pages/settings/SettingGroup'
import type { RootState } from '@renderer/store'
import { useAppDispatch } from '@renderer/store'
import { setOpenAISummaryText, setOpenAIVerbosity } from '@renderer/store/settings'
import type { GroqServiceTier, Model, OpenAIServiceTier, ServiceTier } from '@renderer/types'
import { GroqServiceTiers, OpenAIServiceTiers, SystemProviderIds } from '@renderer/types'
import type { Model, OpenAIServiceTier, ServiceTier } from '@renderer/types'
import { SystemProviderIds } from '@renderer/types'
import type { OpenAISummaryText, OpenAIVerbosity } from '@renderer/types/aiCoreTypes'
import { isSupportServiceTierProvider } from '@renderer/utils/provider'
import { isSupportServiceTierProvider, isSupportVerbosityProvider } from '@renderer/utils/provider'
import { toOptionValue, toRealValue } from '@renderer/utils/select'
import { Tooltip } from 'antd'
import { CircleHelp } from 'lucide-react'
import type { FC } from 'react'
@@ -23,19 +24,16 @@ import { useTranslation } from 'react-i18next'
import { useSelector } from 'react-redux'
type VerbosityOption = {
value: OpenAIVerbosity
value: NonNullable<OpenAIVerbosity> | 'undefined'
label: string
}
type SummaryTextOption = {
value: OpenAISummaryText
value: NonNullable<OpenAISummaryText> | 'undefined'
label: string
}
type OpenAIServiceTierOption = { value: OpenAIServiceTier; label: string }
type GroqServiceTierOption = { value: GroqServiceTier; label: string }
type ServiceTierOptions = OpenAIServiceTierOption[] | GroqServiceTierOption[]
type OpenAIServiceTierOption = { value: NonNullable<OpenAIServiceTier> | 'null' | 'undefined'; label: string }
interface Props {
model: Model
@@ -52,13 +50,14 @@ const OpenAISettingsGroup: FC<Props> = ({ model, providerId, SettingGroup, Setti
const serviceTierMode = provider.serviceTier
const dispatch = useAppDispatch()
const isOpenAIReasoning =
const showSummarySetting =
isSupportedReasoningEffortOpenAIModel(model) &&
!model.id.includes('o1-pro') &&
(provider.type === 'openai-response' || provider.id === 'aihubmix')
const isSupportVerbosity = isSupportVerbosityModel(model)
(provider.type === 'openai-response' || model.endpoint_type === 'openai-response' || provider.id === 'aihubmix')
const showVerbositySetting = isSupportVerbosityModel(model) && isSupportVerbosityProvider(provider)
const isSupportFlexServiceTier = isSupportFlexServiceTierModel(model)
const isSupportServiceTier = isSupportServiceTierProvider(provider)
const isSupportedFlexServiceTier = isSupportFlexServiceTierModel(model)
const showServiceTierSetting = isSupportServiceTier && providerId !== SystemProviderIds.groq
const setSummaryText = useCallback(
(value: OpenAISummaryText) => {
@@ -83,8 +82,8 @@ const OpenAISettingsGroup: FC<Props> = ({ model, providerId, SettingGroup, Setti
const summaryTextOptions = [
{
value: undefined,
label: t('common.default')
value: 'undefined',
label: t('common.ignore')
},
{
value: 'auto',
@@ -103,8 +102,8 @@ const OpenAISettingsGroup: FC<Props> = ({ model, providerId, SettingGroup, Setti
const verbosityOptions = useMemo(() => {
const allOptions = [
{
value: undefined,
label: t('common.default')
value: 'undefined',
label: t('common.ignore')
},
{
value: 'low',
@@ -119,73 +118,44 @@ const OpenAISettingsGroup: FC<Props> = ({ model, providerId, SettingGroup, Setti
label: t('settings.openai.verbosity.high')
}
] as const satisfies VerbosityOption[]
const supportedVerbosityLevels = getModelSupportedVerbosity(model)
const supportedVerbosityLevels = getModelSupportedVerbosity(model).map((v) => toOptionValue(v))
return allOptions.filter((option) => supportedVerbosityLevels.includes(option.value))
}, [model, t])
const serviceTierOptions = useMemo(() => {
let options: ServiceTierOptions
if (provider.id === SystemProviderIds.groq) {
options = [
{
value: null,
label: t('common.off')
},
{
value: undefined,
label: t('common.default')
},
{
value: 'auto',
label: t('settings.openai.service_tier.auto')
},
{
value: 'on_demand',
label: t('settings.openai.service_tier.on_demand')
},
{
value: 'flex',
label: t('settings.openai.service_tier.flex')
}
] as const satisfies GroqServiceTierOption[]
} else {
// Otherwise, default to the same options as OpenAI
options = [
{
value: 'auto',
label: t('settings.openai.service_tier.auto')
},
{
value: 'default',
label: t('settings.openai.service_tier.default')
},
{
value: 'flex',
label: t('settings.openai.service_tier.flex')
},
{
value: 'priority',
label: t('settings.openai.service_tier.priority')
}
] as const satisfies OpenAIServiceTierOption[]
}
const options = [
{
value: 'undefined',
label: t('common.ignore')
},
{
value: 'null',
label: t('common.off')
},
{
value: 'auto',
label: t('settings.openai.service_tier.auto')
},
{
value: 'default',
label: t('settings.openai.service_tier.default')
},
{
value: 'flex',
label: t('settings.openai.service_tier.flex')
},
{
value: 'priority',
label: t('settings.openai.service_tier.priority')
}
] as const satisfies OpenAIServiceTierOption[]
return options.filter((option) => {
if (option.value === 'flex') {
return isSupportedFlexServiceTier
return isSupportFlexServiceTier
}
return true
})
}, [isSupportedFlexServiceTier, provider.id, t])
useEffect(() => {
if (serviceTierMode && !serviceTierOptions.some((option) => option.value === serviceTierMode)) {
if (provider.id === SystemProviderIds.groq) {
setServiceTierMode(GroqServiceTiers.on_demand)
} else {
setServiceTierMode(OpenAIServiceTiers.auto)
}
}
}, [provider.id, serviceTierMode, serviceTierOptions, setServiceTierMode])
}, [isSupportFlexServiceTier, t])
useEffect(() => {
if (verbosity && !verbosityOptions.some((option) => option.value === verbosity)) {
@@ -196,14 +166,14 @@ const OpenAISettingsGroup: FC<Props> = ({ model, providerId, SettingGroup, Setti
}
}, [model, verbosity, verbosityOptions, setVerbosity])
if (!isOpenAIReasoning && !isSupportServiceTier && !isSupportVerbosity) {
if (!showSummarySetting && !showServiceTierSetting && !showVerbositySetting) {
return null
}
return (
<CollapsibleSettingGroup title={t('settings.openai.title')} defaultExpanded={true}>
<SettingGroup>
{isSupportServiceTier && (
{showServiceTierSetting && (
<>
<SettingRow>
<SettingRowTitleSmall>
@@ -213,18 +183,17 @@ const OpenAISettingsGroup: FC<Props> = ({ model, providerId, SettingGroup, Setti
</Tooltip>
</SettingRowTitleSmall>
<Selector
value={serviceTierMode}
value={toOptionValue(serviceTierMode)}
onChange={(value) => {
setServiceTierMode(value as OpenAIServiceTier)
setServiceTierMode(toRealValue(value))
}}
options={serviceTierOptions}
placeholder={t('settings.openai.service_tier.auto')}
/>
</SettingRow>
{(isOpenAIReasoning || isSupportVerbosity) && <SettingDivider />}
{(showSummarySetting || showVerbositySetting) && <SettingDivider />}
</>
)}
{isOpenAIReasoning && (
{showSummarySetting && (
<>
<SettingRow>
<SettingRowTitleSmall>
@@ -241,10 +210,10 @@ const OpenAISettingsGroup: FC<Props> = ({ model, providerId, SettingGroup, Setti
options={summaryTextOptions}
/>
</SettingRow>
{isSupportVerbosity && <SettingDivider />}
{showVerbositySetting && <SettingDivider />}
</>
)}
{isSupportVerbosity && (
{showVerbositySetting && (
<SettingRow>
<SettingRowTitleSmall>
{t('settings.openai.verbosity.title')}{' '}

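The diff above swaps raw `undefined`/`null` Selector values for the string sentinels `'undefined'`/`'null'`, converting at the boundary with `toOptionValue`/`toRealValue`. A minimal sketch of how such a helper pair could work — the real implementations live elsewhere in the repo, so this shape is an assumption:

```typescript
// Assumed sketch: Selector option values must be plain strings, so
// undefined/null are encoded as sentinels on the way in and decoded
// back to the real values on the way out.
function toOptionValue(value: string | undefined | null): string {
  if (value === undefined) return 'undefined'
  if (value === null) return 'null'
  return value
}

function toRealValue(value: string): string | undefined | null {
  if (value === 'undefined') return undefined
  if (value === 'null') return null
  return value
}

console.log(toOptionValue(undefined)) // 'undefined'
console.log(toRealValue('null'))      // null
```

The round trip is lossless as long as no real tier is literally named `'undefined'` or `'null'`, which holds for the option values in this diff.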

@@ -10,6 +10,8 @@ import { useTranslation } from 'react-i18next'
const logger = loggerService.withContext('QuotaTag')
const QUOTA_UNLIMITED = -9999
const QuotaTag: FC<{ base: KnowledgeBase; providerId: PreprocessProviderId; quota?: number }> = ({
base,
providerId,
@@ -24,8 +26,8 @@ const QuotaTag: FC<{ base: KnowledgeBase; providerId: PreprocessProviderId; quot
if (provider.id !== 'mineru') return
// Quota is unlimited when the user supplies their own key
if (provider.apiKey) {
setQuota(-9999)
updateProvider({ quota: -9999 })
setQuota(QUOTA_UNLIMITED)
updateProvider({ quota: QUOTA_UNLIMITED })
return
}
if (quota === undefined) {
@@ -43,28 +45,37 @@ const QuotaTag: FC<{ base: KnowledgeBase; providerId: PreprocessProviderId; quot
}
}
if (_quota !== undefined) {
setQuota(_quota)
updateProvider({ quota: _quota })
return
}
checkQuota()
}, [_quota, base, provider.id, provider.apiKey, provider, quota, updateProvider])
return (
<>
{quota && (
const getQuotaDisplay = () => {
if (quota === undefined) return null
if (quota === QUOTA_UNLIMITED) {
return (
<Tag color="orange" style={{ borderRadius: 20, margin: 0 }}>
{quota === -9999
? t('knowledge.quota_infinity', {
name: provider.name
})
: t('knowledge.quota', {
name: provider.name,
quota: quota
})}
{t('knowledge.quota_infinity', { name: provider.name })}
</Tag>
)}
</>
)
)
}
if (quota === 0) {
return (
<Tag color="red" style={{ borderRadius: 20, margin: 0 }}>
{t('knowledge.quota_empty', { name: provider.name })}
</Tag>
)
}
return (
<Tag color="orange" style={{ borderRadius: 20, margin: 0 }}>
{t('knowledge.quota', { name: provider.name, quota: quota })}
</Tag>
)
}
return <>{getQuotaDisplay()}</>
}
export default QuotaTag
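The refactored `getQuotaDisplay` above is a three-way branch on the quota value: render nothing while loading, an "unlimited" tag for the sentinel, an "exhausted" tag at zero, otherwise the remaining count. As a pure-function sketch — the label strings here are illustrative stand-ins for the i18n keys:

```typescript
// Sentinel mirroring the QUOTA_UNLIMITED constant introduced in the diff.
const QUOTA_UNLIMITED = -9999

// Assumed sketch of the display branching, decoupled from the Tag markup.
function quotaLabel(quota: number | undefined, name: string): string | null {
  if (quota === undefined) return null // still loading: render nothing
  if (quota === QUOTA_UNLIMITED) return `${name}: unlimited quota`
  if (quota === 0) return `${name}: quota exhausted`
  return `${name}: ${quota} remaining`
}

console.log(quotaLabel(QUOTA_UNLIMITED, 'mineru'))
```

Factoring the branch out like this is what makes the `quota && (...)` truthiness bug in the old code visible: a quota of `0` was falsy and silently rendered nothing.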


@@ -93,7 +93,7 @@ const AihubmixPage: FC<{ Options: string[] }> = ({ Options }) => {
const getNewPainting = useCallback(() => {
return {
...DEFAULT_PAINTING,
model: mode === 'aihubmix_image_generate' ? 'gpt-image-1' : 'V_3',
model: mode === 'aihubmix_image_generate' ? 'gemini-3-pro-image-preview' : 'V_3',
id: uuid()
}
}, [mode])
@@ -201,6 +201,74 @@ const AihubmixPage: FC<{ Options: string[] }> = ({ Options }) => {
updatePaintingState({ files: validFiles, urls: validFiles.map((file) => file.name) })
}
return
} else if (painting.model === 'gemini-3-pro-image-preview') {
const geminiUrl = `${aihubmixProvider.apiHost}/gemini/v1beta/models/gemini-3-pro-image-preview:streamGenerateContent`
const geminiHeaders = {
'Content-Type': 'application/json',
'x-goog-api-key': aihubmixProvider.apiKey
}
const requestBody = {
contents: [
{
parts: [
{
text: prompt
}
],
role: 'user'
}
],
generationConfig: {
responseModalities: ['TEXT', 'IMAGE'],
imageConfig: {
aspectRatio: painting.aspectRatio?.replace('ASPECT_', '').replace('_', ':') || '1:1',
imageSize: painting.imageSize || '1K'
}
}
}
logger.silly(`Gemini Request: ${JSON.stringify(requestBody)}`)
const response = await fetch(geminiUrl, {
method: 'POST',
headers: geminiHeaders,
body: JSON.stringify(requestBody)
})
if (!response.ok) {
const errorData = await response.json()
logger.error('Gemini API Error:', errorData)
throw new Error(errorData.error?.message || 'Failed to generate image')
}
const data = await response.json()
logger.silly(`Gemini API Response: ${JSON.stringify(data)}`)
// Handle array response (stream) or single object
const responseItems = Array.isArray(data) ? data : [data]
const base64s: string[] = []
responseItems.forEach((item) => {
item.candidates?.forEach((candidate: any) => {
candidate.content?.parts?.forEach((part: any) => {
if (part.inlineData?.data) {
base64s.push(part.inlineData.data)
}
})
})
})
if (base64s.length > 0) {
const validFiles = await Promise.all(
base64s.map(async (base64: string) => {
return await window.api.file.saveBase64Image(base64)
})
)
await FileManager.addFiles(validFiles)
updatePaintingState({ files: validFiles, urls: validFiles.map((file) => file.name) })
}
return
} else if (painting.model === 'V_3') {
// V3 API uses different endpoint and parameters format
const formData = new FormData()

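The response handling above tolerates both a single response object and an array of stream chunks, walking `candidates[].content.parts[].inlineData.data` for base64 image bytes. A self-contained sketch of that extraction, with minimal assumed types for the Gemini response shape:

```typescript
// Assumed minimal shapes for the parts of the streamGenerateContent
// response this extraction touches.
interface GeminiPart {
  text?: string
  inlineData?: { data: string }
}
interface GeminiChunk {
  candidates?: { content?: { parts?: GeminiPart[] } }[]
}

// Normalize single-object vs. array responses, then collect every
// base64 payload found in inlineData parts, preserving order.
function collectBase64Images(data: GeminiChunk | GeminiChunk[]): string[] {
  const chunks = Array.isArray(data) ? data : [data]
  const base64s: string[] = []
  for (const chunk of chunks) {
    for (const candidate of chunk.candidates ?? []) {
      for (const part of candidate.content?.parts ?? []) {
        if (part.inlineData?.data) base64s.push(part.inlineData.data)
      }
    }
  }
  return base64s
}
```

Text parts are skipped on purpose: with `responseModalities: ['TEXT', 'IMAGE']` the model may interleave prose with the images, and only the image payloads are saved to disk.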

@@ -72,6 +72,7 @@ export const createModeConfigs = (): Record<AihubmixMode, ConfigItem[]> => {
label: 'Gemini',
title: 'Gemini',
options: [
{ label: 'Nano Banana Pro', value: 'gemini-3-pro-image-preview' },
{ label: 'imagen-4.0-preview', value: 'imagen-4.0-generate-preview-06-06' },
{ label: 'imagen-4.0-ultra', value: 'imagen-4.0-ultra-generate-preview-06-06' }
]
@@ -224,7 +225,20 @@ export const createModeConfigs = (): Record<AihubmixMode, ConfigItem[]> => {
{ label: '16:9', value: 'ASPECT_16_9' }
],
initialValue: 'ASPECT_1_1',
condition: (painting) => Boolean(painting.model?.startsWith('imagen-'))
condition: (painting) =>
Boolean(painting.model?.startsWith('imagen-') || painting.model === 'gemini-3-pro-image-preview')
},
{
type: 'select',
key: 'imageSize',
title: 'paintings.image.size',
options: [
{ label: '1K', value: '1K' },
{ label: '2K', value: '2K' },
{ label: '4K', value: '4K' }
],
initialValue: '1K',
condition: (painting) => painting.model === 'gemini-3-pro-image-preview'
},
{
type: 'select',
@@ -398,7 +412,7 @@ export const createModeConfigs = (): Record<AihubmixMode, ConfigItem[]> => {
// Default painting configurations
export const DEFAULT_PAINTING: PaintingAction = {
id: 'aihubmix_1',
model: 'gpt-image-1',
model: 'gemini-3-pro-image-preview',
aspectRatio: 'ASPECT_1_1',
numImages: 1,
styleType: 'AUTO',
@@ -420,5 +434,6 @@ export const DEFAULT_PAINTING: PaintingAction = {
moderation: 'auto',
n: 1,
numberOfImages: 4,
safetyTolerance: 6
safetyTolerance: 6,
imageSize: '1K'
}
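The `ASPECT_*` option values above are Imagen-style identifiers; the request builder in `AihubmixPage` converts them to the `W:H` string Gemini's `imageConfig` expects with a chained `replace`, falling back to `1:1` when unset. A sketch of that conversion:

```typescript
// Convert 'ASPECT_16_9' → '16:9'; undefined falls back to '1:1'.
// Note this relies on replace('_', ':') only touching the FIRST
// underscore, i.e. the one left after stripping the 'ASPECT_' prefix.
function toGeminiAspect(aspect?: string): string {
  return aspect?.replace('ASPECT_', '').replace('_', ':') || '1:1'
}

console.log(toGeminiAspect('ASPECT_16_9')) // '16:9'
console.log(toGeminiAspect(undefined))     // '1:1'
```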


@@ -76,6 +76,17 @@ const ApiOptionsSettings = ({ providerId }: Props) => {
})
},
checked: !provider.apiOptions?.isNotSupportEnableThinking
},
{
key: 'openai_verbosity',
label: t('settings.provider.api.options.verbosity.label'),
tip: t('settings.provider.api.options.verbosity.help'),
onChange: (checked: boolean) => {
updateProviderTransition({
apiOptions: { ...provider.apiOptions, isNotSupportVerbosity: !checked }
})
},
checked: !provider.apiOptions?.isNotSupportVerbosity
}
],
[t, provider, updateProviderTransition]


@@ -1,5 +1,5 @@
import { TopView } from '@renderer/components/TopView'
import { isNotSupportedTextDelta } from '@renderer/config/models'
import { isNotSupportTextDeltaModel } from '@renderer/config/models'
import { useProvider } from '@renderer/hooks/useProvider'
import type { Model, Provider } from '@renderer/types'
import { getDefaultGroupName } from '@renderer/utils'
@@ -58,7 +58,7 @@ const PopupContainer: React.FC<Props> = ({ title, provider, resolve }) => {
group: values.group ?? getDefaultGroupName(id)
}
addModel({ ...model, supported_text_delta: !isNotSupportedTextDelta(model) })
addModel({ ...model, supported_text_delta: !isNotSupportTextDeltaModel(model) })
return true
}


@@ -6,7 +6,7 @@ import {
groupQwenModels,
isEmbeddingModel,
isFunctionCallingModel,
isNotSupportedTextDelta,
isNotSupportTextDeltaModel,
isReasoningModel,
isRerankModel,
isVisionModel,
@@ -136,13 +136,13 @@ const PopupContainer: React.FC<Props> = ({ providerId, resolve }) => {
addModel({
...model,
endpoint_type: endpointTypes.includes('image-generation') ? 'image-generation' : endpointTypes[0],
supported_text_delta: !isNotSupportedTextDelta(model)
supported_text_delta: !isNotSupportTextDeltaModel(model)
})
} else {
NewApiAddModelPopup.show({ title: t('settings.models.add.add_model'), provider, model })
}
} else {
addModel({ ...model, supported_text_delta: !isNotSupportedTextDelta(model) })
addModel({ ...model, supported_text_delta: !isNotSupportTextDeltaModel(model) })
}
}
},


@@ -1,6 +1,6 @@
import { TopView } from '@renderer/components/TopView'
import { endpointTypeOptions } from '@renderer/config/endpointTypes'
import { isNotSupportedTextDelta } from '@renderer/config/models'
import { isNotSupportTextDeltaModel } from '@renderer/config/models'
import { useDynamicLabelWidth } from '@renderer/hooks/useDynamicLabelWidth'
import { useProvider } from '@renderer/hooks/useProvider'
import type { EndpointType, Model, Provider } from '@renderer/types'
@@ -65,7 +65,7 @@ const PopupContainer: React.FC<Props> = ({ title, provider, resolve, model, endp
endpoint_type: isNewApiProvider(provider) ? values.endpointType : undefined
}
addModel({ ...model, supported_text_delta: !isNotSupportedTextDelta(model) })
addModel({ ...model, supported_text_delta: !isNotSupportTextDeltaModel(model) })
return true
}

Some files were not shown because too many files have changed in this diff.