Compare commits


4 Commits

Author SHA1 Message Date
suyao
76d48f9ccb fix: address PR review issues for Volcengine integration
- Fix region field being ignored: pass user-configured region to listFoundationModels and listEndpoints
- Add user notification before silent fallback when API fails
- Throw error on credential corruption instead of returning null
- Remove redundant credentials (accessKeyId, secretAccessKey) from Redux store (they're securely stored via safeStorage)
- Add warnings field to ListModelsResult for partial API failures
- Fix Redux/IPC order: save to secure storage first, then update Redux on success
- Update related tests

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-29 19:39:15 +08:00
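The Redux/IPC ordering fix described in the commit above (secure storage first, Redux second) can be sketched as follows. This is an illustrative sketch, not the actual Cherry Studio code; the function and parameter names are hypothetical:

```typescript
// Hypothetical sketch of "save to secure storage first, then update Redux
// on success". `persist` stands in for an IPC call into Electron's
// safeStorage in the main process; `markConfigured` stands in for a Redux
// dispatch in the renderer.
type Credentials = { accessKeyId: string; secretAccessKey: string }

async function saveCredentials(
  creds: Credentials,
  persist: (c: Credentials) => Promise<boolean>,
  markConfigured: (ok: boolean) => void
): Promise<void> {
  const ok = await persist(creds)
  if (!ok) {
    // Surface the failure instead of leaving the store claiming success.
    throw new Error('Failed to persist credentials to secure storage')
  }
  // The store only records that credentials exist after the secure write
  // has succeeded, and it never holds the raw key material itself.
  markConfigured(true)
}
```

The point of the ordering is that the store can never report credentials as saved when the secure write actually failed.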
suyao
115cd80432 fix: format 2025-11-27 01:33:17 +08:00
suyao
531101742e feat: add project name support for Volcengine integration 2025-11-27 01:29:02 +08:00
suyao
c3c577dff4 feat: add Volcengine integration with settings and API client
- Add Volcengine configuration translations for multiple locales (el-gr, es-es, fr-fr, ja-jp, pt-pt, ru-ru).
- Add Volcengine settings component to manage access key ID, secret access key, and region.
- Create Volcengine service for API interactions, including credential management and model listing.
- Extend OpenAI API client to support Volcengine's signed API for model retrieval.
- Update Redux store to handle Volcengine settings and credentials.
- Implement migration for Volcengine settings in the store.
- Add hooks for accessing and managing Volcengine settings in the application.
2025-11-27 01:15:22 +08:00
134 changed files with 2035 additions and 3490 deletions

View File

@@ -11,7 +11,6 @@
"dist/**",
"out/**",
"local/**",
"tests/**",
".yarn/**",
".gitignore",
"scripts/cloudflare-worker.js",

View File

@@ -12,15 +12,7 @@ This file provides guidance to AI coding assistants when working with code in this repository.
- **Always propose before executing**: Before making any changes, clearly explain your planned approach and wait for explicit user approval to ensure alignment and prevent unwanted modifications.
- **Lint, test, and format before completion**: Coding tasks are only complete after running `yarn lint`, `yarn test`, and `yarn format` successfully.
- **Write conventional commits**: Commit small, focused changes using Conventional Commit messages (e.g., `feat:`, `fix:`, `refactor:`, `docs:`).
## Pull Request Workflow (CRITICAL)
When creating a Pull Request, you MUST:
1. **Read the PR template first**: Always read `.github/pull_request_template.md` before creating the PR
2. **Follow ALL template sections**: Structure the `--body` parameter to include every section from the template
3. **Never skip sections**: Include all sections even if marking them as N/A or "None"
4. **Use proper formatting**: Match the template's markdown structure exactly (headings, checkboxes, code blocks)
- **Follow PR template**: When submitting pull requests, follow the template in `.github/pull_request_template.md` to ensure complete context and documentation.
## Development Commands

View File

@@ -134,108 +134,56 @@ artifactBuildCompleted: scripts/artifact-build-completed.js
releaseInfo:
releaseNotes: |
<!--LANG:en-->
A New Era of Intelligence with Cherry Studio 1.7.1
What's New in v1.7.0-rc.3
Today we're releasing Cherry Studio 1.7.1 — our most ambitious update yet, introducing Agent: autonomous AI that thinks, plans, and acts.
✨ New Features:
- Provider: Added Silicon provider support for Anthropic API compatibility
- Provider: AIHubMix support for nano banana
For years, AI assistants have been reactive — waiting for your commands, responding to your questions. With Agent, we're changing that. Now, AI can truly work alongside you: understanding complex goals, breaking them into steps, and executing them independently.
🐛 Bug Fixes:
- i18n: Clean up translation tags and untranslated strings
- Provider: Fixed Silicon provider code list
- Provider: Fixed Poe API reasoning parameters for GPT-5 and reasoning models
- Provider: Fixed duplicate /v1 in Anthropic API endpoints
- Provider: Fixed Azure provider handling in AI SDK integration
- Models: Added Claude Opus 4.5 pattern to THINKING_TOKEN_MAP
- Models: Improved Gemini reasoning and message handling
- Models: Fixed custom parameters for Gemini models
- Models: Fixed qwen-mt-flash text delta support
- Models: Fixed Groq verbosity setting
- UI: Fixed quota display and quota tips
- UI: Fixed web search button condition
- Settings: Fixed updateAssistantPreset reducer to properly update preset
- Settings: Respect enableMaxTokens setting when maxTokens is not configured
- SDK: Fixed header merging logic in AI SDK
This is what we've been building toward. And it's just the beginning.
🤖 Meet Agent
Imagine having a brilliant colleague who never sleeps. Give Agent a goal — write a report, analyze data, refactor code — and watch it work. It reasons through problems, breaks them into steps, calls the right tools, and adapts when things change.
- **Think → Plan → Act**: From goal to execution, fully autonomous
- **Deep Reasoning**: Multi-turn thinking that solves real problems
- **Tool Mastery**: File operations, web search, code execution, and more
- **Skill Plugins**: Extend with custom commands and capabilities
- **You Stay in Control**: Real-time approval for sensitive actions
- **Full Visibility**: Every thought, every decision, fully transparent
🌐 Expanding Ecosystem
- **New Providers**: HuggingFace, Mistral, CherryIN, AI Gateway, Intel OVMS, Didi MCP
- **New Models**: Claude 4.5 Haiku, DeepSeek v3.2, GLM-4.6, Doubao, Ling series
- **MCP Integration**: Alibaba Cloud, ModelScope, Higress, MCP.so, TokenFlux and more
📚 Smarter Knowledge Base
- **OpenMinerU**: Self-hosted document processing
- **Full-Text Search**: Find anything instantly across your notes
- **Enhanced Tool Selection**: Smarter configuration for better AI assistance
📝 Notes, Reimagined
- Full-text search with highlighted results
- AI-powered smart rename
- Export as image
- Auto-wrap for tables
🖼️ Image & OCR
- Intel OVMS painting capabilities
- Intel OpenVINO NPU-accelerated OCR
🌍 Now in 10+ Languages
- Added German support
- Enhanced internationalization
⚡ Faster & More Polished
- Electron 38 upgrade
- New MCP management interface
- Dozens of UI refinements
❤️ Fully Open Source
Commercial restrictions removed. Cherry Studio now follows standard AGPL v3 — free for teams of any size.
The Agent Era is here. We can't wait to see what you'll create.
⚡ Improvements:
- SDK: Upgraded @anthropic-ai/claude-agent-sdk to 0.1.53
<!--LANG:zh-CN-->
Cherry Studio 1.7.1:开启智能新纪元
v1.7.0-rc.3 更新内容
今天,我们正式发布 Cherry Studio 1.7.1 —— 迄今最具雄心的版本,带来全新的 Agent:能够自主思考、规划和行动的 AI。
✨ 新功能:
- 提供商:新增 Silicon 提供商对 Anthropic API 的兼容性支持
- 提供商:AIHubMix 支持 nano banana
多年来,AI 助手一直是被动的——等待你的指令,回应你的问题。Agent 改变了这一切。现在,AI 能够真正与你并肩工作:理解复杂目标,将其拆解为步骤,并独立执行。
🐛 问题修复:
- 国际化:清理翻译标签和未翻译字符串
- 提供商:修复 Silicon 提供商代码列表
- 提供商:修复 Poe API 对 GPT-5 和推理模型的推理参数
- 提供商:修复 Anthropic API 端点重复 /v1 问题
- 提供商:修复 Azure 提供商在 AI SDK 集成中的处理
- 模型:将 Claude Opus 4.5 添加到 THINKING_TOKEN_MAP
- 模型:改进 Gemini 推理和消息处理
- 模型:修复 Gemini 模型自定义参数
- 模型:修复 qwen-mt-flash text delta 支持
- 模型:修复 Groq verbosity 设置
- 界面:修复配额显示和配额提示
- 界面:修复 Web 搜索按钮条件
- 设置:修复 updateAssistantPreset reducer 正确更新 preset
- 设置:尊重 enableMaxTokens 设置
- SDK:修复 AI SDK 中 header 合并逻辑
这是我们一直在构建的未来。而这,仅仅是开始。
🤖 认识 Agent
想象一位永不疲倦的得力伙伴。给 Agent 一个目标——撰写报告、分析数据、重构代码——然后看它工作。它会推理问题、拆解步骤、调用工具,并在情况变化时灵活应对。
- **思考 → 规划 → 行动**:从目标到执行,全程自主
- **深度推理**:多轮思考,解决真实问题
- **工具大师**:文件操作、网络搜索、代码执行,样样精通
- **技能插件**:自定义命令,无限扩展
- **你掌控全局**:敏感操作,实时审批
- **完全透明**:每一步思考,每一个决策,清晰可见
🌐 生态持续壮大
- **新增服务商**:Hugging Face、Mistral、Perplexity、SophNet、AI Gateway、Cerebras AI
- **新增模型**:Gemini 3、Gemini 3 Pro(支持图像预览)、GPT-5.1、Claude Opus 4.5
- **MCP 集成**:百炼、魔搭、Higress、MCP.so、TokenFlux 等平台
📚 更智能的知识库
- **OpenMinerU**:本地自部署文档处理
- **全文搜索**:笔记内容一搜即达
- **增强工具选择**:更智能的配置,更好的 AI 协助
📝 笔记,焕然一新
- 全文搜索,结果高亮
- AI 智能重命名
- 导出为图片
- 表格自动换行
🖼️ 图像与 OCR
- Intel OVMS 绘图能力
- Intel OpenVINO NPU 加速 OCR
🌍 支持 10+ 种语言
- 新增德语支持
- 全面增强国际化
⚡ 更快、更精致
- 升级 Electron 38
- 新的 MCP 管理界面
- 数十处 UI 细节打磨
❤️ 完全开源
商用限制已移除。Cherry Studio 现遵循标准 AGPL v3 协议——任意规模团队均可自由使用。
Agent 纪元已至。期待你的创造。
⚡ 改进:
- SDK:升级 @anthropic-ai/claude-agent-sdk 到 0.1.53
<!--LANG:END-->

View File

@@ -58,7 +58,6 @@ export default defineConfig([
'dist/**',
'out/**',
'local/**',
'tests/**',
'.yarn/**',
'.gitignore',
'scripts/cloudflare-worker.js',

View File

@@ -1,6 +1,6 @@
{
"name": "CherryStudio",
"version": "1.7.1",
"version": "1.7.0-rc.3",
"private": true,
"description": "A powerful AI assistant for producer.",
"main": "./out/main/index.js",
@@ -62,7 +62,6 @@
"test": "vitest run --silent",
"test:main": "vitest run --project main",
"test:renderer": "vitest run --project renderer",
"test:aicore": "vitest run --project aiCore",
"test:update": "yarn test:renderer --update",
"test:coverage": "vitest run --coverage --silent",
"test:ui": "vitest --ui",
@@ -165,7 +164,7 @@
"@modelcontextprotocol/sdk": "^1.17.5",
"@mozilla/readability": "^0.6.0",
"@notionhq/client": "^2.2.15",
"@openrouter/ai-sdk-provider": "^1.2.8",
"@openrouter/ai-sdk-provider": "^1.2.5",
"@opentelemetry/api": "^1.9.0",
"@opentelemetry/core": "2.0.0",
"@opentelemetry/exporter-trace-otlp-http": "^0.200.0",
@@ -173,7 +172,7 @@
"@opentelemetry/sdk-trace-node": "^2.0.0",
"@opentelemetry/sdk-trace-web": "^2.0.0",
"@opeoginni/github-copilot-openai-compatible": "^0.1.21",
"@playwright/test": "^1.55.1",
"@playwright/test": "^1.52.0",
"@radix-ui/react-context-menu": "^2.2.16",
"@reduxjs/toolkit": "^2.2.5",
"@shikijs/markdown-it": "^3.12.0",
@@ -322,6 +321,7 @@
"p-queue": "^8.1.0",
"pdf-lib": "^1.17.1",
"pdf-parse": "^1.1.1",
"playwright": "^1.55.1",
"proxy-agent": "^6.5.0",
"react": "^19.2.0",
"react-dom": "^19.2.0",

View File

@@ -69,7 +69,6 @@ export interface CherryInProviderSettings {
headers?: HeadersInput
/**
* Optional endpoint type to distinguish different endpoint behaviors.
* "image-generation" is also openai endpoint, but specifically for image generation.
*/
endpointType?: 'openai' | 'openai-response' | 'anthropic' | 'gemini' | 'image-generation' | 'jina-rerank'
}

View File

@@ -3,13 +3,12 @@
* Provides realistic mock responses for all provider types
*/
import type { ModelMessage, Tool } from 'ai'
import { jsonSchema } from 'ai'
import { jsonSchema, type ModelMessage, type Tool } from 'ai'
/**
* Standard test messages for all scenarios
*/
export const testMessages: Record<string, ModelMessage[]> = {
export const testMessages = {
simple: [{ role: 'user' as const, content: 'Hello, how are you?' }],
conversation: [
@@ -46,7 +45,7 @@ export const testMessages: Record<string, ModelMessage[]> = {
{ role: 'assistant' as const, content: '15 * 23 = 345' },
{ role: 'user' as const, content: 'Now divide that by 5' }
]
}
} satisfies Record<string, ModelMessage[]>
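The hunk above moves between an explicit `Record` annotation and a `satisfies` check. The practical difference, sketched here with a hypothetical `Msg` type: an annotation widens the object to its record type and loses the literal keys, while `satisfies` validates the shape but keeps the inferred type.

```typescript
type Msg = { role: 'user' | 'assistant'; content: string }

// Annotated: the type is Record<string, Msg[]>, so any string key is
// allowed by the type checker, even keys that do not exist at runtime.
const annotated: Record<string, Msg[]> = {
  simple: [{ role: 'user', content: 'hi' }]
}

// `satisfies`: the same shape check, but the inferred type keeps the
// literal key `simple`, so a typo like `checked.simpel` is a compile error
// and autocomplete knows the real keys.
const checked = {
  simple: [{ role: 'user', content: 'hi' }]
} satisfies Record<string, Msg[]>
```

For a fixture file like this one, `satisfies` gives callers precise key names while still enforcing that every entry is a valid message list.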
/**
* Standard test tools for tool calling scenarios
@@ -139,17 +138,68 @@ export const testTools: Record<string, Tool> = {
}
}
/**
* Mock streaming chunks for different providers
*/
export const mockStreamingChunks = {
text: [
{ type: 'text-delta' as const, textDelta: 'Hello' },
{ type: 'text-delta' as const, textDelta: ', ' },
{ type: 'text-delta' as const, textDelta: 'this ' },
{ type: 'text-delta' as const, textDelta: 'is ' },
{ type: 'text-delta' as const, textDelta: 'a ' },
{ type: 'text-delta' as const, textDelta: 'test.' }
],
withToolCall: [
{ type: 'text-delta' as const, textDelta: 'Let me check the weather for you.' },
{
type: 'tool-call-delta' as const,
toolCallType: 'function' as const,
toolCallId: 'call_123',
toolName: 'getWeather',
argsTextDelta: '{"location":'
},
{
type: 'tool-call-delta' as const,
toolCallType: 'function' as const,
toolCallId: 'call_123',
toolName: 'getWeather',
argsTextDelta: ' "San Francisco, CA"}'
},
{
type: 'tool-call' as const,
toolCallType: 'function' as const,
toolCallId: 'call_123',
toolName: 'getWeather',
args: { location: 'San Francisco, CA' }
}
],
withFinish: [
{ type: 'text-delta' as const, textDelta: 'Complete response.' },
{
type: 'finish' as const,
finishReason: 'stop' as const,
usage: {
promptTokens: 10,
completionTokens: 5,
totalTokens: 15
}
}
]
}
/**
* Mock complete responses for non-streaming scenarios
* Note: AI SDK v5 uses inputTokens/outputTokens instead of promptTokens/completionTokens
*/
export const mockCompleteResponses = {
simple: {
text: 'This is a simple response.',
finishReason: 'stop' as const,
usage: {
inputTokens: 15,
outputTokens: 8,
promptTokens: 15,
completionTokens: 8,
totalTokens: 23
}
},
@@ -165,8 +215,8 @@ export const mockCompleteResponses = {
],
finishReason: 'tool-calls' as const,
usage: {
inputTokens: 25,
outputTokens: 12,
promptTokens: 25,
completionTokens: 12,
totalTokens: 37
}
},
@@ -175,15 +225,14 @@ export const mockCompleteResponses = {
text: 'Response with warnings.',
finishReason: 'stop' as const,
usage: {
inputTokens: 10,
outputTokens: 5,
promptTokens: 10,
completionTokens: 5,
totalTokens: 15
},
warnings: [
{
type: 'unsupported-setting' as const,
setting: 'temperature',
details: 'Temperature parameter not supported for this model'
message: 'Temperature parameter not supported for this model'
}
]
}
@@ -236,3 +285,47 @@ export const mockImageResponses = {
warnings: []
}
}
/**
* Mock error responses
*/
export const mockErrors = {
invalidApiKey: {
name: 'APIError',
message: 'Invalid API key provided',
statusCode: 401
},
rateLimitExceeded: {
name: 'RateLimitError',
message: 'Rate limit exceeded. Please try again later.',
statusCode: 429,
headers: {
'retry-after': '60'
}
},
modelNotFound: {
name: 'ModelNotFoundError',
message: 'The requested model was not found',
statusCode: 404
},
contextLengthExceeded: {
name: 'ContextLengthError',
message: "This model's maximum context length is 4096 tokens",
statusCode: 400
},
timeout: {
name: 'TimeoutError',
message: 'Request timed out after 30000ms',
code: 'ETIMEDOUT'
},
networkError: {
name: 'NetworkError',
message: 'Network connection failed',
code: 'ECONNREFUSED'
}
}

View File

@@ -1,35 +0,0 @@
/**
* Mock for @cherrystudio/ai-sdk-provider
* This mock is used in tests to avoid importing the actual package
*/
export type CherryInProviderSettings = {
apiKey?: string
baseURL?: string
}
// oxlint-disable-next-line no-unused-vars
export const createCherryIn = (_options?: CherryInProviderSettings) => ({
// oxlint-disable-next-line no-unused-vars
languageModel: (_modelId: string) => ({
specificationVersion: 'v1',
provider: 'cherryin',
modelId: 'mock-model',
doGenerate: async () => ({ text: 'mock response' }),
doStream: async () => ({ stream: (async function* () {})() })
}),
// oxlint-disable-next-line no-unused-vars
chat: (_modelId: string) => ({
specificationVersion: 'v1',
provider: 'cherryin-chat',
modelId: 'mock-model',
doGenerate: async () => ({ text: 'mock response' }),
doStream: async () => ({ stream: (async function* () {})() })
}),
// oxlint-disable-next-line no-unused-vars
textEmbeddingModel: (_modelId: string) => ({
specificationVersion: 'v1',
provider: 'cherryin',
modelId: 'mock-embedding-model'
})
})

View File

@@ -1,9 +0,0 @@
/**
* Vitest Setup File
* Global test configuration and mocks for @cherrystudio/ai-core package
*/
// Mock Vite SSR helper to avoid Node environment errors
;(globalThis as any).__vite_ssr_exportName__ = (_name: string, value: any) => value
// Note: @cherrystudio/ai-sdk-provider is mocked via alias in vitest.config.ts

View File

@@ -1,109 +0,0 @@
import { describe, expect, it } from 'vitest'
import { createOpenAIOptions, createOpenRouterOptions, mergeProviderOptions } from '../factory'
describe('mergeProviderOptions', () => {
it('deep merges provider options for the same provider', () => {
const reasoningOptions = createOpenRouterOptions({
reasoning: {
enabled: true,
effort: 'medium'
}
})
const webSearchOptions = createOpenRouterOptions({
plugins: [{ id: 'web', max_results: 5 }]
})
const merged = mergeProviderOptions(reasoningOptions, webSearchOptions)
expect(merged.openrouter).toEqual({
reasoning: {
enabled: true,
effort: 'medium'
},
plugins: [{ id: 'web', max_results: 5 }]
})
})
it('preserves options from other providers while merging', () => {
const openRouter = createOpenRouterOptions({
reasoning: { enabled: true }
})
const openAI = createOpenAIOptions({
reasoningEffort: 'low'
})
const merged = mergeProviderOptions(openRouter, openAI)
expect(merged.openrouter).toEqual({ reasoning: { enabled: true } })
expect(merged.openai).toEqual({ reasoningEffort: 'low' })
})
it('overwrites primitive values with later values', () => {
const first = createOpenAIOptions({
reasoningEffort: 'low',
user: 'user-123'
})
const second = createOpenAIOptions({
reasoningEffort: 'high',
maxToolCalls: 5
})
const merged = mergeProviderOptions(first, second)
expect(merged.openai).toEqual({
reasoningEffort: 'high', // overwritten by second
user: 'user-123', // preserved from first
maxToolCalls: 5 // added from second
})
})
it('overwrites arrays with later values instead of merging', () => {
const first = createOpenRouterOptions({
models: ['gpt-4', 'gpt-3.5-turbo']
})
const second = createOpenRouterOptions({
models: ['claude-3-opus', 'claude-3-sonnet']
})
const merged = mergeProviderOptions(first, second)
// Array is completely replaced, not merged
expect(merged.openrouter?.models).toEqual(['claude-3-opus', 'claude-3-sonnet'])
})
it('deeply merges nested objects while overwriting primitives', () => {
const first = createOpenRouterOptions({
reasoning: {
enabled: true,
effort: 'low'
},
user: 'user-123'
})
const second = createOpenRouterOptions({
reasoning: {
effort: 'high',
max_tokens: 500
},
user: 'user-456'
})
const merged = mergeProviderOptions(first, second)
expect(merged.openrouter).toEqual({
reasoning: {
enabled: true, // preserved from first
effort: 'high', // overwritten by second
max_tokens: 500 // added from second
},
user: 'user-456' // overwritten by second
})
})
it('replaces arrays instead of merging them', () => {
const first = createOpenRouterOptions({ plugins: [{ id: 'old' }] })
const second = createOpenRouterOptions({ plugins: [{ id: 'new' }] })
const merged = mergeProviderOptions(first, second)
// @ts-expect-error type-check for openrouter options is skipped. see function signature of createOpenRouterOptions
expect(merged.openrouter?.plugins).toEqual([{ id: 'new' }])
})
})

View File

@@ -26,65 +26,13 @@ export function createGenericProviderOptions<T extends string>(
return { [provider]: options } as Record<T, Record<string, any>>
}
type PlainObject = Record<string, any>
const isPlainObject = (value: unknown): value is PlainObject => {
return typeof value === 'object' && value !== null && !Array.isArray(value)
}
function deepMergeObjects<T extends PlainObject>(target: T, source: PlainObject): T {
const result: PlainObject = { ...target }
Object.entries(source).forEach(([key, value]) => {
if (isPlainObject(value) && isPlainObject(result[key])) {
result[key] = deepMergeObjects(result[key], value)
} else {
result[key] = value
}
})
return result as T
}
/**
* Deep-merge multiple provider-specific options.
* Nested objects are recursively merged; primitive values are overwritten.
*
* When the same key appears in multiple options:
* - If both values are plain objects: they are deeply merged (recursive merge)
* - If values are primitives/arrays: the later value overwrites the earlier one
*
* @example
* mergeProviderOptions(
* { openrouter: { reasoning: { enabled: true, effort: 'low' }, user: 'user-123' } },
* { openrouter: { reasoning: { effort: 'high', max_tokens: 500 }, models: ['gpt-4'] } }
* )
* // Result: {
* // openrouter: {
* // reasoning: { enabled: true, effort: 'high', max_tokens: 500 },
* // user: 'user-123',
* // models: ['gpt-4']
* // }
* // }
*
* @param optionsMap Objects containing options for multiple providers
* @returns Fully merged TypedProviderOptions
* 合并多个供应商的options
* @param optionsMap 包含多个供应商选项的对象
* @returns 合并后的TypedProviderOptions
*/
export function mergeProviderOptions(...optionsMap: Partial<TypedProviderOptions>[]): TypedProviderOptions {
return optionsMap.reduce<TypedProviderOptions>((acc, options) => {
if (!options) {
return acc
}
Object.entries(options).forEach(([providerId, providerOptions]) => {
if (!providerOptions) {
return
}
if (acc[providerId]) {
acc[providerId] = deepMergeObjects(acc[providerId] as PlainObject, providerOptions as PlainObject)
} else {
acc[providerId] = providerOptions as any
}
})
return acc
}, {} as TypedProviderOptions)
return Object.assign({}, ...optionsMap)
}
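The hunk above swaps between a recursive `deepMergeObjects` and a plain `Object.assign`. The observable difference is that `Object.assign` merges only at the top level, so a later options object replaces an earlier provider entry wholesale. A minimal sketch of both behaviors (the option shapes are illustrative):

```typescript
const first = { openrouter: { reasoning: { enabled: true }, user: 'user-123' } }
const second = { openrouter: { plugins: [{ id: 'web' }] } }

// Shallow: second.openrouter replaces first.openrouter entirely, so
// `reasoning` and `user` are silently dropped.
const shallow = Object.assign({}, first, second)

// Deep: recurse into plain objects, overwrite primitives and arrays.
function deepMerge(a: Record<string, any>, b: Record<string, any>): Record<string, any> {
  const out: Record<string, any> = { ...a }
  for (const [k, v] of Object.entries(b)) {
    const isObj = (x: unknown) => typeof x === 'object' && x !== null && !Array.isArray(x)
    out[k] = isObj(v) && isObj(out[k]) ? deepMerge(out[k], v) : v
  }
  return out
}

// Deep: `reasoning`, `user`, and `plugins` all survive under `openrouter`.
const deep = deepMerge(first, second)
```

Whether the shallow form is safe depends on whether two call sites ever contribute options for the same provider key, which is exactly the case the removed tests exercised.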
/**

View File

@@ -19,20 +19,15 @@ describe('Provider Schemas', () => {
expect(Array.isArray(baseProviders)).toBe(true)
expect(baseProviders.length).toBeGreaterThan(0)
// These are the actual base providers defined in schemas.ts
const expectedIds = [
'openai',
'openai-chat',
'openai-responses',
'openai-compatible',
'anthropic',
'google',
'xai',
'azure',
'azure-responses',
'deepseek',
'openrouter',
'cherryin',
'cherryin-chat'
'deepseek'
]
const actualIds = baseProviders.map((p) => p.id)
expectedIds.forEach((id) => {

View File

@@ -232,13 +232,11 @@ describe('RuntimeExecutor.generateImage', () => {
expect(pluginCallOrder).toEqual(['onRequestStart', 'transformParams', 'transformResult', 'onRequestEnd'])
// transformParams receives params without model (model is handled separately)
// and context with core fields + dynamic fields (requestId, startTime, etc.)
expect(testPlugin.transformParams).toHaveBeenCalledWith(
expect.objectContaining({ prompt: 'A test image' }),
{ prompt: 'A test image' },
expect.objectContaining({
providerId: 'openai',
model: 'dall-e-3'
modelId: 'dall-e-3'
})
)
@@ -275,12 +273,11 @@ describe('RuntimeExecutor.generateImage', () => {
await executorWithPlugin.generateImage({ model: 'dall-e-3', prompt: 'A test image' })
// resolveModel receives model id and context with core fields
expect(modelResolutionPlugin.resolveModel).toHaveBeenCalledWith(
'dall-e-3',
expect.objectContaining({
providerId: 'openai',
model: 'dall-e-3'
modelId: 'dall-e-3'
})
)
@@ -342,11 +339,12 @@ describe('RuntimeExecutor.generateImage', () => {
.generateImage({ model: 'invalid-model', prompt: 'A test image' })
.catch((error) => error)
// Error is thrown from pluginEngine directly as ImageModelResolutionError
expect(thrownError).toBeInstanceOf(ImageModelResolutionError)
expect(thrownError.message).toContain('Failed to resolve image model: invalid-model')
expect(thrownError).toBeInstanceOf(ImageGenerationError)
expect(thrownError.message).toContain('Failed to generate image:')
expect(thrownError.providerId).toBe('openai')
expect(thrownError.modelId).toBe('invalid-model')
expect(thrownError.cause).toBeInstanceOf(ImageModelResolutionError)
expect(thrownError.cause.message).toContain('Failed to resolve image model: invalid-model')
})
it('should handle ImageModelResolutionError without provider', async () => {
@@ -364,9 +362,8 @@ describe('RuntimeExecutor.generateImage', () => {
const apiError = new Error('API request failed')
vi.mocked(aiGenerateImage).mockRejectedValue(apiError)
// Error propagates directly from pluginEngine without wrapping
await expect(executor.generateImage({ model: 'dall-e-3', prompt: 'A test image' })).rejects.toThrow(
'API request failed'
'Failed to generate image:'
)
})
@@ -379,9 +376,8 @@ describe('RuntimeExecutor.generateImage', () => {
vi.mocked(aiGenerateImage).mockRejectedValue(noImageError)
vi.mocked(NoImageGeneratedError.isInstance).mockReturnValue(true)
// Error propagates directly from pluginEngine
await expect(executor.generateImage({ model: 'dall-e-3', prompt: 'A test image' })).rejects.toThrow(
'No image generated'
'Failed to generate image:'
)
})
@@ -402,17 +398,15 @@ describe('RuntimeExecutor.generateImage', () => {
[errorPlugin]
)
// Error propagates directly from pluginEngine
await expect(executorWithPlugin.generateImage({ model: 'dall-e-3', prompt: 'A test image' })).rejects.toThrow(
'Generation failed'
'Failed to generate image:'
)
// onError receives the original error and context with core fields
expect(errorPlugin.onError).toHaveBeenCalledWith(
error,
expect.objectContaining({
providerId: 'openai',
model: 'dall-e-3'
modelId: 'dall-e-3'
})
)
})
@@ -425,10 +419,9 @@ describe('RuntimeExecutor.generateImage', () => {
const abortController = new AbortController()
setTimeout(() => abortController.abort(), 10)
// Error propagates directly from pluginEngine
await expect(
executor.generateImage({ model: 'dall-e-3', prompt: 'A test image', abortSignal: abortController.signal })
).rejects.toThrow('Operation was aborted')
).rejects.toThrow('Failed to generate image:')
})
})

View File

@@ -17,14 +17,10 @@ import type { AiPlugin } from '../../plugins'
import { globalRegistryManagement } from '../../providers/RegistryManagement'
import { RuntimeExecutor } from '../executor'
// Mock AI SDK - use importOriginal to keep jsonSchema and other non-mocked exports
vi.mock('ai', async (importOriginal) => {
const actual = (await importOriginal()) as Record<string, unknown>
return {
...actual,
generateText: vi.fn()
}
})
// Mock AI SDK
vi.mock('ai', () => ({
generateText: vi.fn()
}))
vi.mock('../../providers/RegistryManagement', () => ({
globalRegistryManagement: {
@@ -413,12 +409,11 @@ describe('RuntimeExecutor.generateText', () => {
})
).rejects.toThrow('Generation failed')
// onError receives the original error and context with core fields
expect(errorPlugin.onError).toHaveBeenCalledWith(
error,
expect.objectContaining({
providerId: 'openai',
model: 'gpt-4'
modelId: 'gpt-4'
})
)
})

View File

@@ -11,14 +11,10 @@ import type { AiPlugin } from '../../plugins'
import { globalRegistryManagement } from '../../providers/RegistryManagement'
import { RuntimeExecutor } from '../executor'
// Mock AI SDK - use importOriginal to keep jsonSchema and other non-mocked exports
vi.mock('ai', async (importOriginal) => {
const actual = (await importOriginal()) as Record<string, unknown>
return {
...actual,
streamText: vi.fn()
}
})
// Mock AI SDK
vi.mock('ai', () => ({
streamText: vi.fn()
}))
vi.mock('../../providers/RegistryManagement', () => ({
globalRegistryManagement: {
@@ -157,7 +153,7 @@ describe('RuntimeExecutor.streamText', () => {
describe('Max Tokens Parameter', () => {
const maxTokensValues = [10, 50, 100, 500, 1000, 2000, 4000]
it.each(maxTokensValues)('should support maxOutputTokens=%s', async (maxOutputTokens) => {
it.each(maxTokensValues)('should support maxTokens=%s', async (maxTokens) => {
const mockStream = {
textStream: (async function* () {
yield 'Response'
@@ -172,13 +168,12 @@ describe('RuntimeExecutor.streamText', () => {
await executor.streamText({
model: 'gpt-4',
messages: testMessages.simple,
maxOutputTokens
maxOutputTokens: maxTokens
})
// Parameters are passed through without transformation
expect(streamText).toHaveBeenCalledWith(
expect.objectContaining({
maxOutputTokens
maxTokens
})
)
})
@@ -518,12 +513,11 @@ describe('RuntimeExecutor.streamText', () => {
})
).rejects.toThrow('Stream error')
// onError receives the original error and context with core fields
expect(errorPlugin.onError).toHaveBeenCalledWith(
error,
expect.objectContaining({
providerId: 'openai',
model: 'gpt-4'
modelId: 'gpt-4'
})
)
})

View File

@@ -1,20 +1,12 @@
import path from 'node:path'
import { fileURLToPath } from 'node:url'
import { defineConfig } from 'vitest/config'
const __dirname = path.dirname(fileURLToPath(import.meta.url))
export default defineConfig({
test: {
globals: true,
setupFiles: [path.resolve(__dirname, './src/__tests__/setup.ts')]
globals: true
},
resolve: {
alias: {
'@': path.resolve(__dirname, './src'),
// Mock external packages that may not be available in test environment
'@cherrystudio/ai-sdk-provider': path.resolve(__dirname, './src/__tests__/mocks/ai-sdk-provider.ts')
'@': './src'
}
},
esbuild: {

View File

@@ -374,5 +374,13 @@ export enum IpcChannel {
WebSocket_Stop = 'webSocket:stop',
WebSocket_Status = 'webSocket:status',
WebSocket_SendFile = 'webSocket:send-file',
WebSocket_GetAllCandidates = 'webSocket:get-all-candidates'
WebSocket_GetAllCandidates = 'webSocket:get-all-candidates',
// Volcengine
Volcengine_SaveCredentials = 'volcengine:save-credentials',
Volcengine_HasCredentials = 'volcengine:has-credentials',
Volcengine_ClearCredentials = 'volcengine:clear-credentials',
Volcengine_ListModels = 'volcengine:list-models',
Volcengine_GetAuthHeaders = 'volcengine:get-auth-headers',
Volcengine_MakeRequest = 'volcengine:make-request'
}

View File

@@ -1,64 +1,42 @@
import { defineConfig } from '@playwright/test'
import { defineConfig, devices } from '@playwright/test'
/**
* Playwright configuration for Electron e2e testing.
* See https://playwright.dev/docs/test-configuration
* See https://playwright.dev/docs/test-configuration.
*/
export default defineConfig({
// Look for test files in the specs directory
testDir: './tests/e2e/specs',
// Global timeout for each test
timeout: 60000,
// Assertion timeout
expect: {
timeout: 10000
},
// Electron apps should run tests sequentially to avoid conflicts
fullyParallel: false,
workers: 1,
// Fail the build on CI if you accidentally left test.only in the source code
// Look for test files, relative to this configuration file.
testDir: './tests/e2e',
/* Run tests in files in parallel */
fullyParallel: true,
/* Fail the build on CI if you accidentally left test.only in the source code. */
forbidOnly: !!process.env.CI,
// Retry on CI only
/* Retry on CI only */
retries: process.env.CI ? 2 : 0,
// Reporter configuration
reporter: [['html', { outputFolder: 'playwright-report' }], ['list']],
// Global setup and teardown
globalSetup: './tests/e2e/global-setup.ts',
globalTeardown: './tests/e2e/global-teardown.ts',
// Output directory for test artifacts
outputDir: './test-results',
// Shared settings for all tests
/* Opt out of parallel tests on CI. */
workers: process.env.CI ? 1 : undefined,
/* Reporter to use. See https://playwright.dev/docs/test-reporters */
reporter: 'html',
/* Shared settings for all the projects below. See https://playwright.dev/docs/api/class-testoptions. */
use: {
// Collect trace when retrying the failed test
trace: 'retain-on-failure',
/* Base URL to use in actions like `await page.goto('/')`. */
// baseURL: 'http://localhost:3000',
// Take screenshot only on failure
screenshot: 'only-on-failure',
// Record video only on failure
video: 'retain-on-failure',
// Action timeout
actionTimeout: 15000,
// Navigation timeout
navigationTimeout: 30000
/* Collect trace when retrying the failed test. See https://playwright.dev/docs/trace-viewer */
trace: 'on-first-retry'
},
// Single project for Electron testing
/* Configure projects for major browsers */
projects: [
{
name: 'electron',
testMatch: '**/*.spec.ts'
name: 'chromium',
use: { ...devices['Desktop Chrome'] }
}
]
/* Run your local dev server before starting the tests */
// webServer: {
// command: 'npm run start',
// url: 'http://localhost:3000',
// reuseExistingServer: !process.env.CI,
// },
})

View File

@@ -73,6 +73,7 @@ import {
import storeSyncService from './services/StoreSyncService'
import { themeService } from './services/ThemeService'
import VertexAIService from './services/VertexAIService'
import VolcengineService from './services/VolcengineService'
import WebSocketService from './services/WebSocketService'
import { setOpenLinkExternal } from './services/WebviewService'
import { windowService } from './services/WindowService'
@@ -1077,6 +1078,14 @@ export function registerIpc(mainWindow: BrowserWindow, app: Electron.App) {
ipcMain.handle(IpcChannel.WebSocket_SendFile, WebSocketService.sendFile)
ipcMain.handle(IpcChannel.WebSocket_GetAllCandidates, WebSocketService.getAllCandidates)
// Volcengine
ipcMain.handle(IpcChannel.Volcengine_SaveCredentials, VolcengineService.saveCredentials)
ipcMain.handle(IpcChannel.Volcengine_HasCredentials, VolcengineService.hasCredentials)
ipcMain.handle(IpcChannel.Volcengine_ClearCredentials, VolcengineService.clearCredentials)
ipcMain.handle(IpcChannel.Volcengine_ListModels, VolcengineService.listModels)
ipcMain.handle(IpcChannel.Volcengine_GetAuthHeaders, VolcengineService.getAuthHeaders)
ipcMain.handle(IpcChannel.Volcengine_MakeRequest, VolcengineService.makeRequest)
ipcMain.handle(IpcChannel.APP_CrashRenderProcess, () => {
mainWindow.webContents.forcefullyCrashRenderer()
})

View File

@@ -548,17 +548,6 @@ class CodeToolsService {
logger.debug(`Environment variables:`, Object.keys(env))
logger.debug(`Options:`, options)
// Validate directory exists before proceeding
if (!directory || !fs.existsSync(directory)) {
const errorMessage = `Directory does not exist: ${directory}`
logger.error(errorMessage)
return {
success: false,
message: errorMessage,
command: ''
}
}
const packageName = await this.getPackageName(cliTool)
const bunPath = await this.getBunPath()
const executableName = await this.getCliExecutableName(cliTool)
@@ -720,7 +709,6 @@ class CodeToolsService {
// Build bat file content, including debug information
const batContent = [
'@echo off',
'chcp 65001 >nul 2>&1', // Switch to UTF-8 code page for international path support
`title ${cliTool} - Cherry Studio`, // Set window title in bat file
'echo ================================================',
'echo Cherry Studio CLI Tool Launcher',

View File

@@ -620,7 +620,7 @@ class McpService {
tools.map((tool: SDKTool) => {
const serverTool: MCPTool = {
...tool,
id: buildFunctionCallToolName(server.name, tool.name, server.id),
id: buildFunctionCallToolName(server.name, tool.name),
serverId: server.id,
serverName: server.name,
type: 'mcp'

View File

@@ -0,0 +1,732 @@
import { loggerService } from '@logger'
import crypto from 'crypto'
import { app, net, safeStorage } from 'electron'
import fs from 'fs'
import path from 'path'
import * as z from 'zod'
import { getConfigDir } from '../utils/file'
const logger = loggerService.withContext('VolcengineService')
// Configuration constants
const CONFIG = {
ALGORITHM: 'HMAC-SHA256',
REQUEST_TYPE: 'request',
DEFAULT_REGION: 'cn-beijing',
SERVICE_NAME: 'ark',
DEFAULT_HEADERS: {
'content-type': 'application/json',
accept: 'application/json'
},
API_URLS: {
ARK_HOST: 'open.volcengineapi.com'
},
CREDENTIALS_FILE_NAME: '.volcengine_credentials',
API_VERSION: '2024-01-01',
DEFAULT_PAGE_SIZE: 100
} as const
// Request schemas
const ListFoundationModelsRequestSchema = z.object({
PageNumber: z.optional(z.number()),
PageSize: z.optional(z.number())
})
const ListEndpointsRequestSchema = z.object({
ProjectName: z.optional(z.string()),
PageNumber: z.optional(z.number()),
PageSize: z.optional(z.number())
})
// Response schemas - only keep fields needed for model list
const FoundationModelItemSchema = z.object({
Name: z.string(),
DisplayName: z.optional(z.string()),
Description: z.optional(z.string())
})
const EndpointItemSchema = z.object({
Id: z.string(),
Name: z.optional(z.string()),
Description: z.optional(z.string()),
ModelReference: z.optional(
z.object({
FoundationModel: z.optional(
z.object({
Name: z.optional(z.string()),
ModelVersion: z.optional(z.string())
})
),
CustomModelId: z.optional(z.string())
})
)
})
const ListFoundationModelsResponseSchema = z.object({
Result: z.object({
TotalCount: z.number(),
Items: z.array(FoundationModelItemSchema)
})
})
const ListEndpointsResponseSchema = z.object({
Result: z.object({
TotalCount: z.number(),
Items: z.array(EndpointItemSchema)
})
})
// Infer types from schemas
type ListFoundationModelsRequest = z.infer<typeof ListFoundationModelsRequestSchema>
type ListEndpointsRequest = z.infer<typeof ListEndpointsRequestSchema>
type ListFoundationModelsResponse = z.infer<typeof ListFoundationModelsResponseSchema>
type ListEndpointsResponse = z.infer<typeof ListEndpointsResponseSchema>
// ============= Internal Type Definitions =============
interface VolcengineCredentials {
accessKeyId: string
secretAccessKey: string
}
interface SignedRequestParams {
method: 'GET' | 'POST'
host: string
path: string
query: Record<string, string>
headers: Record<string, string>
body?: string
service: string
region: string
}
interface SignedHeaders {
Authorization: string
'X-Date': string
'X-Content-Sha256': string
Host: string
}
interface ModelInfo {
id: string
name: string
description?: string
created?: number
}
interface ListModelsResult {
models: ModelInfo[]
total?: number
warnings?: string[]
}
// Custom error class
class VolcengineServiceError extends Error {
constructor(
message: string,
public readonly cause?: unknown
) {
super(message)
this.name = 'VolcengineServiceError'
}
}
/**
* Volcengine API Signing Service
*
* Implements HMAC-SHA256 signing algorithm for Volcengine API authentication.
* Securely stores credentials using Electron's safeStorage.
*/
class VolcengineService {
private readonly credentialsFilePath: string
constructor() {
this.credentialsFilePath = this.getCredentialsFilePath()
}
/**
* Get the path for storing encrypted credentials
*/
private getCredentialsFilePath(): string {
const oldPath = path.join(app.getPath('userData'), CONFIG.CREDENTIALS_FILE_NAME)
if (fs.existsSync(oldPath)) {
return oldPath
}
return path.join(getConfigDir(), CONFIG.CREDENTIALS_FILE_NAME)
}
// ============= Cryptographic Helper Methods =============
/**
* Calculate SHA256 hash of data and return hex encoded string
*/
private sha256Hash(data: string | Buffer): string {
return crypto.createHash('sha256').update(data).digest('hex')
}
/**
* Calculate HMAC-SHA256 and return buffer
*/
private hmacSha256(key: Buffer | string, data: string): Buffer {
return crypto.createHmac('sha256', key).update(data, 'utf8').digest()
}
/**
* Calculate HMAC-SHA256 and return hex encoded string
*/
private hmacSha256Hex(key: Buffer | string, data: string): string {
return crypto.createHmac('sha256', key).update(data, 'utf8').digest('hex')
}
/**
* URL encode according to RFC3986
*/
private uriEncode(str: string, encodeSlash: boolean = true): string {
if (!str) return ''
return str
.split('')
.map((char) => {
if (
(char >= 'A' && char <= 'Z') ||
(char >= 'a' && char <= 'z') ||
(char >= '0' && char <= '9') ||
char === '_' ||
char === '-' ||
char === '~' ||
char === '.'
) {
return char
}
if (char === '/' && !encodeSlash) {
return char
}
return encodeURIComponent(char)
})
.join('')
}
// ============= Signing Implementation =============
/**
* Get current UTC time in ISO8601 format (YYYYMMDD'T'HHMMSS'Z')
*/
private getIso8601DateTime(): string {
const now = new Date()
return now
.toISOString()
.replace(/[-:]/g, '')
.replace(/\.\d{3}/, '')
}
/**
* Get date portion from datetime (YYYYMMDD)
*/
private getDateFromDateTime(dateTime: string): string {
return dateTime.substring(0, 8)
}
/**
* Build canonical query string from query parameters
*/
private buildCanonicalQueryString(query: Record<string, string>): string {
if (!query || Object.keys(query).length === 0) {
return ''
}
return Object.keys(query)
.sort()
.map((key) => `${this.uriEncode(key)}=${this.uriEncode(query[key])}`)
.join('&')
}
/**
* Build canonical headers string
*/
private buildCanonicalHeaders(headers: Record<string, string>): {
canonicalHeaders: string
signedHeaders: string
} {
const sortedKeys = Object.keys(headers)
.map((k) => k.toLowerCase())
.sort()
const canonicalHeaders = sortedKeys.map((key) => `${key}:${headers[key]?.trim() || ''}`).join('\n') + '\n'
const signedHeaders = sortedKeys.join(';')
return { canonicalHeaders, signedHeaders }
}
/**
* Create the signing key through a series of HMAC operations
*
* kSecret = SecretAccessKey
* kDate = HMAC(kSecret, Date)
* kRegion = HMAC(kDate, Region)
* kService = HMAC(kRegion, Service)
* kSigning = HMAC(kService, "request")
*/
private deriveSigningKey(secretKey: string, date: string, region: string, service: string): Buffer {
const kDate = this.hmacSha256(secretKey, date)
const kRegion = this.hmacSha256(kDate, region)
const kService = this.hmacSha256(kRegion, service)
const kSigning = this.hmacSha256(kService, CONFIG.REQUEST_TYPE)
return kSigning
}
/**
* Create canonical request string
*
* CanonicalRequest =
* HTTPRequestMethod + '\n' +
* CanonicalURI + '\n' +
* CanonicalQueryString + '\n' +
* CanonicalHeaders + '\n' +
* SignedHeaders + '\n' +
* HexEncode(Hash(RequestPayload))
*/
private createCanonicalRequest(
method: string,
canonicalUri: string,
canonicalQueryString: string,
canonicalHeaders: string,
signedHeaders: string,
payloadHash: string
): string {
return [method, canonicalUri, canonicalQueryString, canonicalHeaders, signedHeaders, payloadHash].join('\n')
}
/**
* Create string to sign
*
* StringToSign =
* Algorithm + '\n' +
* RequestDateTime + '\n' +
* CredentialScope + '\n' +
* HexEncode(Hash(CanonicalRequest))
*/
private createStringToSign(dateTime: string, credentialScope: string, canonicalRequest: string): string {
const hashedCanonicalRequest = this.sha256Hash(canonicalRequest)
return [CONFIG.ALGORITHM, dateTime, credentialScope, hashedCanonicalRequest].join('\n')
}
/**
* Generate signature for the request
*/
private generateSignature(params: SignedRequestParams, credentials: VolcengineCredentials): SignedHeaders {
const { method, host, path: requestPath, query, body, service, region } = params
// Step 1: Prepare datetime
const dateTime = this.getIso8601DateTime()
const date = this.getDateFromDateTime(dateTime)
// Step 2: Calculate payload hash
const payloadHash = this.sha256Hash(body || '')
// Step 3: Prepare headers for signing
const headersToSign: Record<string, string> = {
host: host,
'x-date': dateTime,
'x-content-sha256': payloadHash,
'content-type': 'application/json'
}
// Step 4: Build canonical components
const canonicalUri = this.uriEncode(requestPath, false) || '/'
const canonicalQueryString = this.buildCanonicalQueryString(query)
const { canonicalHeaders, signedHeaders } = this.buildCanonicalHeaders(headersToSign)
// Step 5: Create canonical request
const canonicalRequest = this.createCanonicalRequest(
method.toUpperCase(),
canonicalUri,
canonicalQueryString,
canonicalHeaders,
signedHeaders,
payloadHash
)
// Step 6: Create credential scope and string to sign
const credentialScope = `${date}/${region}/${service}/${CONFIG.REQUEST_TYPE}`
const stringToSign = this.createStringToSign(dateTime, credentialScope, canonicalRequest)
// Step 7: Calculate signature
const signingKey = this.deriveSigningKey(credentials.secretAccessKey, date, region, service)
const signature = this.hmacSha256Hex(signingKey, stringToSign)
// Step 8: Build authorization header
const authorization = `${CONFIG.ALGORITHM} Credential=${credentials.accessKeyId}/${credentialScope}, SignedHeaders=${signedHeaders}, Signature=${signature}`
return {
Authorization: authorization,
'X-Date': dateTime,
'X-Content-Sha256': payloadHash,
Host: host
}
}
// ============= Credential Management =============
/**
* Save credentials securely using Electron's safeStorage
*/
public saveCredentials = async (
_: Electron.IpcMainInvokeEvent,
accessKeyId: string,
secretAccessKey: string
): Promise<void> => {
try {
if (!accessKeyId || !secretAccessKey) {
throw new VolcengineServiceError('Access Key ID and Secret Access Key are required')
}
const credentials: VolcengineCredentials = { accessKeyId, secretAccessKey }
const credentialsJson = JSON.stringify(credentials)
const encryptedData = safeStorage.encryptString(credentialsJson)
// Ensure directory exists
const dir = path.dirname(this.credentialsFilePath)
if (!fs.existsSync(dir)) {
await fs.promises.mkdir(dir, { recursive: true })
}
await fs.promises.writeFile(this.credentialsFilePath, encryptedData)
logger.info('Volcengine credentials saved successfully')
} catch (error) {
logger.error('Failed to save Volcengine credentials:', error as Error)
throw new VolcengineServiceError('Failed to save credentials', error)
}
}
/**
* Load credentials from encrypted storage
* @throws VolcengineServiceError if credentials file exists but is corrupted
*/
private async loadCredentials(): Promise<VolcengineCredentials | null> {
if (!fs.existsSync(this.credentialsFilePath)) {
return null
}
try {
const encryptedData = await fs.promises.readFile(this.credentialsFilePath)
const decryptedJson = safeStorage.decryptString(Buffer.from(encryptedData))
return JSON.parse(decryptedJson) as VolcengineCredentials
} catch (error) {
logger.error('Failed to load Volcengine credentials:', error as Error)
throw new VolcengineServiceError(
'Credentials file exists but could not be loaded. Please re-enter your credentials.',
error
)
}
}
/**
* Check if credentials exist
*/
public hasCredentials = async (): Promise<boolean> => {
return fs.existsSync(this.credentialsFilePath)
}
/**
* Clear stored credentials
*/
public clearCredentials = async (): Promise<void> => {
try {
if (fs.existsSync(this.credentialsFilePath)) {
await fs.promises.unlink(this.credentialsFilePath)
logger.info('Volcengine credentials cleared')
}
} catch (error) {
logger.error('Failed to clear Volcengine credentials:', error as Error)
throw new VolcengineServiceError('Failed to clear credentials', error)
}
}
// ============= API Methods =============
/**
* Make a signed request to Volcengine API
*/
private async makeSignedRequest<T>(
method: 'GET' | 'POST',
host: string,
path: string,
action: string,
version: string,
query?: Record<string, string>,
body?: Record<string, unknown>,
service: string = CONFIG.SERVICE_NAME,
region: string = CONFIG.DEFAULT_REGION
): Promise<T> {
const credentials = await this.loadCredentials()
if (!credentials) {
throw new VolcengineServiceError('No credentials found. Please save credentials first.')
}
const fullQuery: Record<string, string> = {
Action: action,
Version: version,
...query
}
const bodyString = body ? JSON.stringify(body) : ''
const signedHeaders = this.generateSignature(
{
method,
host,
path,
query: fullQuery,
headers: {},
body: bodyString,
service,
region
},
credentials
)
// Build URL with query string (use simple encoding for URL, canonical encoding is only for signature)
const urlParams = new URLSearchParams(fullQuery)
const url = `https://${host}${path}?${urlParams.toString()}`
const requestHeaders: Record<string, string> = {
...CONFIG.DEFAULT_HEADERS,
Authorization: signedHeaders.Authorization,
'X-Date': signedHeaders['X-Date'],
'X-Content-Sha256': signedHeaders['X-Content-Sha256']
}
logger.debug('Making Volcengine API request', { url, method, action })
try {
const response = await net.fetch(url, {
method,
headers: requestHeaders,
body: method === 'POST' && bodyString ? bodyString : undefined
})
if (!response.ok) {
const errorText = await response.text()
logger.error(`Volcengine API error: ${response.status}`, { errorText })
throw new VolcengineServiceError(`API request failed: ${response.status} - ${errorText}`)
}
return (await response.json()) as T
} catch (error) {
if (error instanceof VolcengineServiceError) {
throw error
}
logger.error('Volcengine API request failed:', error as Error)
throw new VolcengineServiceError('API request failed', error)
}
}
/**
* List foundation models from Volcengine ARK
*/
private async listFoundationModels(region: string = CONFIG.DEFAULT_REGION): Promise<ListFoundationModelsResponse> {
const requestBody: ListFoundationModelsRequest = {
PageNumber: 1,
PageSize: CONFIG.DEFAULT_PAGE_SIZE
}
const response = await this.makeSignedRequest<unknown>(
'POST',
CONFIG.API_URLS.ARK_HOST,
'/',
'ListFoundationModels',
CONFIG.API_VERSION,
{},
requestBody,
CONFIG.SERVICE_NAME,
region
)
return ListFoundationModelsResponseSchema.parse(response)
}
/**
* List user-created endpoints from Volcengine ARK
*/
private async listEndpoints(
projectName?: string,
region: string = CONFIG.DEFAULT_REGION
): Promise<ListEndpointsResponse> {
const requestBody: ListEndpointsRequest = {
ProjectName: projectName || 'default',
PageNumber: 1,
PageSize: CONFIG.DEFAULT_PAGE_SIZE
}
const response = await this.makeSignedRequest<unknown>(
'POST',
CONFIG.API_URLS.ARK_HOST,
'/',
'ListEndpoints',
CONFIG.API_VERSION,
{},
requestBody,
CONFIG.SERVICE_NAME,
region
)
return ListEndpointsResponseSchema.parse(response)
}
/**
* List all available models from Volcengine ARK
* Combines foundation models and user-created endpoints
*/
public listModels = async (
_?: Electron.IpcMainInvokeEvent,
projectName?: string,
region?: string
): Promise<ListModelsResult> => {
try {
const effectiveRegion = region || CONFIG.DEFAULT_REGION
const [foundationModelsResult, endpointsResult] = await Promise.allSettled([
this.listFoundationModels(effectiveRegion),
this.listEndpoints(projectName, effectiveRegion)
])
const models: ModelInfo[] = []
const warnings: string[] = []
if (foundationModelsResult.status === 'fulfilled') {
const foundationModels = foundationModelsResult.value
for (const item of foundationModels.Result.Items) {
models.push({
id: item.Name,
name: item.DisplayName || item.Name,
description: item.Description
})
}
logger.info(`Found ${foundationModels.Result.Items.length} foundation models`)
} else {
const errorMsg = `Failed to fetch foundation models: ${foundationModelsResult.reason}`
logger.warn(errorMsg)
warnings.push(errorMsg)
}
// Process endpoints
if (endpointsResult.status === 'fulfilled') {
const endpoints = endpointsResult.value
for (const item of endpoints.Result.Items) {
const modelRef = item.ModelReference
const foundationModelName = modelRef?.FoundationModel?.Name
const modelVersion = modelRef?.FoundationModel?.ModelVersion
const customModelId = modelRef?.CustomModelId
let displayName = item.Name || item.Id
if (foundationModelName) {
displayName = modelVersion ? `${foundationModelName} (${modelVersion})` : foundationModelName
} else if (customModelId) {
displayName = customModelId
}
models.push({
id: item.Id,
name: displayName,
description: item.Description
})
}
logger.info(`Found ${endpoints.Result.Items.length} endpoints`)
} else {
const errorMsg = `Failed to fetch endpoints: ${endpointsResult.reason}`
logger.warn(errorMsg)
warnings.push(errorMsg)
}
// If both failed, throw error
if (foundationModelsResult.status === 'rejected' && endpointsResult.status === 'rejected') {
throw new VolcengineServiceError('Failed to fetch both foundation models and endpoints')
}
const total =
(foundationModelsResult.status === 'fulfilled' ? foundationModelsResult.value.Result.TotalCount : 0) +
(endpointsResult.status === 'fulfilled' ? endpointsResult.value.Result.TotalCount : 0)
logger.info(`Total models found: ${models.length}`)
return {
models,
total,
warnings: warnings.length > 0 ? warnings : undefined
}
} catch (error) {
logger.error('Failed to list Volcengine models:', error as Error)
throw new VolcengineServiceError('Failed to list models', error)
}
}
/**
* Get authorization headers for external use
* This allows the renderer process to make direct API calls with proper authentication
*/
public getAuthHeaders = async (
_: Electron.IpcMainInvokeEvent,
params: {
method: 'GET' | 'POST'
host: string
path: string
query?: Record<string, string>
body?: string
service?: string
region?: string
}
): Promise<SignedHeaders> => {
const credentials = await this.loadCredentials()
if (!credentials) {
throw new VolcengineServiceError('No credentials found. Please save credentials first.')
}
return this.generateSignature(
{
method: params.method,
host: params.host,
path: params.path,
query: params.query || {},
headers: {},
body: params.body,
service: params.service || CONFIG.SERVICE_NAME,
region: params.region || CONFIG.DEFAULT_REGION
},
credentials
)
}
/**
* Make a generic signed API request
* This is a more flexible method that allows custom API calls
*/
public makeRequest = async (
_: Electron.IpcMainInvokeEvent,
params: {
method: 'GET' | 'POST'
host: string
path: string
action: string
version: string
query?: Record<string, string>
body?: Record<string, unknown>
service?: string
region?: string
}
): Promise<unknown> => {
return this.makeSignedRequest(
params.method,
params.host,
params.path,
params.action,
params.version,
params.query,
params.body,
params.service || CONFIG.SERVICE_NAME,
params.region || CONFIG.DEFAULT_REGION
)
}
}
export default new VolcengineService()

View File
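The HMAC-SHA256 derivation chain documented in `deriveSigningKey` above (kSecret → kDate → kRegion → kService → kSigning with the literal `"request"` suffix) can be sketched standalone with Node's `crypto` module. The key and region values below are illustrative placeholders, not real credentials:

```typescript
import crypto from 'crypto'

// HMAC-SHA256 helper returning a raw Buffer, as in the service
function hmacSha256(key: Buffer | string, data: string): Buffer {
  return crypto.createHmac('sha256', key).update(data, 'utf8').digest()
}

// kSecret -> kDate -> kRegion -> kService -> kSigning ("request")
function deriveSigningKey(secretKey: string, date: string, region: string, service: string): Buffer {
  const kDate = hmacSha256(secretKey, date)
  const kRegion = hmacSha256(kDate, region)
  const kService = hmacSha256(kRegion, service)
  return hmacSha256(kService, 'request')
}

// Sample (fake) inputs: the derivation is deterministic, so the same
// inputs always produce the same 32-byte signing key.
const key = deriveSigningKey('fake-secret-key', '20250101', 'cn-beijing', 'ark')
console.log(key.length) // 32 (SHA-256 output size in bytes)
console.log(key.equals(deriveSigningKey('fake-secret-key', '20250101', 'cn-beijing', 'ark'))) // true
```

Because each step keys the next HMAC, changing any component (date, region, or service) yields an unrelated signing key, which is what scopes a signature to a single day and region.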

@@ -1,7 +1,6 @@
// src/main/services/agents/services/claudecode/index.ts
import { EventEmitter } from 'node:events'
import { createRequire } from 'node:module'
import path from 'node:path'
import type {
CanUseTool,
@@ -122,11 +121,7 @@ class ClaudeCodeService implements AgentServiceInterface {
// TODO: support set small model in UI
ANTHROPIC_DEFAULT_HAIKU_MODEL: modelInfo.modelId,
ELECTRON_RUN_AS_NODE: '1',
ELECTRON_NO_ATTACH_CONSOLE: '1',
// Set CLAUDE_CONFIG_DIR to app's userData directory to avoid path encoding issues
// on Windows when the username contains non-ASCII characters (e.g., Chinese characters)
// This prevents the SDK from using the user's home directory which may have encoding problems
CLAUDE_CONFIG_DIR: path.join(app.getPath('userData'), '.claude')
ELECTRON_NO_ATTACH_CONSOLE: '1'
}
const errorChunks: string[] = []

View File

@@ -1,196 +0,0 @@
import { describe, expect, it } from 'vitest'
import { buildFunctionCallToolName } from '../mcp'
describe('buildFunctionCallToolName', () => {
describe('basic functionality', () => {
it('should combine server name and tool name', () => {
const result = buildFunctionCallToolName('github', 'search_issues')
expect(result).toContain('github')
expect(result).toContain('search')
})
it('should sanitize names by replacing dashes with underscores', () => {
const result = buildFunctionCallToolName('my-server', 'my-tool')
// Input dashes are replaced, but the separator between server and tool is a dash
expect(result).toBe('my_serv-my_tool')
expect(result).toContain('_')
})
it('should handle empty server names gracefully', () => {
const result = buildFunctionCallToolName('', 'tool')
expect(result).toBeTruthy()
})
})
describe('uniqueness with serverId', () => {
it('should generate different IDs for same server name but different serverIds', () => {
const serverId1 = 'server-id-123456'
const serverId2 = 'server-id-789012'
const serverName = 'github'
const toolName = 'search_repos'
const result1 = buildFunctionCallToolName(serverName, toolName, serverId1)
const result2 = buildFunctionCallToolName(serverName, toolName, serverId2)
expect(result1).not.toBe(result2)
expect(result1).toContain('123456')
expect(result2).toContain('789012')
})
it('should generate same ID when serverId is not provided', () => {
const serverName = 'github'
const toolName = 'search_repos'
const result1 = buildFunctionCallToolName(serverName, toolName)
const result2 = buildFunctionCallToolName(serverName, toolName)
expect(result1).toBe(result2)
})
it('should include serverId suffix when provided', () => {
const serverId = 'abc123def456'
const result = buildFunctionCallToolName('server', 'tool', serverId)
// Should include last 6 chars of serverId
expect(result).toContain('ef456')
})
})
describe('character sanitization', () => {
it('should replace invalid characters with underscores', () => {
const result = buildFunctionCallToolName('test@server', 'tool#name')
expect(result).not.toMatch(/[@#]/)
expect(result).toMatch(/^[a-zA-Z0-9_-]+$/)
})
it('should ensure name starts with a letter', () => {
const result = buildFunctionCallToolName('123server', '456tool')
expect(result).toMatch(/^[a-zA-Z]/)
})
it('should handle consecutive underscores/dashes', () => {
const result = buildFunctionCallToolName('my--server', 'my__tool')
expect(result).not.toMatch(/[_-]{2,}/)
})
})
describe('length constraints', () => {
it('should truncate names longer than 63 characters', () => {
const longServerName = 'a'.repeat(50)
const longToolName = 'b'.repeat(50)
const result = buildFunctionCallToolName(longServerName, longToolName, 'id123456')
expect(result.length).toBeLessThanOrEqual(63)
})
it('should not end with underscore or dash after truncation', () => {
const longServerName = 'a'.repeat(50)
const longToolName = 'b'.repeat(50)
const result = buildFunctionCallToolName(longServerName, longToolName, 'id123456')
expect(result).not.toMatch(/[_-]$/)
})
it('should preserve serverId suffix even with long server/tool names', () => {
const longServerName = 'a'.repeat(50)
const longToolName = 'b'.repeat(50)
const serverId = 'server-id-xyz789'
const result = buildFunctionCallToolName(longServerName, longToolName, serverId)
// The suffix should be preserved and not truncated
expect(result).toContain('xyz789')
expect(result.length).toBeLessThanOrEqual(63)
})
it('should ensure two long-named servers with different IDs produce different results', () => {
const longServerName = 'a'.repeat(50)
const longToolName = 'b'.repeat(50)
const serverId1 = 'server-id-abc123'
const serverId2 = 'server-id-def456'
const result1 = buildFunctionCallToolName(longServerName, longToolName, serverId1)
const result2 = buildFunctionCallToolName(longServerName, longToolName, serverId2)
// Both should be within limit
expect(result1.length).toBeLessThanOrEqual(63)
expect(result2.length).toBeLessThanOrEqual(63)
// They should be different due to preserved suffix
expect(result1).not.toBe(result2)
})
})
describe('edge cases with serverId', () => {
it('should handle serverId with only non-alphanumeric characters', () => {
const serverId = '------' // All dashes
const result = buildFunctionCallToolName('server', 'tool', serverId)
// Should still produce a valid unique suffix via fallback hash
expect(result).toBeTruthy()
expect(result.length).toBeLessThanOrEqual(63)
expect(result).toMatch(/^[a-zA-Z][a-zA-Z0-9_-]*$/)
// Should have a suffix (underscore followed by something)
expect(result).toMatch(/_[a-z0-9]+$/)
})
it('should produce different results for different non-alphanumeric serverIds', () => {
const serverId1 = '------'
const serverId2 = '!!!!!!'
const result1 = buildFunctionCallToolName('server', 'tool', serverId1)
const result2 = buildFunctionCallToolName('server', 'tool', serverId2)
// Should be different because the hash fallback produces different values
expect(result1).not.toBe(result2)
})
it('should handle empty string serverId differently from undefined', () => {
const resultWithEmpty = buildFunctionCallToolName('server', 'tool', '')
const resultWithUndefined = buildFunctionCallToolName('server', 'tool', undefined)
// Empty string is falsy, so both should behave the same (no suffix)
expect(resultWithEmpty).toBe(resultWithUndefined)
})
it('should handle serverId with mixed alphanumeric and special chars', () => {
const serverId = 'ab@#cd' // Mixed chars, last 6 chars contain some alphanumeric
const result = buildFunctionCallToolName('server', 'tool', serverId)
// Should extract alphanumeric chars: 'abcd' from 'ab@#cd'
expect(result).toContain('abcd')
})
})
describe('real-world scenarios', () => {
it('should handle GitHub MCP server instances correctly', () => {
const serverName = 'github'
const toolName = 'search_repositories'
const githubComId = 'server-github-com-abc123'
const gheId = 'server-ghe-internal-xyz789'
const tool1 = buildFunctionCallToolName(serverName, toolName, githubComId)
const tool2 = buildFunctionCallToolName(serverName, toolName, gheId)
// Should be different
expect(tool1).not.toBe(tool2)
// Both should be valid identifiers
expect(tool1).toMatch(/^[a-zA-Z][a-zA-Z0-9_-]*$/)
expect(tool2).toMatch(/^[a-zA-Z][a-zA-Z0-9_-]*$/)
// Both should be <= 63 chars
expect(tool1.length).toBeLessThanOrEqual(63)
expect(tool2.length).toBeLessThanOrEqual(63)
})
it('should handle tool names that already include server name prefix', () => {
const result = buildFunctionCallToolName('github', 'github_search_repos')
expect(result).toBeTruthy()
// Should not double the server name
expect(result.split('github').length - 1).toBeLessThanOrEqual(2)
})
})
})

View File

@@ -1,25 +1,7 @@
export function buildFunctionCallToolName(serverName: string, toolName: string, serverId?: string) {
export function buildFunctionCallToolName(serverName: string, toolName: string) {
const sanitizedServer = serverName.trim().replace(/-/g, '_')
const sanitizedTool = toolName.trim().replace(/-/g, '_')
// Calculate suffix first to reserve space for it
// Suffix format: "_" + 6 alphanumeric chars = 7 chars total
let serverIdSuffix = ''
if (serverId) {
// Take the last 6 characters of the serverId for brevity
serverIdSuffix = serverId.slice(-6).replace(/[^a-zA-Z0-9]/g, '')
// Fallback: if suffix becomes empty (all non-alphanumeric chars), use a simple hash
if (!serverIdSuffix) {
const hash = serverId.split('').reduce((acc, char) => acc + char.charCodeAt(0), 0)
serverIdSuffix = hash.toString(36).slice(-6) || 'x'
}
}
// Reserve space for suffix when calculating max base name length
const SUFFIX_LENGTH = serverIdSuffix ? serverIdSuffix.length + 1 : 0 // +1 for underscore
const MAX_BASE_LENGTH = 63 - SUFFIX_LENGTH
// Combine server name and tool name
let name = sanitizedTool
if (!sanitizedTool.includes(sanitizedServer.slice(0, 7))) {
@@ -38,9 +20,9 @@ export function buildFunctionCallToolName(serverName: string, toolName: string,
// Remove consecutive underscores/dashes (optional improvement)
name = name.replace(/[_-]{2,}/g, '_')
// Truncate base name BEFORE adding suffix to ensure suffix is never cut off
if (name.length > MAX_BASE_LENGTH) {
name = name.slice(0, MAX_BASE_LENGTH)
// Truncate to 63 characters maximum
if (name.length > 63) {
name = name.slice(0, 63)
}
// Handle edge case: ensure we still have a valid name if truncation left invalid chars at edges
@@ -48,10 +30,5 @@ export function buildFunctionCallToolName(serverName: string, toolName: string,
name = name.slice(0, -1)
}
// Now append the suffix - it will always fit within 63 chars
if (serverIdSuffix) {
name = `${name}_${serverIdSuffix}`
}
return name
}

View File
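The simplified sanitization rules kept in `buildFunctionCallToolName` above (dashes to underscores, strip invalid characters, collapse separator runs, cap at 63 characters, trim trailing separators) can be sketched as a standalone helper. This is an illustrative reimplementation of the rules, not the exact function, and it omits the server-name prefix check:

```typescript
// Illustrative sketch of the sanitization rules applied to tool names.
function sanitizeToolName(serverName: string, toolName: string): string {
  const server = serverName.trim().replace(/-/g, '_')
  const tool = toolName.trim().replace(/-/g, '_')
  let name = `${server}-${tool}`
    .replace(/[^a-zA-Z0-9_-]/g, '_') // strip characters invalid in tool identifiers
    .replace(/[_-]{2,}/g, '_') // collapse consecutive separators
  if (name.length > 63) name = name.slice(0, 63) // enforce the 63-char cap
  while (/[_-]$/.test(name)) name = name.slice(0, -1) // no trailing separators
  return name
}

console.log(sanitizeToolName('my-server', 'my-tool')) // my_server-my_tool
```

Note that without the removed serverId suffix, two servers that share a name will now produce identical tool identifiers, which is the trade-off this revert makes in exchange for stable names.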

@@ -572,6 +572,41 @@ const api = {
status: () => ipcRenderer.invoke(IpcChannel.WebSocket_Status),
sendFile: (filePath: string) => ipcRenderer.invoke(IpcChannel.WebSocket_SendFile, filePath),
getAllCandidates: () => ipcRenderer.invoke(IpcChannel.WebSocket_GetAllCandidates)
},
volcengine: {
saveCredentials: (accessKeyId: string, secretAccessKey: string): Promise<void> =>
ipcRenderer.invoke(IpcChannel.Volcengine_SaveCredentials, accessKeyId, secretAccessKey),
hasCredentials: (): Promise<boolean> => ipcRenderer.invoke(IpcChannel.Volcengine_HasCredentials),
clearCredentials: (): Promise<void> => ipcRenderer.invoke(IpcChannel.Volcengine_ClearCredentials),
listModels: (
projectName?: string,
region?: string
): Promise<{
models: Array<{ id: string; name: string; description?: string; created?: number }>
total?: number
warnings?: string[]
}> => ipcRenderer.invoke(IpcChannel.Volcengine_ListModels, projectName, region),
getAuthHeaders: (params: {
method: 'GET' | 'POST'
host: string
path: string
query?: Record<string, string>
body?: string
service?: string
region?: string
}): Promise<{ Authorization: string; 'X-Date': string; 'X-Content-Sha256': string; Host: string }> =>
ipcRenderer.invoke(IpcChannel.Volcengine_GetAuthHeaders, params),
makeRequest: (params: {
method: 'GET' | 'POST'
host: string
path: string
action: string
version: string
query?: Record<string, string>
body?: Record<string, unknown>
service?: string
region?: string
}): Promise<unknown> => ipcRenderer.invoke(IpcChannel.Volcengine_MakeRequest, params)
}
}

View File
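The `window.api.volcengine` bridge exposed above could be consumed from the renderer roughly as follows. The `VolcengineBridge` interface and the stub object here are illustrative stand-ins for the real preload bridge, showing how the new `warnings` field surfaces partial API failures:

```typescript
// Illustrative renderer-side usage; `api` stands in for `window.api.volcengine`.
interface VolcengineBridge {
  hasCredentials(): Promise<boolean>
  listModels(
    projectName?: string,
    region?: string
  ): Promise<{
    models: Array<{ id: string; name: string; description?: string }>
    total?: number
    warnings?: string[]
  }>
}

async function fetchModelIds(api: VolcengineBridge): Promise<string[]> {
  if (!(await api.hasCredentials())) return []
  const { models, warnings } = await api.listModels('default', 'cn-beijing')
  // Surface partial failures (e.g. endpoints fetched but foundation models not)
  warnings?.forEach((w) => console.warn(w))
  return models.map((m) => m.id)
}

// Stubbed bridge simulating one endpoint plus a partial-failure warning.
const stub: VolcengineBridge = {
  hasCredentials: async () => true,
  listModels: async () => ({
    models: [{ id: 'ep-123', name: 'doubao-pro (v1)' }],
    warnings: ['Failed to fetch foundation models: timeout']
  })
}

fetchModelIds(stub).then((ids) => console.log(ids)) // ['ep-123']
```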

@@ -212,9 +212,8 @@ export class ToolCallChunkHandler {
description: toolName,
type: 'builtin'
} as BaseTool
} else if ((mcpTool = this.mcpTools.find((t) => t.id === toolName) as MCPTool)) {
} else if ((mcpTool = this.mcpTools.find((t) => t.name === toolName) as MCPTool)) {
// If this is a client-side executed MCP tool, keep the existing logic
// toolName is mcpTool.id (registered with id as key in convertMcpToolsToAiSdkTools)
logger.info(`[ToolCallChunkHandler] Handling client-side MCP tool: ${toolName}`)
// mcpTool = this.mcpTools.find((t) => t.name === toolName) as MCPTool
// if (!mcpTool) {

View File

@@ -14,6 +14,7 @@ import { OpenAIAPIClient } from './openai/OpenAIApiClient'
import { OpenAIResponseAPIClient } from './openai/OpenAIResponseAPIClient'
import { OVMSClient } from './ovms/OVMSClient'
import { PPIOAPIClient } from './ppio/PPIOAPIClient'
import { VolcengineAPIClient } from './volcengine/VolcengineAPIClient'
import { ZhipuAPIClient } from './zhipu/ZhipuAPIClient'
const logger = loggerService.withContext('ApiClientFactory')
@@ -64,6 +65,12 @@ export class ApiClientFactory {
return instance
}
if (provider.id === 'doubao') {
logger.debug(`Creating VolcengineAPIClient for provider: ${provider.id}`)
instance = new VolcengineAPIClient(provider) as BaseApiClient
return instance
}
if (provider.id === 'ovms') {
logger.debug(`Creating OVMSClient for provider: ${provider.id}`)
instance = new OVMSClient(provider) as BaseApiClient

View File

@@ -405,9 +405,6 @@ export abstract class BaseApiClient<
if (!param.name?.trim()) {
return acc
}
// Parse JSON type parameters (Legacy API clients)
// Related: src/renderer/src/pages/settings/AssistantSettings/AssistantModelSettings.tsx:133-148
// The UI stores JSON type params as strings, this function parses them before sending to API
if (param.type === 'json') {
const value = param.value as string
if (value === 'undefined') {

View File

@@ -46,7 +46,6 @@ import type {
GeminiSdkRawOutput,
GeminiSdkToolCall
} from '@renderer/types/sdk'
import { getTrailingApiVersion, withoutTrailingApiVersion } from '@renderer/utils'
import { isToolUseModeFunction } from '@renderer/utils/assistant'
import {
geminiFunctionCallToMcpTool,
@@ -164,10 +163,6 @@ export class GeminiAPIClient extends BaseApiClient<
return models
}
override getBaseURL(): string {
return withoutTrailingApiVersion(super.getBaseURL())
}
override async getSdkInstance() {
if (this.sdkInstance) {
return this.sdkInstance
@@ -193,13 +188,6 @@ export class GeminiAPIClient extends BaseApiClient<
if (this.provider.isVertex) {
return 'v1'
}
// Extract trailing API version from the URL
const trailingVersion = getTrailingApiVersion(this.provider.apiHost || '')
if (trailingVersion) {
return trailingVersion
}
return 'v1beta'
}

View File

@@ -0,0 +1,74 @@
import type OpenAI from '@cherrystudio/openai'
import { loggerService } from '@logger'
import { getVolcengineProjectName, getVolcengineRegion } from '@renderer/hooks/useVolcengine'
import type { Provider } from '@renderer/types'
import { OpenAIAPIClient } from '../openai/OpenAIApiClient'
const logger = loggerService.withContext('VolcengineAPIClient')
/**
* Volcengine (Doubao) API Client
*
* Extends OpenAIAPIClient for standard chat completions (OpenAI-compatible),
* but overrides listModels to use Volcengine's signed API via IPC.
*/
export class VolcengineAPIClient extends OpenAIAPIClient {
constructor(provider: Provider) {
super(provider)
}
/**
* List models using Volcengine's signed API
* This calls the main process VolcengineService which handles HMAC-SHA256 signing
*/
override async listModels(): Promise<OpenAI.Models.Model[]> {
try {
const hasCredentials = await window.api.volcengine.hasCredentials()
if (!hasCredentials) {
logger.info('Volcengine credentials not configured, falling back to OpenAI-compatible list')
// Fall back to standard OpenAI-compatible API if no Volcengine credentials
return super.listModels()
}
logger.info('Fetching models from Volcengine API using signed request')
const projectName = getVolcengineProjectName()
const region = getVolcengineRegion()
const response = await window.api.volcengine.listModels(projectName, region)
if (!response || !response.models) {
logger.warn('Empty response from Volcengine listModels')
return []
}
// Notify user of any partial failures
if (response.warnings && response.warnings.length > 0) {
for (const warning of response.warnings) {
logger.warn(warning)
}
window.toast?.warning('Some Volcengine models could not be fetched. Check logs for details.')
}
const models: OpenAI.Models.Model[] = response.models.map((model) => ({
id: model.id,
object: 'model' as const,
created: model.created || Math.floor(Date.now() / 1000),
owned_by: 'volcengine',
// @ts-ignore - name is not part of OpenAI.Models.Model; the UI uses it to display the model name
name: model.name || model.id
}))
logger.info(`Found ${models.length} models from Volcengine API`)
return models
} catch (error) {
logger.error('Failed to list Volcengine models:', error as Error)
// Notify user before falling back
window.toast?.warning('Failed to fetch Volcengine models. Check credentials if this persists.')
// Fall back to standard OpenAI-compatible API on error
logger.info('Falling back to OpenAI-compatible model list')
return super.listModels()
}
}
}
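The `listModels` override above follows a try-primary-then-fall-back pattern with a user-visible warning. Stripped of the Volcengine specifics, the pattern is roughly (helper name and shape are illustrative, not project code):

```typescript
// Generic sketch: try a primary async source; on failure, surface a
// warning and fall back to a secondary source instead of throwing.
export async function withFallback<T>(
  primary: () => Promise<T>,
  fallback: () => Promise<T>,
  onWarn: (message: string) => void
): Promise<T> {
  try {
    return await primary()
  } catch (error) {
    onWarn(`Primary source failed, falling back: ${(error as Error).message}`)
    return fallback()
  }
}
```

Notifying before the fallback (rather than failing silently) is exactly what the PR review asked for in the commit message above.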

View File

@@ -7,7 +7,7 @@ import { isAwsBedrockProvider, isVertexProvider } from '@renderer/utils/provider
// https://docs.claude.com/en/docs/build-with-claude/extended-thinking#interleaved-thinking
const INTERLEAVED_THINKING_HEADER = 'interleaved-thinking-2025-05-14'
// https://docs.claude.com/en/docs/build-with-claude/context-windows#1m-token-context-window
// const CONTEXT_100M_HEADER = 'context-1m-2025-08-07'
const CONTEXT_100M_HEADER = 'context-1m-2025-08-07'
// https://docs.cloud.google.com/vertex-ai/generative-ai/docs/partner-models/claude/web-search
const WEBSEARCH_HEADER = 'web-search-2025-03-05'
@@ -17,7 +17,7 @@ export function addAnthropicHeaders(assistant: Assistant, model: Model): string[
if (
isClaude45ReasoningModel(model) &&
isToolUseModeFunction(assistant) &&
!(isVertexProvider(provider) || isAwsBedrockProvider(provider))
!(isVertexProvider(provider) && isAwsBedrockProvider(provider))
) {
anthropicHeaders.push(INTERLEAVED_THINKING_HEADER)
}
@@ -25,9 +25,7 @@ export function addAnthropicHeaders(assistant: Assistant, model: Model): string[
if (isVertexProvider(provider) && assistant.enableWebSearch) {
anthropicHeaders.push(WEBSEARCH_HEADER)
}
// We may add it by user preference in assistant.settings instead of always adding it.
// See #11540, #11397
// anthropicHeaders.push(CONTEXT_100M_HEADER)
anthropicHeaders.push(CONTEXT_100M_HEADER)
}
return anthropicHeaders
}
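Downstream, the headers collected by `addAnthropicHeaders` end up joined into a single `anthropic-beta` request header. A minimal sketch of that merge (the helper name is illustrative, not the project's `combineHeaders`):

```typescript
// Merge collected beta flags into request headers as one
// comma-separated `anthropic-beta` value; no-op when the list is empty.
export function withBetaHeaders(
  base: Record<string, string>,
  betas: string[]
): Record<string, string> {
  if (betas.length === 0) return base
  return { ...base, 'anthropic-beta': betas.join(',') }
}
```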

View File

@@ -28,7 +28,6 @@ import { type Assistant, type MCPTool, type Provider } from '@renderer/types'
import type { StreamTextParams } from '@renderer/types/aiCoreTypes'
import { mapRegexToPatterns } from '@renderer/utils/blacklistMatchPattern'
import { replacePromptVariables } from '@renderer/utils/prompt'
import { isAwsBedrockProvider } from '@renderer/utils/provider'
import type { ModelMessage, Tool } from 'ai'
import { stepCountIs } from 'ai'
@@ -107,7 +106,7 @@ export async function buildStreamTextParams(
searchWithTime: store.getState().websearch.searchWithTime
}
const { providerOptions, standardParams, bodyParams } = buildProviderOptions(assistant, model, provider, {
const { providerOptions, standardParams } = buildProviderOptions(assistant, model, provider, {
enableReasoning,
enableWebSearch,
enableGenerateImage
@@ -176,7 +175,7 @@ export async function buildStreamTextParams(
let headers: Record<string, string | undefined> = options.requestOptions?.headers ?? {}
if (isAnthropicModel(model) && !isAwsBedrockProvider(provider)) {
if (isAnthropicModel(model)) {
const newBetaHeaders = { 'anthropic-beta': addAnthropicHeaders(assistant, model).join(',') }
headers = combineHeaders(headers, newBetaHeaders)
}
@@ -185,7 +184,6 @@ export async function buildStreamTextParams(
// Note: standardParams (topK, frequencyPenalty, presencePenalty, stopSequences, seed)
// are extracted from custom parameters and passed directly to streamText()
// instead of being placed in providerOptions
// Note: bodyParams are custom parameters for AI Gateway that should be at body level
const params: StreamTextParams = {
messages: sdkMessages,
maxOutputTokens: getMaxTokens(assistant, model),
@@ -193,8 +191,6 @@ export async function buildStreamTextParams(
topP: getTopP(assistant, model),
// Include AI SDK standard params extracted from custom parameters
...standardParams,
// Include body-level params for AI Gateway custom parameters
...bodyParams,
abortSignal: options.requestOptions?.signal,
headers,
providerOptions,

View File

@@ -1,4 +1,4 @@
import type { Model, Provider } from '@renderer/types'
import type { Provider } from '@renderer/types'
import { describe, expect, it, vi } from 'vitest'
import { getAiSdkProviderId } from '../factory'
@@ -68,18 +68,6 @@ function createTestProvider(id: string, type: string): Provider {
} as Provider
}
function createAzureProvider(id: string, apiVersion?: string, model?: string): Provider {
return {
id,
type: 'azure-openai',
name: `Azure Test ${id}`,
apiKey: 'azure-test-key',
apiHost: 'azure-test-host',
apiVersion,
models: [{ id: model || 'gpt-4' } as Model]
}
}
describe('Integrated Provider Registry', () => {
describe('Provider ID Resolution', () => {
it('should resolve openrouter provider correctly', () => {
@@ -123,24 +111,6 @@ describe('Integrated Provider Registry', () => {
const result = getAiSdkProviderId(unknownProvider)
expect(result).toBe('unknown-provider')
})
it('should handle Azure OpenAI providers correctly', () => {
const azureProvider = createAzureProvider('azure-test', '2024-02-15', 'gpt-4o')
const result = getAiSdkProviderId(azureProvider)
expect(result).toBe('azure')
})
it('should handle Azure OpenAI providers response endpoint correctly', () => {
const azureProvider = createAzureProvider('azure-test', 'v1', 'gpt-4o')
const result = getAiSdkProviderId(azureProvider)
expect(result).toBe('azure-responses')
})
it('should handle Azure provider Claude Models', () => {
const provider = createTestProvider('azure-anthropic', 'anthropic')
const result = getAiSdkProviderId(provider)
expect(result).toBe('azure-anthropic')
})
})
describe('Backward Compatibility', () => {

View File

@@ -245,8 +245,8 @@ export class AiSdkSpanAdapter {
'gen_ai.usage.output_tokens'
]
const promptTokens = attributes[inputsTokenKeys.find((key) => attributes[key]) || '']
const completionTokens = attributes[outputTokenKeys.find((key) => attributes[key]) || '']
const completionTokens = attributes[inputsTokenKeys.find((key) => attributes[key]) || '']
const promptTokens = attributes[outputTokenKeys.find((key) => attributes[key]) || '']
if (completionTokens !== undefined || promptTokens !== undefined) {
const usage: TokenUsage = {

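For reference, the deleted test file below asserts that `ai.usage.promptTokens` maps to `prompt_tokens` and `ai.usage.completionTokens` to `completion_tokens`. A simplified sketch of an extraction consistent with that test, assuming the attribute key lists shown in the hunk:

```typescript
// Attribute keys for input (prompt) and output (completion) token counts.
const inputTokenKeys = ['ai.usage.promptTokens', 'gen_ai.usage.input_tokens']
const outputTokenKeys = ['ai.usage.completionTokens', 'gen_ai.usage.output_tokens']

// Map span attributes to usage fields: prompt tokens come from the
// input-token keys, completion tokens from the output-token keys.
export function extractUsage(attributes: Record<string, number | undefined>) {
  const promptTokens = attributes[inputTokenKeys.find((k) => attributes[k] !== undefined) ?? '']
  const completionTokens = attributes[outputTokenKeys.find((k) => attributes[k] !== undefined) ?? '']
  if (promptTokens === undefined && completionTokens === undefined) return undefined
  return {
    prompt_tokens: promptTokens ?? 0,
    completion_tokens: completionTokens ?? 0,
    total_tokens: (promptTokens ?? 0) + (completionTokens ?? 0)
  }
}
```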
View File

@@ -1,53 +0,0 @@
import type { Span } from '@opentelemetry/api'
import { SpanKind, SpanStatusCode } from '@opentelemetry/api'
import { describe, expect, it, vi } from 'vitest'
import { AiSdkSpanAdapter } from '../AiSdkSpanAdapter'
vi.mock('@logger', () => ({
loggerService: {
withContext: () => ({
debug: vi.fn(),
error: vi.fn(),
info: vi.fn(),
warn: vi.fn()
})
}
}))
describe('AiSdkSpanAdapter', () => {
const createMockSpan = (attributes: Record<string, unknown>): Span => {
const span = {
spanContext: () => ({
traceId: 'trace-id',
spanId: 'span-id'
}),
_attributes: attributes,
_events: [],
name: 'test span',
status: { code: SpanStatusCode.OK },
kind: SpanKind.CLIENT,
startTime: [0, 0] as [number, number],
endTime: [0, 1] as [number, number],
ended: true,
parentSpanId: '',
links: []
}
return span as unknown as Span
}
it('maps prompt and completion usage tokens to the correct fields', () => {
const attributes = {
'ai.usage.promptTokens': 321,
'ai.usage.completionTokens': 654
}
const span = createMockSpan(attributes)
const result = AiSdkSpanAdapter.convertToSpanEntity({ span })
expect(result.usage).toBeDefined()
expect(result.usage?.prompt_tokens).toBe(321)
expect(result.usage?.completion_tokens).toBe(654)
expect(result.usage?.total_tokens).toBe(975)
})
})

View File

@@ -71,11 +71,10 @@ describe('mcp utils', () => {
const result = setupToolsConfig(mcpTools)
expect(result).not.toBeUndefined()
// Tools are now keyed by id (which includes serverId suffix) for uniqueness
expect(Object.keys(result!)).toEqual(['test-tool-1'])
expect(result!['test-tool-1']).toHaveProperty('description')
expect(result!['test-tool-1']).toHaveProperty('inputSchema')
expect(result!['test-tool-1']).toHaveProperty('execute')
expect(Object.keys(result!)).toEqual(['test-tool'])
expect(result!['test-tool']).toHaveProperty('description')
expect(result!['test-tool']).toHaveProperty('inputSchema')
expect(result!['test-tool']).toHaveProperty('execute')
})
it('should handle multiple MCP tools', () => {
@@ -110,8 +109,7 @@ describe('mcp utils', () => {
expect(result).not.toBeUndefined()
expect(Object.keys(result!)).toHaveLength(2)
// Tools are keyed by id for uniqueness
expect(Object.keys(result!)).toEqual(['tool1-id', 'tool2-id'])
expect(Object.keys(result!)).toEqual(['tool1', 'tool2'])
})
})
@@ -137,10 +135,9 @@ describe('mcp utils', () => {
const result = convertMcpToolsToAiSdkTools(mcpTools)
// Tools are keyed by id for uniqueness when multiple server instances exist
expect(Object.keys(result)).toEqual(['get-weather-id'])
expect(Object.keys(result)).toEqual(['get-weather'])
const tool = result['get-weather-id'] as Tool
const tool = result['get-weather'] as Tool
expect(tool.description).toBe('Get weather information')
expect(tool.inputSchema).toBeDefined()
expect(typeof tool.execute).toBe('function')
@@ -163,8 +160,8 @@ describe('mcp utils', () => {
const result = convertMcpToolsToAiSdkTools(mcpTools)
expect(Object.keys(result)).toEqual(['no-desc-tool-id'])
const tool = result['no-desc-tool-id'] as Tool
expect(Object.keys(result)).toEqual(['no-desc-tool'])
const tool = result['no-desc-tool'] as Tool
expect(tool.description).toBe('Tool from test-server')
})
@@ -205,13 +202,13 @@ describe('mcp utils', () => {
const result = convertMcpToolsToAiSdkTools(mcpTools)
expect(Object.keys(result)).toEqual(['complex-tool-id'])
const tool = result['complex-tool-id'] as Tool
expect(Object.keys(result)).toEqual(['complex-tool'])
const tool = result['complex-tool'] as Tool
expect(tool.inputSchema).toBeDefined()
expect(typeof tool.execute).toBe('function')
})
it('should preserve tool id with special characters', () => {
it('should preserve tool names with special characters', () => {
const mcpTools: MCPTool[] = [
{
id: 'special-tool-id',
@@ -228,8 +225,7 @@ describe('mcp utils', () => {
]
const result = convertMcpToolsToAiSdkTools(mcpTools)
// Tools are keyed by id for uniqueness
expect(Object.keys(result)).toEqual(['special-tool-id'])
expect(Object.keys(result)).toEqual(['tool_with-special.chars'])
})
it('should handle multiple tools with different schemas', () => {
@@ -280,11 +276,10 @@ describe('mcp utils', () => {
const result = convertMcpToolsToAiSdkTools(mcpTools)
// Tools are keyed by id for uniqueness
expect(Object.keys(result).sort()).toEqual(['boolean-tool-id', 'number-tool-id', 'string-tool-id'])
expect(result['string-tool-id']).toBeDefined()
expect(result['number-tool-id']).toBeDefined()
expect(result['boolean-tool-id']).toBeDefined()
expect(Object.keys(result).sort()).toEqual(['boolean-tool', 'number-tool', 'string-tool'])
expect(result['string-tool']).toBeDefined()
expect(result['number-tool']).toBeDefined()
expect(result['boolean-tool']).toBeDefined()
})
})
@@ -315,7 +310,7 @@ describe('mcp utils', () => {
]
const tools = convertMcpToolsToAiSdkTools(mcpTools)
const tool = tools['test-exec-tool-id'] as Tool
const tool = tools['test-exec-tool'] as Tool
const result = await tool.execute!({}, { messages: [], abortSignal: undefined, toolCallId: 'test-call-123' })
expect(requestToolConfirmation).toHaveBeenCalled()
@@ -348,7 +343,7 @@ describe('mcp utils', () => {
]
const tools = convertMcpToolsToAiSdkTools(mcpTools)
const tool = tools['cancelled-tool-id'] as Tool
const tool = tools['cancelled-tool'] as Tool
const result = await tool.execute!({}, { messages: [], abortSignal: undefined, toolCallId: 'cancel-call-123' })
expect(requestToolConfirmation).toHaveBeenCalled()
@@ -390,7 +385,7 @@ describe('mcp utils', () => {
]
const tools = convertMcpToolsToAiSdkTools(mcpTools)
const tool = tools['error-tool-id'] as Tool
const tool = tools['error-tool'] as Tool
await expect(
tool.execute!({}, { messages: [], abortSignal: undefined, toolCallId: 'error-call-123' })
@@ -426,7 +421,7 @@ describe('mcp utils', () => {
]
const tools = convertMcpToolsToAiSdkTools(mcpTools)
const tool = tools['auto-approve-tool-id'] as Tool
const tool = tools['auto-approve-tool'] as Tool
const result = await tool.execute!({}, { messages: [], abortSignal: undefined, toolCallId: 'auto-call-123' })
expect(requestToolConfirmation).not.toHaveBeenCalled()

View File

@@ -37,7 +37,7 @@ vi.mock('@cherrystudio/ai-core/provider', async (importOriginal) => {
},
customProviderIdSchema: {
safeParse: vi.fn((id) => {
const customProviders = ['google-vertex', 'google-vertex-anthropic', 'bedrock', 'ai-gateway']
const customProviders = ['google-vertex', 'google-vertex-anthropic', 'bedrock']
if (customProviders.includes(id)) {
return { success: true, data: id }
}
@@ -56,8 +56,7 @@ vi.mock('../provider/factory', () => ({
[SystemProviderIds.anthropic]: 'anthropic',
[SystemProviderIds.grok]: 'xai',
[SystemProviderIds.deepseek]: 'deepseek',
[SystemProviderIds.openrouter]: 'openrouter',
[SystemProviderIds['ai-gateway']]: 'ai-gateway'
[SystemProviderIds.openrouter]: 'openrouter'
}
return mapping[provider.id] || provider.id
})
@@ -155,10 +154,6 @@ vi.mock('../websearch', () => ({
getWebSearchParams: vi.fn(() => ({ enable_search: true }))
}))
vi.mock('../../prepareParams/header', () => ({
addAnthropicHeaders: vi.fn(() => ['context-1m-2025-08-07'])
}))
const ensureWindowApi = () => {
const globalWindow = window as any
globalWindow.api = globalWindow.api || {}
@@ -205,8 +200,6 @@ describe('options utils', () => {
expect(result.providerOptions).toHaveProperty('openai')
expect(result.providerOptions.openai).toBeDefined()
expect(result.standardParams).toBeDefined()
expect(result.bodyParams).toBeDefined()
expect(result.bodyParams).toEqual({})
})
it('should include reasoning parameters when enabled', () => {
@@ -640,149 +633,5 @@ describe('options utils', () => {
expect(result.providerOptions).toHaveProperty('anthropic')
})
})
describe('AWS Bedrock provider', () => {
const bedrockProvider = {
id: 'bedrock',
name: 'AWS Bedrock',
type: 'aws-bedrock',
apiKey: 'test-key',
apiHost: 'https://bedrock.us-east-1.amazonaws.com',
models: [] as Model[]
} as Provider
const bedrockModel: Model = {
id: 'anthropic.claude-sonnet-4-20250514-v1:0',
name: 'Claude Sonnet 4',
provider: 'bedrock'
} as Model
it('should build basic Bedrock options', () => {
const result = buildProviderOptions(mockAssistant, bedrockModel, bedrockProvider, {
enableReasoning: false,
enableWebSearch: false,
enableGenerateImage: false
})
expect(result.providerOptions).toHaveProperty('bedrock')
expect(result.providerOptions.bedrock).toBeDefined()
})
it('should include anthropicBeta when Anthropic headers are needed', async () => {
const { addAnthropicHeaders } = await import('../../prepareParams/header')
vi.mocked(addAnthropicHeaders).mockReturnValue(['interleaved-thinking-2025-05-14', 'context-1m-2025-08-07'])
const result = buildProviderOptions(mockAssistant, bedrockModel, bedrockProvider, {
enableReasoning: false,
enableWebSearch: false,
enableGenerateImage: false
})
expect(result.providerOptions.bedrock).toHaveProperty('anthropicBeta')
expect(result.providerOptions.bedrock.anthropicBeta).toEqual([
'interleaved-thinking-2025-05-14',
'context-1m-2025-08-07'
])
})
it('should include reasoning parameters when enabled', () => {
const result = buildProviderOptions(mockAssistant, bedrockModel, bedrockProvider, {
enableReasoning: true,
enableWebSearch: false,
enableGenerateImage: false
})
expect(result.providerOptions.bedrock).toHaveProperty('reasoningConfig')
expect(result.providerOptions.bedrock.reasoningConfig).toEqual({
type: 'enabled',
budgetTokens: 5000
})
})
})
describe('AI Gateway provider', () => {
const aiGatewayProvider: Provider = {
id: SystemProviderIds['ai-gateway'],
name: 'AI Gateway',
type: 'ai-gateway',
apiKey: 'test-key',
apiHost: 'https://ai-gateway.vercel.sh/v1/ai',
isSystem: true,
models: [] as Model[]
} as Provider
const aiGatewayModel: Model = {
id: 'openai/gpt-4',
name: 'GPT-4',
provider: SystemProviderIds['ai-gateway']
} as Model
it('should build basic AI Gateway options with empty bodyParams', () => {
const result = buildProviderOptions(mockAssistant, aiGatewayModel, aiGatewayProvider, {
enableReasoning: false,
enableWebSearch: false,
enableGenerateImage: false
})
expect(result.providerOptions).toHaveProperty('gateway')
expect(result.providerOptions.gateway).toBeDefined()
expect(result.bodyParams).toEqual({})
})
it('should place custom parameters in bodyParams for AI Gateway instead of providerOptions', async () => {
const { getCustomParameters } = await import('../reasoning')
vi.mocked(getCustomParameters).mockReturnValue({
tools: [{ id: 'openai.image_generation' }],
custom_param: 'custom_value'
})
const result = buildProviderOptions(mockAssistant, aiGatewayModel, aiGatewayProvider, {
enableReasoning: false,
enableWebSearch: false,
enableGenerateImage: false
})
// Custom parameters should be in bodyParams, NOT in providerOptions.gateway
expect(result.bodyParams).toHaveProperty('tools')
expect(result.bodyParams.tools).toEqual([{ id: 'openai.image_generation' }])
expect(result.bodyParams).toHaveProperty('custom_param')
expect(result.bodyParams.custom_param).toBe('custom_value')
// providerOptions.gateway should NOT contain custom parameters
expect(result.providerOptions.gateway).not.toHaveProperty('tools')
expect(result.providerOptions.gateway).not.toHaveProperty('custom_param')
})
it('should still extract AI SDK standard params from custom parameters for AI Gateway', async () => {
const { getCustomParameters } = await import('../reasoning')
vi.mocked(getCustomParameters).mockReturnValue({
topK: 5,
frequencyPenalty: 0.5,
tools: [{ id: 'openai.image_generation' }]
})
const result = buildProviderOptions(mockAssistant, aiGatewayModel, aiGatewayProvider, {
enableReasoning: false,
enableWebSearch: false,
enableGenerateImage: false
})
// Standard params should be extracted and returned separately
expect(result.standardParams).toEqual({
topK: 5,
frequencyPenalty: 0.5
})
// Custom params (non-standard) should be in bodyParams
expect(result.bodyParams).toHaveProperty('tools')
expect(result.bodyParams.tools).toEqual([{ id: 'openai.image_generation' }])
// Neither should be in providerOptions.gateway
expect(result.providerOptions.gateway).not.toHaveProperty('topK')
expect(result.providerOptions.gateway).not.toHaveProperty('tools')
})
})
})
})

View File

@@ -144,7 +144,7 @@ describe('reasoning utils', () => {
expect(result).toEqual({})
})
it('should not override reasoning for OpenRouter when reasoning effort undefined', async () => {
it('should disable reasoning for OpenRouter when no reasoning effort set', async () => {
const { isReasoningModel } = await import('@renderer/config/models')
vi.mocked(isReasoningModel).mockReturnValue(true)
@@ -161,29 +161,6 @@ describe('reasoning utils', () => {
settings: {}
} as Assistant
const result = getReasoningEffort(assistant, model)
expect(result).toEqual({})
})
it('should disable reasoning for OpenRouter when reasoning effort explicitly none', async () => {
const { isReasoningModel } = await import('@renderer/config/models')
vi.mocked(isReasoningModel).mockReturnValue(true)
const model: Model = {
id: 'anthropic/claude-sonnet-4',
name: 'Claude Sonnet 4',
provider: SystemProviderIds.openrouter
} as Model
const assistant: Assistant = {
id: 'test',
name: 'Test',
settings: {
reasoning_effort: 'none'
}
} as Assistant
const result = getReasoningEffort(assistant, model)
expect(result).toEqual({ reasoning: { enabled: false, exclude: true } })
})
@@ -292,9 +269,7 @@ describe('reasoning utils', () => {
const assistant: Assistant = {
id: 'test',
name: 'Test',
settings: {
reasoning_effort: 'none'
}
settings: {}
} as Assistant
const result = getReasoningEffort(assistant, model)

View File

@@ -28,9 +28,7 @@ export function convertMcpToolsToAiSdkTools(mcpTools: MCPTool[]): ToolSet {
const tools: ToolSet = {}
for (const mcpTool of mcpTools) {
// Use mcpTool.id (which includes serverId suffix) to ensure uniqueness
// when multiple instances of the same MCP server type are configured
tools[mcpTool.id] = tool({
tools[mcpTool.name] = tool({
description: mcpTool.description || `Tool from ${mcpTool.serverName}`,
inputSchema: jsonSchema(mcpTool.inputSchema as JSONSchema7),
execute: async (params, { toolCallId }) => {

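The id-vs-name keying in the hunk above matters when two MCP server instances expose a tool with the same name. A toy illustration (the tool shapes and ids here are hypothetical):

```typescript
// Two servers exposing a tool named 'search': keying by `name` lets the
// second definition overwrite the first, while keying by `id` (which
// carries a per-server suffix) keeps both entries.
interface McpToolLike {
  id: string
  name: string
  serverName: string
}

export function keyTools(tools: McpToolLike[], key: 'id' | 'name'): Record<string, McpToolLike> {
  const out: Record<string, McpToolLike> = {}
  for (const t of tools) out[t[key]] = t
  return out
}
```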
View File

@@ -36,7 +36,6 @@ import { isSupportServiceTierProvider, isSupportVerbosityProvider } from '@rende
import type { JSONValue } from 'ai'
import { t } from 'i18next'
import { addAnthropicHeaders } from '../prepareParams/header'
import { getAiSdkProviderId } from '../provider/factory'
import { buildGeminiGenerateImageParams } from './image'
import {
@@ -155,7 +154,6 @@ export function buildProviderOptions(
): {
providerOptions: Record<string, Record<string, JSONValue>>
standardParams: Partial<Record<AiSdkParam, any>>
bodyParams: Record<string, any>
} {
logger.debug('buildProviderOptions', { assistant, model, actualProvider, capabilities })
const rawProviderId = getAiSdkProviderId(actualProvider)
@@ -254,6 +252,12 @@ export function buildProviderOptions(
const customParams = getCustomParameters(assistant)
const { standardParams, providerParams } = extractAiSdkStandardParams(customParams)
// Merge provider-specific custom parameters into providerSpecificOptions
providerSpecificOptions = {
...providerSpecificOptions,
...providerParams
}
let rawProviderKey =
{
'google-vertex': 'google',
@@ -268,27 +272,12 @@ export function buildProviderOptions(
rawProviderKey = { gemini: 'google', ['openai-response']: 'openai' }[actualProvider.type] || actualProvider.type
}
// For AI Gateway, custom parameters should be placed at body level, not inside providerOptions.gateway
// See: https://github.com/CherryHQ/cherry-studio/issues/4197
let bodyParams: Record<string, any> = {}
if (rawProviderKey === 'gateway') {
// Custom parameters go to body level for AI Gateway
bodyParams = providerParams
} else {
// For other providers, merge custom parameters into providerSpecificOptions
providerSpecificOptions = {
...providerSpecificOptions,
...providerParams
}
}
// Return the shape the AI Core SDK expects: { 'providerId': providerOptions }, plus the extracted standard params and body params
// Return the shape the AI Core SDK expects: { 'providerId': providerOptions }, plus the extracted standard params
return {
providerOptions: {
[rawProviderKey]: providerSpecificOptions
},
standardParams,
bodyParams
standardParams
}
}
@@ -480,11 +469,6 @@ function buildBedrockProviderOptions(
}
}
const betaHeaders = addAnthropicHeaders(assistant, model)
if (betaHeaders.length > 0) {
providerOptions.anthropicBeta = betaHeaders
}
return providerOptions
}

View File

@@ -16,8 +16,10 @@ import {
isGPT5SeriesModel,
isGPT51SeriesModel,
isGrok4FastReasoningModel,
isGrokReasoningModel,
isOpenAIDeepResearchModel,
isOpenAIModel,
isOpenAIReasoningModel,
isQwenAlwaysThinkModel,
isQwenReasoningModel,
isReasoningModel,
@@ -62,22 +64,30 @@ export function getReasoningEffort(assistant: Assistant, model: Model): Reasonin
}
const reasoningEffort = assistant?.settings?.reasoning_effort
// reasoningEffort is not set, no extra reasoning setting
// Generally, reasoning effort won't be undefined for any model that supports reasoning control.
// This branch covers reasoning models without reasoning control, such as DeepSeek Reasoner.
if (!reasoningEffort) {
return {}
}
// Handle 'none' reasoningEffort. It's explicitly off.
if (reasoningEffort === 'none') {
// Handle undefined and 'none' reasoningEffort.
// TODO: They should be separated.
if (!reasoningEffort || reasoningEffort === 'none') {
// openrouter: use reasoning
if (model.provider === SystemProviderIds.openrouter) {
// Don't disable reasoning for Gemini models that support thinking tokens
if (isSupportedThinkingTokenGeminiModel(model) && !GEMINI_FLASH_MODEL_REGEX.test(model.id)) {
return {}
}
// 'none' is not an available value for effort for now.
// I think they should resolve this issue soon, so I'll just go ahead and use this value.
if (isGPT51SeriesModel(model) && reasoningEffort === 'none') {
return { reasoning: { effort: 'none' } }
}
// Don't disable reasoning for models that require it
if (
isGrokReasoningModel(model) ||
isOpenAIReasoningModel(model) ||
isQwenAlwaysThinkModel(model) ||
model.id.includes('seed-oss') ||
model.id.includes('minimax-m2')
) {
return {}
}
return { reasoning: { enabled: false, exclude: true } }
}
@@ -91,6 +101,11 @@ export function getReasoningEffort(assistant: Assistant, model: Model): Reasonin
return { enable_thinking: false }
}
// claude
if (isSupportedThinkingTokenClaudeModel(model)) {
return {}
}
// gemini
if (isSupportedThinkingTokenGeminiModel(model)) {
if (GEMINI_FLASH_MODEL_REGEX.test(model.id)) {
@@ -103,10 +118,8 @@ export function getReasoningEffort(assistant: Assistant, model: Model): Reasonin
}
}
}
} else {
logger.warn(`Model ${model.id} cannot disable reasoning. Fallback to empty reasoning param.`)
return {}
}
return {}
}
// use thinking, doubao, zhipu, etc.
@@ -126,7 +139,6 @@ export function getReasoningEffort(assistant: Assistant, model: Model): Reasonin
}
}
logger.warn(`Model ${model.id} doesn't match any disable reasoning behavior. Fallback to empty reasoning param.`)
return {}
}
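Reduced to its core, the branching above distinguishes three cases: no setting (leave the model's default), an explicit `'none'` opt-out, and a forwarded effort level. A simplified sketch, with all provider-specific branches (OpenRouter, Qwen, Gemini, etc.) deliberately omitted:

```typescript
// undefined -> model default; 'none' -> explicit opt-out; otherwise
// forward the effort level to the provider.
type ReasoningEffort = 'none' | 'low' | 'medium' | 'high' | undefined

export function resolveReasoning(effort: ReasoningEffort): Record<string, unknown> {
  if (effort === undefined) return {}
  if (effort === 'none') return { reasoning: { enabled: false, exclude: true } }
  return { reasoning: { effort } }
}
```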
@@ -281,7 +293,6 @@ export function getReasoningEffort(assistant: Assistant, model: Model): Reasonin
}
// OpenRouter models, use reasoning
// FIXME: duplicated openrouter handling. remove one
if (model.provider === SystemProviderIds.openrouter) {
if (isSupportedReasoningEffortModel(model) || isSupportedThinkingTokenModel(model)) {
return {
@@ -673,10 +684,6 @@ export function getCustomParameters(assistant: Assistant): Record<string, any> {
if (!param.name?.trim()) {
return acc
}
// Parse JSON type parameters
// Related: src/renderer/src/pages/settings/AssistantSettings/AssistantModelSettings.tsx:133-148
// The UI stores JSON type params as strings (e.g., '{"key":"value"}')
// This function parses them into objects before sending to the API
if (param.type === 'json') {
const value = param.value as string
if (value === 'undefined') {

View File

@@ -215,10 +215,6 @@
border-top: none !important;
}
.ant-collapse-header-text {
overflow-x: hidden;
}
.ant-slider .ant-slider-handle::after {
box-shadow: 0 1px 4px 0px rgb(128 128 128 / 50%) !important;
}

View File

@@ -10,7 +10,6 @@ import {
} from '@ant-design/icons'
import { loggerService } from '@logger'
import { download } from '@renderer/utils/download'
import { convertImageToPng } from '@renderer/utils/image'
import type { ImageProps as AntImageProps } from 'antd'
import { Dropdown, Image as AntImage, Space } from 'antd'
import { Base64 } from 'js-base64'
@@ -34,38 +33,39 @@ const ImageViewer: React.FC<ImageViewerProps> = ({ src, style, ...props }) => {
// Copy the image to the clipboard
const handleCopyImage = async (src: string) => {
try {
let blob: Blob
if (src.startsWith('data:')) {
// Handle base64-encoded images
const match = src.match(/^data:(image\/\w+);base64,(.+)$/)
if (!match) throw new Error('Invalid base64 image format')
const mimeType = match[1]
const byteArray = Base64.toUint8Array(match[2])
blob = new Blob([byteArray], { type: mimeType })
const blob = new Blob([byteArray], { type: mimeType })
await navigator.clipboard.write([new ClipboardItem({ [mimeType]: blob })])
} else if (src.startsWith('file://')) {
// Handle local file paths
const bytes = await window.api.fs.read(src)
const mimeType = mime.getType(src) || 'application/octet-stream'
blob = new Blob([bytes], { type: mimeType })
const blob = new Blob([bytes], { type: mimeType })
await navigator.clipboard.write([
new ClipboardItem({
[mimeType]: blob
})
])
} else {
// Handle URL-based images
const response = await fetch(src)
blob = await response.blob()
const blob = await response.blob()
await navigator.clipboard.write([
new ClipboardItem({
[blob.type]: blob
})
])
}
// Convert to PNG for compatibility (the clipboard API does not support JPEG)
const pngBlob = await convertImageToPng(blob)
const item = new ClipboardItem({
'image/png': pngBlob
})
await navigator.clipboard.write([item])
window.toast.success(t('message.copy.success'))
} catch (error) {
const err = error as Error
logger.error(`Failed to copy image: ${err.message}`, { stack: err.stack })
logger.error('Failed to copy image:', error as Error)
window.toast.error(t('message.copy.failed'))
}
}
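The data-URL branch above splits the source into a MIME type and decoded bytes before building a `ClipboardItem`. A self-contained sketch of that parsing step, using Node's `Buffer` in place of `js-base64` so it runs outside the renderer (the regex mirrors the one in the hunk):

```typescript
// Parse a base64 data URL into its MIME type and raw bytes.
export function parseDataUrl(src: string): { mimeType: string; bytes: Uint8Array } {
  const match = src.match(/^data:(image\/\w+);base64,(.+)$/)
  if (!match) throw new Error('Invalid base64 image format')
  return {
    mimeType: match[1],
    bytes: Uint8Array.from(Buffer.from(match[2], 'base64'))
  }
}
```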

View File

@@ -57,7 +57,7 @@ const PopupContainer: React.FC<Props> = ({ model, apiFilter, modelFilter, showTa
const [_searchText, setSearchText] = useState('')
const searchText = useDeferredValue(_searchText)
const { models, isLoading } = useApiModels(apiFilter)
const adaptedModels = useMemo(() => models.map((model) => apiModelAdapter(model)), [models])
const adaptedModels = models.map((model) => apiModelAdapter(model))
// The currently selected model ID
const currentModelId = model ? model.id : ''

View File

@@ -309,14 +309,11 @@ describe('Ling Models', () => {
describe('Claude & regional providers', () => {
it('identifies claude 4.5 variants', () => {
expect(isClaude45ReasoningModel(createModel({ id: 'claude-sonnet-4.5-preview' }))).toBe(true)
expect(isClaude4SeriesModel(createModel({ id: 'claude-sonnet-4-5@20250929' }))).toBe(true)
expect(isClaude45ReasoningModel(createModel({ id: 'claude-3-sonnet' }))).toBe(false)
})
it('identifies claude 4 variants', () => {
expect(isClaude4SeriesModel(createModel({ id: 'claude-opus-4' }))).toBe(true)
expect(isClaude4SeriesModel(createModel({ id: 'claude-sonnet-4@20250514' }))).toBe(true)
expect(isClaude4SeriesModel(createModel({ id: 'anthropic.claude-sonnet-4-20250514-v1:0' }))).toBe(true)
expect(isClaude4SeriesModel(createModel({ id: 'claude-4.2-sonnet-variant' }))).toBe(false)
expect(isClaude4SeriesModel(createModel({ id: 'claude-3-haiku' }))).toBe(false)
})

View File

@@ -125,371 +125,195 @@ describe('model utils', () => {
openAIWebSearchOnlyMock.mockReturnValue(false)
})
describe('OpenAI model detection', () => {
describe('isOpenAILLMModel', () => {
it('returns false for undefined model', () => {
expect(isOpenAILLMModel(undefined as unknown as Model)).toBe(false)
})
it('detects OpenAI LLM models through reasoning and GPT prefix', () => {
expect(isOpenAILLMModel(undefined as unknown as Model)).toBe(false)
expect(isOpenAILLMModel(createModel({ id: 'gpt-4o-image' }))).toBe(false)
it('returns false for image generation models', () => {
expect(isOpenAILLMModel(createModel({ id: 'gpt-4o-image' }))).toBe(false)
})
reasoningMock.mockReturnValueOnce(true)
expect(isOpenAILLMModel(createModel({ id: 'o1-preview' }))).toBe(true)
it('returns true for reasoning models', () => {
reasoningMock.mockReturnValueOnce(true)
expect(isOpenAILLMModel(createModel({ id: 'o1-preview' }))).toBe(true)
})
expect(isOpenAILLMModel(createModel({ id: 'GPT-5-turbo' }))).toBe(true)
})
it('returns true for GPT-prefixed models', () => {
expect(isOpenAILLMModel(createModel({ id: 'GPT-5-turbo' }))).toBe(true)
})
it('detects OpenAI models via GPT prefix or reasoning support', () => {
expect(isOpenAIModel(createModel({ id: 'gpt-4.1' }))).toBe(true)
reasoningMock.mockReturnValueOnce(true)
expect(isOpenAIModel(createModel({ id: 'o3' }))).toBe(true)
})
it('evaluates support for flex service tier and alias helper', () => {
expect(isSupportFlexServiceTierModel(createModel({ id: 'o3' }))).toBe(true)
expect(isSupportFlexServiceTierModel(createModel({ id: 'o3-mini' }))).toBe(false)
expect(isSupportFlexServiceTierModel(createModel({ id: 'o4-mini' }))).toBe(true)
expect(isSupportFlexServiceTierModel(createModel({ id: 'gpt-5-preview' }))).toBe(true)
expect(isSupportedFlexServiceTier(createModel({ id: 'gpt-4o' }))).toBe(false)
})
it('detects verbosity support for GPT-5+ families', () => {
expect(isSupportVerbosityModel(createModel({ id: 'gpt-5' }))).toBe(true)
expect(isSupportVerbosityModel(createModel({ id: 'gpt-5-chat' }))).toBe(false)
expect(isSupportVerbosityModel(createModel({ id: 'gpt-5.1-preview' }))).toBe(true)
})
it('limits verbosity controls for GPT-5 Pro models', () => {
const proModel = createModel({ id: 'gpt-5-pro' })
const previewModel = createModel({ id: 'gpt-5-preview' })
expect(getModelSupportedVerbosity(proModel)).toEqual([undefined, 'high'])
expect(getModelSupportedVerbosity(previewModel)).toEqual([undefined, 'low', 'medium', 'high'])
expect(isGPT5ProModel(proModel)).toBe(true)
expect(isGPT5ProModel(previewModel)).toBe(false)
})
it('identifies OpenAI chat-completion-only models', () => {
expect(isOpenAIChatCompletionOnlyModel(createModel({ id: 'gpt-4o-search-preview' }))).toBe(true)
expect(isOpenAIChatCompletionOnlyModel(createModel({ id: 'o1-mini' }))).toBe(true)
expect(isOpenAIChatCompletionOnlyModel(createModel({ id: 'gpt-4o' }))).toBe(false)
})
it('filters unsupported OpenAI catalog entries', () => {
expect(isSupportedModel({ id: 'gpt-4', object: 'model' } as any)).toBe(true)
expect(isSupportedModel({ id: 'tts-1', object: 'model' } as any)).toBe(false)
})
it('calculates temperature/top-p support correctly', () => {
const model = createModel({ id: 'o1' })
reasoningMock.mockReturnValue(true)
expect(isNotSupportTemperatureAndTopP(model)).toBe(true)
const openWeight = createModel({ id: 'gpt-oss-debug' })
expect(isNotSupportTemperatureAndTopP(openWeight)).toBe(false)
const chatOnly = createModel({ id: 'o1-preview' })
reasoningMock.mockReturnValue(false)
expect(isNotSupportTemperatureAndTopP(chatOnly)).toBe(true)
const qwenMt = createModel({ id: 'qwen-mt-large', provider: 'aliyun' })
expect(isNotSupportTemperatureAndTopP(qwenMt)).toBe(true)
})
it('handles gemma and gemini detections plus zhipu tagging', () => {
expect(isGemmaModel(createModel({ id: 'Gemma-3-27B' }))).toBe(true)
expect(isGemmaModel(createModel({ group: 'Gemma' }))).toBe(true)
expect(isGemmaModel(createModel({ id: 'gpt-4o' }))).toBe(false)
expect(isGeminiModel(createModel({ id: 'Gemini-2.0' }))).toBe(true)
expect(isZhipuModel(createModel({ provider: 'zhipu' }))).toBe(true)
expect(isZhipuModel(createModel({ provider: 'openai' }))).toBe(false)
})
it('groups qwen models by prefix', () => {
const qwen = createModel({ id: 'Qwen-7B', provider: 'qwen', name: 'Qwen-7B' })
const qwenOmni = createModel({ id: 'qwen2.5-omni', name: 'qwen2.5-omni' })
const other = createModel({ id: 'deepseek-v3', group: 'DeepSeek' })
const grouped = groupQwenModels([qwen, qwenOmni, other])
expect(Object.keys(grouped)).toContain('qwen-7b')
expect(Object.keys(grouped)).toContain('qwen2.5')
expect(grouped.DeepSeek).toContain(other)
})
it('aggregates boolean helpers based on regex rules', () => {
expect(isAnthropicModel(createModel({ id: 'claude-3.5' }))).toBe(true)
expect(isQwenMTModel(createModel({ id: 'qwen-mt-plus' }))).toBe(true)
expect(isNotSupportSystemMessageModel(createModel({ id: 'gemma-moe' }))).toBe(true)
expect(isOpenAIOpenWeightModel(createModel({ id: 'gpt-oss-free' }))).toBe(true)
})
describe('isNotSupportedTextDelta', () => {
it('returns true for qwen-mt-turbo and qwen-mt-plus models', () => {
// qwen-mt series that don't support text delta
expect(isNotSupportTextDeltaModel(createModel({ id: 'qwen-mt-turbo' }))).toBe(true)
expect(isNotSupportTextDeltaModel(createModel({ id: 'qwen-mt-plus' }))).toBe(true)
expect(isNotSupportTextDeltaModel(createModel({ id: 'Qwen-MT-Turbo' }))).toBe(true)
expect(isNotSupportTextDeltaModel(createModel({ id: 'QWEN-MT-PLUS' }))).toBe(true)
})
describe('isOpenAIModel', () => {
it('detects models via GPT prefix', () => {
expect(isOpenAIModel(createModel({ id: 'gpt-4.1' }))).toBe(true)
})
it('returns false for qwen-mt-flash and other models', () => {
// qwen-mt-flash supports text delta
expect(isNotSupportTextDeltaModel(createModel({ id: 'qwen-mt-flash' }))).toBe(false)
expect(isNotSupportTextDeltaModel(createModel({ id: 'Qwen-MT-Flash' }))).toBe(false)
it('detects models via reasoning support', () => {
reasoningMock.mockReturnValueOnce(true)
expect(isOpenAIModel(createModel({ id: 'o3' }))).toBe(true)
})
// Legacy qwen models without mt prefix (support text delta)
expect(isNotSupportTextDeltaModel(createModel({ id: 'qwen-turbo' }))).toBe(false)
expect(isNotSupportTextDeltaModel(createModel({ id: 'qwen-plus' }))).toBe(false)
// Other qwen models
expect(isNotSupportTextDeltaModel(createModel({ id: 'qwen-max' }))).toBe(false)
expect(isNotSupportTextDeltaModel(createModel({ id: 'qwen2.5-72b' }))).toBe(false)
expect(isNotSupportTextDeltaModel(createModel({ id: 'qwen-vl-plus' }))).toBe(false)
// Non-qwen models
expect(isNotSupportTextDeltaModel(createModel({ id: 'gpt-4o' }))).toBe(false)
expect(isNotSupportTextDeltaModel(createModel({ id: 'claude-3.5' }))).toBe(false)
expect(isNotSupportTextDeltaModel(createModel({ id: 'glm-4-plus' }))).toBe(false)
})
describe('isOpenAIChatCompletionOnlyModel', () => {
it('identifies chat-completion-only models', () => {
expect(isOpenAIChatCompletionOnlyModel(createModel({ id: 'gpt-4o-search-preview' }))).toBe(true)
expect(isOpenAIChatCompletionOnlyModel(createModel({ id: 'o1-mini' }))).toBe(true)
})
it('handles models with version suffixes', () => {
// qwen-mt models with version suffixes
expect(isNotSupportTextDeltaModel(createModel({ id: 'qwen-mt-turbo-1201' }))).toBe(true)
expect(isNotSupportTextDeltaModel(createModel({ id: 'qwen-mt-plus-0828' }))).toBe(true)
it('returns false for general models', () => {
expect(isOpenAIChatCompletionOnlyModel(createModel({ id: 'gpt-4o' }))).toBe(false)
})
// Legacy qwen models with version suffixes (support text delta)
expect(isNotSupportTextDeltaModel(createModel({ id: 'qwen-turbo-0828' }))).toBe(false)
expect(isNotSupportTextDeltaModel(createModel({ id: 'qwen-plus-latest' }))).toBe(false)
})
})
describe('GPT-5 family detection', () => {
describe('isGPT5SeriesModel', () => {
it('returns true for GPT-5 models', () => {
expect(isGPT5SeriesModel(createModel({ id: 'gpt-5-preview' }))).toBe(true)
})
it('returns false for GPT-5.1 models', () => {
expect(isGPT5SeriesModel(createModel({ id: 'gpt-5.1-preview' }))).toBe(false)
})
})
describe('isGPT51SeriesModel', () => {
it('returns true for GPT-5.1 models', () => {
expect(isGPT51SeriesModel(createModel({ id: 'gpt-5.1-mini' }))).toBe(true)
})
})
describe('isGPT5SeriesReasoningModel', () => {
it('returns true for GPT-5 reasoning models', () => {
expect(isGPT5SeriesReasoningModel(createModel({ id: 'gpt-5' }))).toBe(true)
})
it('returns false for gpt-5-chat', () => {
expect(isGPT5SeriesReasoningModel(createModel({ id: 'gpt-5-chat' }))).toBe(false)
})
})
describe('isGPT5ProModel', () => {
it('returns true for GPT-5 Pro models', () => {
expect(isGPT5ProModel(createModel({ id: 'gpt-5-pro' }))).toBe(true)
})
it('returns false for non-Pro GPT-5 models', () => {
expect(isGPT5ProModel(createModel({ id: 'gpt-5-preview' }))).toBe(false)
})
})
it('evaluates GPT-5 family helpers', () => {
expect(isGPT5SeriesModel(createModel({ id: 'gpt-5-preview' }))).toBe(true)
expect(isGPT5SeriesModel(createModel({ id: 'gpt-5.1-preview' }))).toBe(false)
expect(isGPT51SeriesModel(createModel({ id: 'gpt-5.1-mini' }))).toBe(true)
expect(isGPT5SeriesReasoningModel(createModel({ id: 'gpt-5-prompt' }))).toBe(true)
expect(isSupportVerbosityModel(createModel({ id: 'gpt-5-chat' }))).toBe(false)
})
describe('Verbosity support', () => {
describe('isSupportVerbosityModel', () => {
it('returns true for GPT-5 models', () => {
expect(isSupportVerbosityModel(createModel({ id: 'gpt-5' }))).toBe(true)
})
it('wraps generate/vision helpers that operate on arrays', () => {
const models = [createModel({ id: 'gpt-4o' }), createModel({ id: 'gpt-4o-mini' })]
expect(isVisionModels(models)).toBe(true)
visionMock.mockReturnValueOnce(true).mockReturnValueOnce(false)
expect(isVisionModels(models)).toBe(false)
it('returns false for GPT-5 chat models', () => {
expect(isSupportVerbosityModel(createModel({ id: 'gpt-5-chat' }))).toBe(false)
})
it('returns true for GPT-5.1 models', () => {
expect(isSupportVerbosityModel(createModel({ id: 'gpt-5.1-preview' }))).toBe(true)
})
})
describe('getModelSupportedVerbosity', () => {
it('returns only "high" for GPT-5 Pro models', () => {
expect(getModelSupportedVerbosity(createModel({ id: 'gpt-5-pro' }))).toEqual([undefined, 'high'])
expect(getModelSupportedVerbosity(createModel({ id: 'gpt-5-pro-2025-10-06' }))).toEqual([undefined, 'high'])
})
it('returns all levels for non-Pro GPT-5 models', () => {
const previewModel = createModel({ id: 'gpt-5-preview' })
expect(getModelSupportedVerbosity(previewModel)).toEqual([undefined, 'low', 'medium', 'high'])
})
it('returns all levels for GPT-5.1 models', () => {
const gpt51Model = createModel({ id: 'gpt-5.1-preview' })
expect(getModelSupportedVerbosity(gpt51Model)).toEqual([undefined, 'low', 'medium', 'high'])
})
it('returns only undefined for non-GPT-5 models', () => {
expect(getModelSupportedVerbosity(createModel({ id: 'gpt-4o' }))).toEqual([undefined])
expect(getModelSupportedVerbosity(createModel({ id: 'claude-3.5' }))).toEqual([undefined])
})
it('returns only undefined for undefined/null input', () => {
expect(getModelSupportedVerbosity(undefined)).toEqual([undefined])
expect(getModelSupportedVerbosity(null)).toEqual([undefined])
})
})
expect(isGenerateImageModels(models)).toBe(true)
generateImageMock.mockReturnValueOnce(true).mockReturnValueOnce(false)
expect(isGenerateImageModels(models)).toBe(false)
})
describe('Flex service tier support', () => {
describe('isSupportFlexServiceTierModel', () => {
it('returns true for supported models', () => {
expect(isSupportFlexServiceTierModel(createModel({ id: 'o3' }))).toBe(true)
expect(isSupportFlexServiceTierModel(createModel({ id: 'o4-mini' }))).toBe(true)
expect(isSupportFlexServiceTierModel(createModel({ id: 'gpt-5-preview' }))).toBe(true)
})
it('filters models for agent usage', () => {
expect(agentModelFilter(createModel())).toBe(true)
it('returns false for unsupported models', () => {
expect(isSupportFlexServiceTierModel(createModel({ id: 'o3-mini' }))).toBe(false)
})
})
embeddingMock.mockReturnValueOnce(true)
expect(agentModelFilter(createModel({ id: 'text-embedding' }))).toBe(false)
describe('isSupportedFlexServiceTier', () => {
it('returns false for non-flex models', () => {
expect(isSupportedFlexServiceTier(createModel({ id: 'gpt-4o' }))).toBe(false)
})
})
embeddingMock.mockReturnValue(false)
rerankMock.mockReturnValueOnce(true)
expect(agentModelFilter(createModel({ id: 'rerank' }))).toBe(false)
rerankMock.mockReturnValue(false)
textToImageMock.mockReturnValueOnce(true)
expect(agentModelFilter(createModel({ id: 'gpt-image-1' }))).toBe(false)
})
describe('Temperature and top-p support', () => {
describe('isNotSupportTemperatureAndTopP', () => {
it('returns true for reasoning models', () => {
const model = createModel({ id: 'o1' })
reasoningMock.mockReturnValue(true)
expect(isNotSupportTemperatureAndTopP(model)).toBe(true)
})
it('identifies models with maximum temperature of 1.0', () => {
// Zhipu models should have max temperature of 1.0
expect(isMaxTemperatureOneModel(createModel({ id: 'glm-4' }))).toBe(true)
expect(isMaxTemperatureOneModel(createModel({ id: 'GLM-4-Plus' }))).toBe(true)
expect(isMaxTemperatureOneModel(createModel({ id: 'glm-3-turbo' }))).toBe(true)
it('returns false for open weight models', () => {
const openWeight = createModel({ id: 'gpt-oss-debug' })
expect(isNotSupportTemperatureAndTopP(openWeight)).toBe(false)
})
// Anthropic models should have max temperature of 1.0
expect(isMaxTemperatureOneModel(createModel({ id: 'claude-3.5-sonnet' }))).toBe(true)
expect(isMaxTemperatureOneModel(createModel({ id: 'Claude-3-opus' }))).toBe(true)
expect(isMaxTemperatureOneModel(createModel({ id: 'claude-2.1' }))).toBe(true)
it('returns true for chat-only models without reasoning', () => {
const chatOnly = createModel({ id: 'o1-preview' })
reasoningMock.mockReturnValue(false)
expect(isNotSupportTemperatureAndTopP(chatOnly)).toBe(true)
})
// Moonshot models should have max temperature of 1.0
expect(isMaxTemperatureOneModel(createModel({ id: 'moonshot-1.0' }))).toBe(true)
expect(isMaxTemperatureOneModel(createModel({ id: 'kimi-k2-thinking' }))).toBe(true)
expect(isMaxTemperatureOneModel(createModel({ id: 'Moonshot-Pro' }))).toBe(true)
it('returns true for Qwen MT models', () => {
const qwenMt = createModel({ id: 'qwen-mt-large', provider: 'aliyun' })
expect(isNotSupportTemperatureAndTopP(qwenMt)).toBe(true)
})
})
})
describe('Text delta support', () => {
describe('isNotSupportTextDeltaModel', () => {
it('returns true for qwen-mt-turbo and qwen-mt-plus models', () => {
expect(isNotSupportTextDeltaModel(createModel({ id: 'qwen-mt-turbo' }))).toBe(true)
expect(isNotSupportTextDeltaModel(createModel({ id: 'qwen-mt-plus' }))).toBe(true)
expect(isNotSupportTextDeltaModel(createModel({ id: 'Qwen-MT-Turbo' }))).toBe(true)
expect(isNotSupportTextDeltaModel(createModel({ id: 'QWEN-MT-PLUS' }))).toBe(true)
})
it('returns false for qwen-mt-flash and other models', () => {
expect(isNotSupportTextDeltaModel(createModel({ id: 'qwen-mt-flash' }))).toBe(false)
expect(isNotSupportTextDeltaModel(createModel({ id: 'Qwen-MT-Flash' }))).toBe(false)
expect(isNotSupportTextDeltaModel(createModel({ id: 'qwen-turbo' }))).toBe(false)
expect(isNotSupportTextDeltaModel(createModel({ id: 'qwen-plus' }))).toBe(false)
expect(isNotSupportTextDeltaModel(createModel({ id: 'qwen-max' }))).toBe(false)
expect(isNotSupportTextDeltaModel(createModel({ id: 'qwen2.5-72b' }))).toBe(false)
expect(isNotSupportTextDeltaModel(createModel({ id: 'qwen-vl-plus' }))).toBe(false)
})
it('returns false for non-qwen models', () => {
expect(isNotSupportTextDeltaModel(createModel({ id: 'gpt-4o' }))).toBe(false)
expect(isNotSupportTextDeltaModel(createModel({ id: 'claude-3.5' }))).toBe(false)
expect(isNotSupportTextDeltaModel(createModel({ id: 'glm-4-plus' }))).toBe(false)
})
it('handles models with version suffixes', () => {
expect(isNotSupportTextDeltaModel(createModel({ id: 'qwen-mt-turbo-1201' }))).toBe(true)
expect(isNotSupportTextDeltaModel(createModel({ id: 'qwen-mt-plus-0828' }))).toBe(true)
expect(isNotSupportTextDeltaModel(createModel({ id: 'qwen-turbo-0828' }))).toBe(false)
expect(isNotSupportTextDeltaModel(createModel({ id: 'qwen-plus-latest' }))).toBe(false)
})
})
})
describe('Model provider detection', () => {
describe('isGemmaModel', () => {
it('detects Gemma models by ID', () => {
expect(isGemmaModel(createModel({ id: 'Gemma-3-27B' }))).toBe(true)
})
it('detects Gemma models by group', () => {
expect(isGemmaModel(createModel({ group: 'Gemma' }))).toBe(true)
})
it('returns false for non-Gemma models', () => {
expect(isGemmaModel(createModel({ id: 'gpt-4o' }))).toBe(false)
})
})
describe('isGeminiModel', () => {
it('detects Gemini models', () => {
expect(isGeminiModel(createModel({ id: 'Gemini-2.0' }))).toBe(true)
})
})
describe('isZhipuModel', () => {
it('detects Zhipu models by provider', () => {
expect(isZhipuModel(createModel({ provider: 'zhipu' }))).toBe(true)
})
it('returns false for non-Zhipu models', () => {
expect(isZhipuModel(createModel({ provider: 'openai' }))).toBe(false)
})
})
describe('isAnthropicModel', () => {
it('detects Anthropic models', () => {
expect(isAnthropicModel(createModel({ id: 'claude-3.5' }))).toBe(true)
})
})
describe('isQwenMTModel', () => {
it('detects Qwen MT models', () => {
expect(isQwenMTModel(createModel({ id: 'qwen-mt-plus' }))).toBe(true)
})
})
describe('isOpenAIOpenWeightModel', () => {
it('detects OpenAI open weight models', () => {
expect(isOpenAIOpenWeightModel(createModel({ id: 'gpt-oss-free' }))).toBe(true)
})
})
})
describe('System message support', () => {
describe('isNotSupportSystemMessageModel', () => {
it('returns true for models that do not support system messages', () => {
expect(isNotSupportSystemMessageModel(createModel({ id: 'gemma-moe' }))).toBe(true)
})
})
})
describe('Model grouping', () => {
describe('groupQwenModels', () => {
it('groups qwen models by prefix', () => {
const qwen = createModel({ id: 'Qwen-7B', provider: 'qwen', name: 'Qwen-7B' })
const qwenOmni = createModel({ id: 'qwen2.5-omni', name: 'qwen2.5-omni' })
const other = createModel({ id: 'deepseek-v3', group: 'DeepSeek' })
const grouped = groupQwenModels([qwen, qwenOmni, other])
expect(Object.keys(grouped)).toContain('qwen-7b')
expect(Object.keys(grouped)).toContain('qwen2.5')
expect(grouped.DeepSeek).toContain(other)
})
})
})
describe('Vision and image generation', () => {
describe('isVisionModels', () => {
it('returns true when all models support vision', () => {
const models = [createModel({ id: 'gpt-4o' }), createModel({ id: 'gpt-4o-mini' })]
expect(isVisionModels(models)).toBe(true)
})
it('returns false when some models do not support vision', () => {
const models = [createModel({ id: 'gpt-4o' }), createModel({ id: 'gpt-4o-mini' })]
visionMock.mockReturnValueOnce(true).mockReturnValueOnce(false)
expect(isVisionModels(models)).toBe(false)
})
})
describe('isGenerateImageModels', () => {
it('returns true when all models support image generation', () => {
const models = [createModel({ id: 'gpt-4o' }), createModel({ id: 'gpt-4o-mini' })]
expect(isGenerateImageModels(models)).toBe(true)
})
it('returns false when some models do not support image generation', () => {
const models = [createModel({ id: 'gpt-4o' }), createModel({ id: 'gpt-4o-mini' })]
generateImageMock.mockReturnValueOnce(true).mockReturnValueOnce(false)
expect(isGenerateImageModels(models)).toBe(false)
})
})
})
describe('Model filtering', () => {
describe('isSupportedModel', () => {
it('filters supported OpenAI catalog entries', () => {
expect(isSupportedModel({ id: 'gpt-4', object: 'model' } as any)).toBe(true)
})
it('filters unsupported OpenAI catalog entries', () => {
expect(isSupportedModel({ id: 'tts-1', object: 'model' } as any)).toBe(false)
})
})
describe('agentModelFilter', () => {
it('returns true for regular models', () => {
expect(agentModelFilter(createModel())).toBe(true)
})
it('filters out embedding models', () => {
embeddingMock.mockReturnValueOnce(true)
expect(agentModelFilter(createModel({ id: 'text-embedding' }))).toBe(false)
})
it('filters out rerank models', () => {
embeddingMock.mockReturnValue(false)
rerankMock.mockReturnValueOnce(true)
expect(agentModelFilter(createModel({ id: 'rerank' }))).toBe(false)
})
it('filters out text-to-image models', () => {
rerankMock.mockReturnValue(false)
textToImageMock.mockReturnValueOnce(true)
expect(agentModelFilter(createModel({ id: 'gpt-image-1' }))).toBe(false)
})
})
})
describe('Temperature limits', () => {
describe('isMaxTemperatureOneModel', () => {
it('returns true for Zhipu models', () => {
expect(isMaxTemperatureOneModel(createModel({ id: 'glm-4' }))).toBe(true)
expect(isMaxTemperatureOneModel(createModel({ id: 'GLM-4-Plus' }))).toBe(true)
expect(isMaxTemperatureOneModel(createModel({ id: 'glm-3-turbo' }))).toBe(true)
})
it('returns true for Anthropic models', () => {
expect(isMaxTemperatureOneModel(createModel({ id: 'claude-3.5-sonnet' }))).toBe(true)
expect(isMaxTemperatureOneModel(createModel({ id: 'Claude-3-opus' }))).toBe(true)
expect(isMaxTemperatureOneModel(createModel({ id: 'claude-2.1' }))).toBe(true)
})
it('returns true for Moonshot models', () => {
expect(isMaxTemperatureOneModel(createModel({ id: 'moonshot-1.0' }))).toBe(true)
expect(isMaxTemperatureOneModel(createModel({ id: 'kimi-k2-thinking' }))).toBe(true)
expect(isMaxTemperatureOneModel(createModel({ id: 'Moonshot-Pro' }))).toBe(true)
})
it('returns false for other models', () => {
expect(isMaxTemperatureOneModel(createModel({ id: 'gpt-4o' }))).toBe(false)
expect(isMaxTemperatureOneModel(createModel({ id: 'gpt-4-turbo' }))).toBe(false)
expect(isMaxTemperatureOneModel(createModel({ id: 'qwen-max' }))).toBe(false)
expect(isMaxTemperatureOneModel(createModel({ id: 'gemini-pro' }))).toBe(false)
})
})
// Other models should return false
expect(isMaxTemperatureOneModel(createModel({ id: 'gpt-4o' }))).toBe(false)
expect(isMaxTemperatureOneModel(createModel({ id: 'gpt-4-turbo' }))).toBe(false)
expect(isMaxTemperatureOneModel(createModel({ id: 'qwen-max' }))).toBe(false)
expect(isMaxTemperatureOneModel(createModel({ id: 'gemini-pro' }))).toBe(false)
})
})
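The `groupQwenModels` expectations above (keys `qwen-7b` and `qwen2.5`, with a fallback to the model's group) can be sketched roughly as follows; this is an illustrative reimplementation with a guessed prefix rule, not the project's actual code:

```typescript
// Illustrative sketch: group models by a qwen family prefix, falling back
// to the model's group name. The regex is an assumption about the rule the
// tests exercise: versioned ids keep only "qwenX.Y", un-versioned ids keep
// their first suffix ("qwen-7b").
type SimpleModel = { id: string; group?: string }

function getGroupKey(model: SimpleModel): string {
  const id = model.id.toLowerCase()
  const match = id.match(/^(qwen\d+(?:\.\d+)?|qwen-\w+)/)
  return match ? match[1] : (model.group ?? 'Other')
}

function groupByFamily(models: SimpleModel[]): Record<string, SimpleModel[]> {
  const grouped: Record<string, SimpleModel[]> = {}
  for (const model of models) {
    const key = getGroupKey(model)
    if (!grouped[key]) grouped[key] = []
    grouped[key].push(model)
  }
  return grouped
}
```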

View File

@@ -396,11 +396,7 @@ export function isClaude45ReasoningModel(model: Model): boolean {
export function isClaude4SeriesModel(model: Model): boolean {
const modelId = getLowerBaseModelName(model.id, '/')
// Supports various formats including:
// - Direct API: claude-sonnet-4, claude-opus-4-20250514
// - GCP Vertex AI: claude-sonnet-4@20250514
// - AWS Bedrock: anthropic.claude-sonnet-4-20250514-v1:0
const regex = /claude-(sonnet|opus|haiku)-4(?:[.-]\d+)?(?:[@\-:][\w\-:]+)?$/i
const regex = /claude-(sonnet|opus|haiku)-4(?:[.-]\d+)?(?:-[\w-]+)?$/i
return regex.test(modelId)
}
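The widened pattern in this hunk is meant to also accept Vertex AI `@date` and Bedrock `:version` suffixes. A standalone check of that regex (plain lowercasing stands in for `getLowerBaseModelName`, which does more in the real code):

```typescript
// Standalone check of the widened Claude 4 pattern from this hunk.
const CLAUDE_4_REGEX = /claude-(sonnet|opus|haiku)-4(?:[.-]\d+)?(?:[@\-:][\w\-:]+)?$/i

function isClaude4Like(modelId: string): boolean {
  // Lowercasing approximates getLowerBaseModelName for this sketch.
  return CLAUDE_4_REGEX.test(modelId.toLowerCase())
}
```

Run against the ids used in the updated tests, it accepts direct-API, Vertex, and Bedrock forms while still rejecting Claude 3 and `claude-4.2-*` ids.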
@@ -460,19 +456,16 @@ export const isSupportedThinkingTokenZhipuModel = (model: Model): boolean => {
}
export const isDeepSeekHybridInferenceModel = (model: Model) => {
const { idResult, nameResult } = withModelIdAndNameAsId(model, (model) => {
const modelId = getLowerBaseModelName(model.id)
// DeepSeek itself distinguishes reasoning control via the "chat" and "reasoner" ids; other providers need separate id checks, since their ids may differ
// openrouter: deepseek/deepseek-chat-v3.1 — other providers might mimic DeepSeek and ship a non-thinking model under the same id, so there is some risk here
// Matches: "deepseek-v3" followed by ".digit" or "-digit".
// Optionally, this can be followed by ".alphanumeric_sequence" or "-alphanumeric_sequence"
// until the end of the string.
// Examples: deepseek-v3.1, deepseek-v3-1, deepseek-v3.1.2, deepseek-v3.1-alpha
// Does NOT match: deepseek-v3.123 (missing separator after '1'), deepseek-v3.x (x isn't a digit)
// TODO: move to utils and add test cases
return /deepseek-v3(?:\.\d|-\d)(?:(\.|-)\w+)?$/.test(modelId) || modelId.includes('deepseek-chat-v3.1')
})
return idResult || nameResult
const modelId = getLowerBaseModelName(model.id)
// DeepSeek itself distinguishes reasoning control via the "chat" and "reasoner" ids; other providers need separate id checks, since their ids may differ
// openrouter: deepseek/deepseek-chat-v3.1 — other providers might mimic DeepSeek and ship a non-thinking model under the same id, so there is some risk here
// Matches: "deepseek-v3" followed by ".digit" or "-digit".
// Optionally, this can be followed by ".alphanumeric_sequence" or "-alphanumeric_sequence"
// until the end of the string.
// Examples: deepseek-v3.1, deepseek-v3-1, deepseek-v3.1.2, deepseek-v3.1-alpha
// Does NOT match: deepseek-v3.123 (missing separator after '1'), deepseek-v3.x (x isn't a digit)
// TODO: move to utils and add test cases
return /deepseek-v3(?:\.\d|-\d)(?:(\.|-)\w+)?$/.test(modelId) || modelId.includes('deepseek-chat-v3.1')
}
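The comment block documents exactly which ids the V3.x pattern should and should not match; that behavior can be verified in isolation (the `getLowerBaseModelName` normalization is omitted in this sketch):

```typescript
// Standalone check of the DeepSeek V3.x hybrid-inference pattern from this hunk.
const DEEPSEEK_V3_HYBRID = /deepseek-v3(?:\.\d|-\d)(?:(\.|-)\w+)?$/

function isDeepSeekV3Hybrid(modelId: string): boolean {
  // Mirrors the function body: regex match, plus the openrouter-style id check.
  return DEEPSEEK_V3_HYBRID.test(modelId) || modelId.includes('deepseek-chat-v3.1')
}
```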
export const isLingReasoningModel = (model?: Model): boolean => {
@@ -526,6 +519,7 @@ export function isReasoningModel(model?: Model): boolean {
REASONING_REGEX.test(model.name) ||
isSupportedThinkingTokenDoubaoModel(model) ||
isDeepSeekHybridInferenceModel(model) ||
isDeepSeekHybridInferenceModel({ ...model, id: model.name }) ||
false
)
}

View File

@@ -4,14 +4,7 @@ import { type Model, SystemProviderIds } from '@renderer/types'
import type { OpenAIVerbosity, ValidOpenAIVerbosity } from '@renderer/types/aiCoreTypes'
import { getLowerBaseModelName } from '@renderer/utils'
import {
isGPT5ProModel,
isGPT5SeriesModel,
isGPT51SeriesModel,
isOpenAIChatCompletionOnlyModel,
isOpenAIOpenWeightModel,
isOpenAIReasoningModel
} from './openai'
import { isOpenAIChatCompletionOnlyModel, isOpenAIOpenWeightModel, isOpenAIReasoningModel } from './openai'
import { isQwenMTModel } from './qwen'
import { isGenerateImageModel, isTextToImageModel, isVisionModel } from './vision'
export const NOT_SUPPORTED_REGEX = /(?:^tts|whisper|speech)/i
@@ -130,46 +123,21 @@ export const isNotSupportSystemMessageModel = (model: Model): boolean => {
return isQwenMTModel(model) || isGemmaModel(model)
}
// Verbosity settings are only supported by GPT-5 and newer models
// Specifically, GPT-5 and GPT-5.1 for now
// GPT-5 verbosity configuration
// gpt-5-pro only supports 'high', other GPT-5 models support all levels
const MODEL_SUPPORTED_VERBOSITY: readonly {
readonly validator: (model: Model) => boolean
readonly values: readonly ValidOpenAIVerbosity[]
}[] = [
// gpt-5-pro
{ validator: isGPT5ProModel, values: ['high'] },
// gpt-5 except gpt-5-pro
{
validator: (model: Model) => isGPT5SeriesModel(model) && !isGPT5ProModel(model),
values: ['low', 'medium', 'high']
},
// gpt-5.1
{ validator: isGPT51SeriesModel, values: ['low', 'medium', 'high'] }
]
export const MODEL_SUPPORTED_VERBOSITY: Record<string, ValidOpenAIVerbosity[]> = {
'gpt-5-pro': ['high'],
default: ['low', 'medium', 'high']
} as const
/**
* Returns the list of supported verbosity levels for the given model.
* If the model is not recognized as a GPT-5 series model, only `undefined` is returned.
* For GPT-5-pro, only 'high' is supported; for other GPT-5 models, 'low', 'medium', and 'high' are supported.
* For GPT-5.1 series models, 'low', 'medium', and 'high' are supported.
* @param model - The model to check
* @returns An array of supported verbosity levels, always including `undefined` as the first element
*/
export const getModelSupportedVerbosity = (model: Model | undefined | null): OpenAIVerbosity[] => {
if (!model) {
return [undefined]
export const getModelSupportedVerbosity = (model: Model): OpenAIVerbosity[] => {
const modelId = getLowerBaseModelName(model.id)
let supportedValues: ValidOpenAIVerbosity[]
if (modelId.includes('gpt-5-pro')) {
supportedValues = MODEL_SUPPORTED_VERBOSITY['gpt-5-pro']
} else {
supportedValues = MODEL_SUPPORTED_VERBOSITY.default
}
let supportedValues: ValidOpenAIVerbosity[] = []
for (const { validator, values } of MODEL_SUPPORTED_VERBOSITY) {
if (validator(model)) {
supportedValues = [...values]
break
}
}
return [undefined, ...supportedValues]
}
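The table-driven lookup above can be exercised on its own. Here is a trimmed sketch with stub id-based validators standing in for the real `isGPT5ProModel` / `isGPT5SeriesModel` / `isGPT51SeriesModel` helpers from `./openai`:

```typescript
// Trimmed sketch of the validator-table lookup for verbosity levels.
// The id checks below are simplified stand-ins, not the real helpers.
type Verbosity = 'low' | 'medium' | 'high'
type Model = { id: string }

const VERBOSITY_TABLE: readonly { validator: (m: Model) => boolean; values: readonly Verbosity[] }[] = [
  { validator: (m) => m.id.includes('gpt-5-pro'), values: ['high'] },
  // "gpt-5" not immediately followed by "." approximates GPT-5 (non-5.1).
  { validator: (m) => /gpt-5(?!\.)/.test(m.id), values: ['low', 'medium', 'high'] },
  { validator: (m) => m.id.includes('gpt-5.1'), values: ['low', 'medium', 'high'] }
]

function getSupportedVerbosity(model: Model | undefined): (Verbosity | undefined)[] {
  if (!model) return [undefined]
  const entry = VERBOSITY_TABLE.find(({ validator }) => validator(model))
  return [undefined, ...(entry?.values ?? [])]
}
```

The first matching entry wins, which is why the `gpt-5-pro` row must precede the general GPT-5 row.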

View File

@@ -1,5 +1,5 @@
import { throttle } from 'lodash'
import { useEffect, useMemo, useRef } from 'react'
import { useEffect, useRef } from 'react'
import { useTimer } from './useTimer'
@@ -12,18 +12,13 @@ import { useTimer } from './useTimer'
*/
export default function useScrollPosition(key: string, throttleWait?: number) {
const containerRef = useRef<HTMLDivElement>(null)
const scrollKey = useMemo(() => `scroll:${key}`, [key])
const scrollKeyRef = useRef(scrollKey)
const scrollKey = `scroll:${key}`
const { setTimeoutTimer } = useTimer()
useEffect(() => {
scrollKeyRef.current = scrollKey
}, [scrollKey])
const handleScroll = throttle(() => {
const position = containerRef.current?.scrollTop ?? 0
window.requestAnimationFrame(() => {
window.keyv.set(scrollKeyRef.current, position)
window.keyv.set(scrollKey, position)
})
}, throttleWait ?? 100)
@@ -33,9 +28,5 @@ export default function useScrollPosition(key: string, throttleWait?: number) {
setTimeoutTimer('scrollEffect', scroll, 50)
}, [scrollKey, setTimeoutTimer])
useEffect(() => {
return () => handleScroll.cancel()
}, [handleScroll])
return { containerRef, handleScroll }
}

View File

@@ -1,4 +1,4 @@
import { useCallback, useEffect, useRef } from 'react'
import { useEffect, useRef } from 'react'
/**
* Timer management hook for setTimeout and setInterval timers; individual timers are identified by a key
@@ -43,38 +43,10 @@ export const useTimer = () => {
const timeoutMapRef = useRef(new Map<string, NodeJS.Timeout>())
const intervalMapRef = useRef(new Map<string, NodeJS.Timeout>())
/**
* Clear the setTimeout timer registered under the given key
* @param key - Timer identifier
*/
const clearTimeoutTimer = useCallback((key: string) => {
clearTimeout(timeoutMapRef.current.get(key))
timeoutMapRef.current.delete(key)
}, [])
/**
* Clear the setInterval timer registered under the given key
* @param key - Timer identifier
*/
const clearIntervalTimer = useCallback((key: string) => {
clearInterval(intervalMapRef.current.get(key))
intervalMapRef.current.delete(key)
}, [])
/**
* Clear all timers, both setTimeout and setInterval
*/
const clearAllTimers = useCallback(() => {
timeoutMapRef.current.forEach((timer) => clearTimeout(timer))
intervalMapRef.current.forEach((timer) => clearInterval(timer))
timeoutMapRef.current.clear()
intervalMapRef.current.clear()
}, [])
// Automatically clean up all timers when the component unmounts
useEffect(() => {
return () => clearAllTimers()
}, [clearAllTimers])
}, [])
/**
* Register a setTimeout timer
@@ -93,15 +65,12 @@ export const useTimer = () => {
* cleanup();
* ```
*/
const setTimeoutTimer = useCallback(
(key: string, ...args: Parameters<typeof setTimeout>) => {
clearTimeout(timeoutMapRef.current.get(key))
const timer = setTimeout(...args)
timeoutMapRef.current.set(key, timer)
return () => clearTimeoutTimer(key)
},
[clearTimeoutTimer]
)
const setTimeoutTimer = (key: string, ...args: Parameters<typeof setTimeout>) => {
clearTimeout(timeoutMapRef.current.get(key))
const timer = setTimeout(...args)
timeoutMapRef.current.set(key, timer)
return () => clearTimeoutTimer(key)
}
/**
* Register a setInterval timer
@@ -120,31 +89,56 @@ export const useTimer = () => {
* cleanup();
* ```
*/
const setIntervalTimer = useCallback(
(key: string, ...args: Parameters<typeof setInterval>) => {
clearInterval(intervalMapRef.current.get(key))
const timer = setInterval(...args)
intervalMapRef.current.set(key, timer)
return () => clearIntervalTimer(key)
},
[clearIntervalTimer]
)
const setIntervalTimer = (key: string, ...args: Parameters<typeof setInterval>) => {
clearInterval(intervalMapRef.current.get(key))
const timer = setInterval(...args)
intervalMapRef.current.set(key, timer)
return () => clearIntervalTimer(key)
}
/**
 * Clears the setTimeout timer for the given key
 * @param key - timer identifier
*/
const clearTimeoutTimer = (key: string) => {
clearTimeout(timeoutMapRef.current.get(key))
timeoutMapRef.current.delete(key)
}
/**
 * Clears the setInterval timer for the given key
 * @param key - timer identifier
*/
const clearIntervalTimer = (key: string) => {
clearInterval(intervalMapRef.current.get(key))
intervalMapRef.current.delete(key)
}
/**
 * Clears all setTimeout timers
*/
const clearAllTimeoutTimers = useCallback(() => {
const clearAllTimeoutTimers = () => {
timeoutMapRef.current.forEach((timer) => clearTimeout(timer))
timeoutMapRef.current.clear()
}, [])
}
/**
 * Clears all setInterval timers
*/
const clearAllIntervalTimers = useCallback(() => {
const clearAllIntervalTimers = () => {
intervalMapRef.current.forEach((timer) => clearInterval(timer))
intervalMapRef.current.clear()
}, [])
}
/**
 * Clears all timers, both setTimeout and setInterval
*/
const clearAllTimers = () => {
timeoutMapRef.current.forEach((timer) => clearTimeout(timer))
intervalMapRef.current.forEach((timer) => clearInterval(timer))
timeoutMapRef.current.clear()
intervalMapRef.current.clear()
}
return {
setTimeoutTimer,

View File

@@ -0,0 +1,26 @@
import store, { useAppSelector } from '@renderer/store'
import { setVolcengineProjectName, setVolcengineRegion } from '@renderer/store/llm'
import { useDispatch } from 'react-redux'
export function useVolcengineSettings() {
const settings = useAppSelector((state) => state.llm.settings.volcengine)
const dispatch = useDispatch()
return {
...settings,
setRegion: (region: string) => dispatch(setVolcengineRegion(region)),
setProjectName: (projectName: string) => dispatch(setVolcengineProjectName(projectName))
}
}
export function getVolcengineSettings() {
return store.getState().llm.settings.volcengine
}
export function getVolcengineRegion() {
return store.getState().llm.settings.volcengine.region
}
export function getVolcengineProjectName() {
return store.getState().llm.settings.volcengine.projectName
}

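The non-hook getters read the same slice synchronously from the store singleton. A minimal sketch of the state shape they assume (field names taken from the selectors above; the real state lives in the llm Redux slice):

```typescript
// Assumed shape of state.llm.settings.volcengine, per the selectors above.
interface VolcengineSettings {
  region: string
  projectName: string
}

const state = {
  llm: {
    settings: {
      volcengine: { region: 'cn-beijing', projectName: 'default' } as VolcengineSettings
    }
  }
}

// Mirrors getVolcengineRegion / getVolcengineProjectName without the store singleton.
const getRegion = (s: typeof state) => s.llm.settings.volcengine.region
const getProjectName = (s: typeof state) => s.llm.settings.volcengine.projectName
```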
View File

@@ -280,7 +280,6 @@
"denied": "Tool request was denied.",
"timeout": "Tool request timed out before receiving approval."
},
"toolPendingFallback": "Tool",
"waiting": "Waiting for tool permission decision..."
},
"type": {
@@ -1209,7 +1208,7 @@
"endpoint_type": {
"anthropic": "Anthropic",
"gemini": "Gemini",
"image-generation": "Image Generation (OpenAI)",
"image-generation": "Image Generation",
"jina-rerank": "Jina Rerank",
"openai": "OpenAI",
"openai-response": "OpenAI-Response"
@@ -4511,6 +4510,23 @@
"private_key_placeholder": "Enter Service Account private key",
"title": "Service Account Configuration"
}
},
"volcengine": {
"access_key_id": "Access Key ID",
"access_key_id_help": "Your Volcengine Access Key ID",
"clear_credentials": "Clear Credentials",
"credentials_cleared": "Credentials cleared",
"credentials_required": "Please fill in Access Key ID and Secret Access Key",
"credentials_saved": "Credentials saved",
"description": "Volcengine is ByteDance's cloud service platform, providing Doubao and other large language model services. Please use IAM user's Access Key for authentication, do not use the root user credentials.",
"project_name": "Project Name",
"project_name_help": "Project name for endpoint filtering, default is 'default'",
"region": "Region",
"region_help": "Service region, e.g., cn-beijing",
"save_credentials": "Save Credentials",
"secret_access_key": "Secret Access Key",
"secret_access_key_help": "Your Volcengine Secret Access Key, please keep it secure",
"title": "Volcengine Configuration"
}
},
"proxy": {

View File

@@ -280,7 +280,6 @@
"denied": "工具请求已被拒绝。",
"timeout": "工具请求在收到批准前超时。"
},
"toolPendingFallback": "工具",
"waiting": "等待工具权限决定..."
},
"type": {
@@ -1209,7 +1208,7 @@
"endpoint_type": {
"anthropic": "Anthropic",
"gemini": "Gemini",
"image-generation": "图生成 (OpenAI)",
"image-generation": "图生成",
"jina-rerank": "Jina 重排序",
"openai": "OpenAI",
"openai-response": "OpenAI-Response"
@@ -4511,6 +4510,23 @@
"private_key_placeholder": "请输入 Service Account 私钥",
"title": "Service Account 配置"
}
},
"volcengine": {
"access_key_id": "Access Key ID",
"access_key_id_help": "您的火山引擎 Access Key ID",
"clear_credentials": "清除凭证",
"credentials_cleared": "凭证已清除",
"credentials_required": "请填写 Access Key ID 和 Secret Access Key",
"credentials_saved": "凭证已保存",
"description": "火山引擎是字节跳动旗下的云服务平台,提供豆包等大语言模型服务。请使用 IAM 子用户的 Access Key 进行身份验证,不要使用主账号的根用户密钥。",
"project_name": "项目名称",
"project_name_help": "用于筛选推理接入点的项目名称,默认为 'default'",
"region": "地域",
"region_help": "服务地域,例如 cn-beijing",
"save_credentials": "保存凭证",
"secret_access_key": "Secret Access Key",
"secret_access_key_help": "您的火山引擎 Secret Access Key请妥善保管",
"title": "火山引擎配置"
}
},
"proxy": {

View File

@@ -280,7 +280,6 @@
"denied": "工具請求已被拒絕。",
"timeout": "工具請求在收到核准前逾時。"
},
"toolPendingFallback": "工具",
"waiting": "等待工具權限決定..."
},
"type": {
@@ -1209,7 +1208,7 @@
"endpoint_type": {
"anthropic": "Anthropic",
"gemini": "Gemini",
"image-generation": "圖生成 (OpenAI)",
"image-generation": "圖生成",
"jina-rerank": "Jina Rerank",
"openai": "OpenAI",
"openai-response": "OpenAI-Response"
@@ -4511,6 +4510,23 @@
"private_key_placeholder": "輸入服務帳戶私密金鑰",
"title": "服務帳戶設定"
}
},
"volcengine": {
"access_key_id": "Access Key ID",
"access_key_id_help": "您的火山引擎 Access Key ID",
"clear_credentials": "清除憑證",
"credentials_cleared": "憑證已清除",
"credentials_required": "請填寫 Access Key ID 和 Secret Access Key",
"credentials_saved": "憑證已儲存",
"description": "火山引擎是字節跳動旗下的雲端服務平台,提供豆包等大型語言模型服務。請使用 IAM 子用戶的 Access Key 進行身份驗證,不要使用主帳號的根用戶密鑰。",
"project_name": "專案名稱",
"project_name_help": "用於篩選推論接入點的專案名稱,預設為 'default'",
"region": "地區",
"region_help": "服務地區,例如 cn-beijing",
"save_credentials": "儲存憑證",
"secret_access_key": "Secret Access Key",
"secret_access_key_help": "您的火山引擎 Secret Access Key請妥善保管",
"title": "火山引擎設定"
}
},
"proxy": {

View File

@@ -280,7 +280,6 @@
"denied": "Tool-Anfrage wurde abgelehnt.",
"timeout": "Tool-Anfrage ist abgelaufen, bevor eine Genehmigung eingegangen ist."
},
"toolPendingFallback": "Werkzeug",
"waiting": "Warten auf Entscheidung über Tool-Berechtigung..."
},
"type": {
@@ -1209,7 +1208,7 @@
"endpoint_type": {
"anthropic": "Anthropic",
"gemini": "Gemini",
"image-generation": "Bilderzeugung (OpenAI)",
"image-generation": "Bildgenerierung",
"jina-rerank": "Jina Reranking",
"openai": "OpenAI",
"openai-response": "OpenAI-Response"
@@ -4511,6 +4510,23 @@
"private_key_placeholder": "Service Account-Privat-Schlüssel eingeben",
"title": "Service Account-Konfiguration"
}
},
"volcengine": {
"access_key_id": "[to be translated]:Access Key ID",
"access_key_id_help": "[to be translated]:Your Volcengine Access Key ID",
"clear_credentials": "[to be translated]:Clear Credentials",
"credentials_cleared": "[to be translated]:Credentials cleared",
"credentials_required": "[to be translated]:Please fill in Access Key ID and Secret Access Key",
"credentials_saved": "[to be translated]:Credentials saved",
"description": "[to be translated]:Volcengine is ByteDance's cloud service platform, providing Doubao and other large language model services. Use Access Key for authentication to fetch model list.",
"project_name": "[to be translated]:Project Name",
"project_name_help": "[to be translated]:Project name for endpoint filtering, default is 'default'",
"region": "[to be translated]:Region",
"region_help": "[to be translated]:Service region, e.g., cn-beijing",
"save_credentials": "[to be translated]:Save Credentials",
"secret_access_key": "[to be translated]:Secret Access Key",
"secret_access_key_help": "[to be translated]:Your Volcengine Secret Access Key, please keep it secure",
"title": "[to be translated]:Volcengine Configuration"
}
},
"proxy": {

View File

@@ -280,7 +280,6 @@
"denied": "Το αίτημα για εργαλείο απορρίφθηκε.",
"timeout": "Το αίτημα για το εργαλείο έληξε πριν λάβει έγκριση."
},
"toolPendingFallback": "Εργαλείο",
"waiting": "Αναμονή για απόφαση άδειας εργαλείου..."
},
"type": {
@@ -1209,7 +1208,7 @@
"endpoint_type": {
"anthropic": "Anthropic",
"gemini": "Gemini",
"image-generation": "Δημιουργία Εικόνων (OpenAI)",
"image-generation": "Δημιουργία Εικόνας",
"jina-rerank": "Επαναταξινόμηση Jina",
"openai": "OpenAI",
"openai-response": "Απάντηση OpenAI"
@@ -4511,6 +4510,23 @@
"private_key_placeholder": "Παρακαλώ εισάγετε το ιδιωτικό κλειδί του λογαριασμού υπηρεσίας",
"title": "Διαμόρφωση λογαριασμού υπηρεσίας"
}
},
"volcengine": {
"access_key_id": "[to be translated]:Access Key ID",
"access_key_id_help": "[to be translated]:Your Volcengine Access Key ID",
"clear_credentials": "[to be translated]:Clear Credentials",
"credentials_cleared": "[to be translated]:Credentials cleared",
"credentials_required": "[to be translated]:Please fill in Access Key ID and Secret Access Key",
"credentials_saved": "[to be translated]:Credentials saved",
"description": "[to be translated]:Volcengine is ByteDance's cloud service platform, providing Doubao and other large language model services. Use Access Key for authentication to fetch model list.",
"project_name": "[to be translated]:Project Name",
"project_name_help": "[to be translated]:Project name for endpoint filtering, default is 'default'",
"region": "[to be translated]:Region",
"region_help": "[to be translated]:Service region, e.g., cn-beijing",
"save_credentials": "[to be translated]:Save Credentials",
"secret_access_key": "[to be translated]:Secret Access Key",
"secret_access_key_help": "[to be translated]:Your Volcengine Secret Access Key, please keep it secure",
"title": "[to be translated]:Volcengine Configuration"
}
},
"proxy": {

View File

@@ -280,7 +280,6 @@
"denied": "La solicitud de herramienta fue denegada.",
"timeout": "La solicitud de herramienta expiró antes de recibir la aprobación."
},
"toolPendingFallback": "Herramienta",
"waiting": "Esperando la decisión de permiso de la herramienta..."
},
"type": {
@@ -1209,7 +1208,7 @@
"endpoint_type": {
"anthropic": "Anthropic",
"gemini": "Gemini",
"image-generation": "Generación de Imágenes (OpenAI)",
"image-generation": "Generación de imágenes",
"jina-rerank": "Reordenamiento Jina",
"openai": "OpenAI",
"openai-response": "Respuesta de OpenAI"
@@ -4511,6 +4510,23 @@
"private_key_placeholder": "Ingrese la clave privada de Service Account",
"title": "Configuración de Service Account"
}
},
"volcengine": {
"access_key_id": "[to be translated]:Access Key ID",
"access_key_id_help": "[to be translated]:Your Volcengine Access Key ID",
"clear_credentials": "[to be translated]:Clear Credentials",
"credentials_cleared": "[to be translated]:Credentials cleared",
"credentials_required": "[to be translated]:Please fill in Access Key ID and Secret Access Key",
"credentials_saved": "[to be translated]:Credentials saved",
"description": "[to be translated]:Volcengine is ByteDance's cloud service platform, providing Doubao and other large language model services. Use Access Key for authentication to fetch model list.",
"project_name": "[to be translated]:Project Name",
"project_name_help": "[to be translated]:Project name for endpoint filtering, default is 'default'",
"region": "[to be translated]:Region",
"region_help": "[to be translated]:Service region, e.g., cn-beijing",
"save_credentials": "[to be translated]:Save Credentials",
"secret_access_key": "[to be translated]:Secret Access Key",
"secret_access_key_help": "[to be translated]:Your Volcengine Secret Access Key, please keep it secure",
"title": "[to be translated]:Volcengine Configuration"
}
},
"proxy": {

View File

@@ -280,7 +280,6 @@
"denied": "La demande d'outil a été refusée.",
"timeout": "La demande d'outil a expiré avant d'obtenir l'approbation."
},
"toolPendingFallback": "Outil",
"waiting": "En attente de la décision d'autorisation de l'outil..."
},
"type": {
@@ -1209,7 +1208,7 @@
"endpoint_type": {
"anthropic": "Anthropic",
"gemini": "Gemini",
"image-generation": "Génération d'images (OpenAI)",
"image-generation": "Génération d'images",
"jina-rerank": "Reclassement Jina",
"openai": "OpenAI",
"openai-response": "Réponse OpenAI"
@@ -4511,6 +4510,23 @@
"private_key_placeholder": "Veuillez saisir la clé privée du compte de service",
"title": "Configuration du compte de service"
}
},
"volcengine": {
"access_key_id": "[to be translated]:Access Key ID",
"access_key_id_help": "[to be translated]:Your Volcengine Access Key ID",
"clear_credentials": "[to be translated]:Clear Credentials",
"credentials_cleared": "[to be translated]:Credentials cleared",
"credentials_required": "[to be translated]:Please fill in Access Key ID and Secret Access Key",
"credentials_saved": "[to be translated]:Credentials saved",
"description": "[to be translated]:Volcengine is ByteDance's cloud service platform, providing Doubao and other large language model services. Use Access Key for authentication to fetch model list.",
"project_name": "[to be translated]:Project Name",
"project_name_help": "[to be translated]:Project name for endpoint filtering, default is 'default'",
"region": "[to be translated]:Region",
"region_help": "[to be translated]:Service region, e.g., cn-beijing",
"save_credentials": "[to be translated]:Save Credentials",
"secret_access_key": "[to be translated]:Secret Access Key",
"secret_access_key_help": "[to be translated]:Your Volcengine Secret Access Key, please keep it secure",
"title": "[to be translated]:Volcengine Configuration"
}
},
"proxy": {

View File

@@ -280,7 +280,6 @@
"denied": "ツールリクエストは拒否されました。",
"timeout": "ツールリクエストは承認を受ける前にタイムアウトしました。"
},
"toolPendingFallback": "ツール",
"waiting": "ツールの許可決定を待っています..."
},
"type": {
@@ -1209,7 +1208,7 @@
"endpoint_type": {
"anthropic": "Anthropic",
"gemini": "Gemini",
"image-generation": "画像生成 (OpenAI)",
"image-generation": "画像生成",
"jina-rerank": "Jina Rerank",
"openai": "OpenAI",
"openai-response": "OpenAI-Response"
@@ -4511,6 +4510,23 @@
"private_key_placeholder": "サービスアカウントの秘密鍵を入力してください",
"title": "サービスアカウント設定"
}
},
"volcengine": {
"access_key_id": "[to be translated]:Access Key ID",
"access_key_id_help": "[to be translated]:Your Volcengine Access Key ID",
"clear_credentials": "[to be translated]:Clear Credentials",
"credentials_cleared": "[to be translated]:Credentials cleared",
"credentials_required": "[to be translated]:Please fill in Access Key ID and Secret Access Key",
"credentials_saved": "[to be translated]:Credentials saved",
"description": "[to be translated]:Volcengine is ByteDance's cloud service platform, providing Doubao and other large language model services. Use Access Key for authentication to fetch model list.",
"project_name": "[to be translated]:Project Name",
"project_name_help": "[to be translated]:Project name for endpoint filtering, default is 'default'",
"region": "[to be translated]:Region",
"region_help": "[to be translated]:Service region, e.g., cn-beijing",
"save_credentials": "[to be translated]:Save Credentials",
"secret_access_key": "[to be translated]:Secret Access Key",
"secret_access_key_help": "[to be translated]:Your Volcengine Secret Access Key, please keep it secure",
"title": "[to be translated]:Volcengine Configuration"
}
},
"proxy": {

View File

@@ -16,7 +16,7 @@
"error": {
"failed": "Falha ao excluir o agente"
},
"title": "Excluir Agente"
"title": "删除代理"
},
"edit": {
"title": "Agent Editor"
@@ -111,7 +111,7 @@
"label": "Modo de permissão",
"options": {
"acceptEdits": "Aceitar edições automaticamente",
"bypassPermissions": "Ignorar verificações de permissão",
"bypassPermissions": "忽略检查 de permissão",
"default": "Padrão (perguntar antes de continuar)",
"plan": "Modo de planejamento (plano sujeito a aprovação)"
},
@@ -150,7 +150,7 @@
},
"success": {
"install": "Plugin instalado com sucesso",
"uninstall": "Plugin desinstalado com sucesso"
"uninstall": "插件 desinstalado com sucesso"
},
"tab": "plug-in",
"type": {
@@ -280,7 +280,6 @@
"denied": "Solicitação de ferramenta foi negada.",
"timeout": "A solicitação da ferramenta expirou antes de receber aprovação."
},
"toolPendingFallback": "Ferramenta",
"waiting": "Aguardando decisão de permissão da ferramenta..."
},
"type": {
@@ -1135,7 +1134,7 @@
"duplicate": "Duplicar",
"edit": "Editar",
"enabled": "Ativado",
"error": "Erro",
"error": "错误",
"errors": {
"create_message": "Falha ao criar mensagem",
"validation": "Falha na verificação"
@@ -1209,7 +1208,7 @@
"endpoint_type": {
"anthropic": "Anthropic",
"gemini": "Gemini",
"image-generation": "Geração de Imagens (OpenAI)",
"image-generation": "Geração de Imagem",
"jina-rerank": "Jina Reordenar",
"openai": "OpenAI",
"openai-response": "Resposta OpenAI"
@@ -4511,6 +4510,23 @@
"private_key_placeholder": "Por favor, insira a chave privada da Conta de Serviço",
"title": "Configuração da Conta de Serviço"
}
},
"volcengine": {
"access_key_id": "[to be translated]:Access Key ID",
"access_key_id_help": "[to be translated]:Your Volcengine Access Key ID",
"clear_credentials": "[to be translated]:Clear Credentials",
"credentials_cleared": "[to be translated]:Credentials cleared",
"credentials_required": "[to be translated]:Please fill in Access Key ID and Secret Access Key",
"credentials_saved": "[to be translated]:Credentials saved",
"description": "[to be translated]:Volcengine is ByteDance's cloud service platform, providing Doubao and other large language model services. Use Access Key for authentication to fetch model list.",
"project_name": "[to be translated]:Project Name",
"project_name_help": "[to be translated]:Project name for endpoint filtering, default is 'default'",
"region": "[to be translated]:Region",
"region_help": "[to be translated]:Service region, e.g., cn-beijing",
"save_credentials": "[to be translated]:Save Credentials",
"secret_access_key": "[to be translated]:Secret Access Key",
"secret_access_key_help": "[to be translated]:Your Volcengine Secret Access Key, please keep it secure",
"title": "[to be translated]:Volcengine Configuration"
}
},
"proxy": {

View File

@@ -280,7 +280,6 @@
"denied": "Запрос на инструмент был отклонён.",
"timeout": "Запрос на инструмент превысил время ожидания до получения подтверждения."
},
"toolPendingFallback": "Инструмент",
"waiting": "Ожидание решения о разрешении на использование инструмента..."
},
"type": {
@@ -1209,7 +1208,7 @@
"endpoint_type": {
"anthropic": "Anthropic",
"gemini": "Gemini",
"image-generation": "Генерация изображений (OpenAI)",
"image-generation": "Изображение",
"jina-rerank": "Jina Rerank",
"openai": "OpenAI",
"openai-response": "OpenAI-Response"
@@ -4511,6 +4510,23 @@
"private_key_placeholder": "Введите приватный ключ Service Account",
"title": "Конфигурация Service Account"
}
},
"volcengine": {
"access_key_id": "[to be translated]:Access Key ID",
"access_key_id_help": "[to be translated]:Your Volcengine Access Key ID",
"clear_credentials": "[to be translated]:Clear Credentials",
"credentials_cleared": "[to be translated]:Credentials cleared",
"credentials_required": "[to be translated]:Please fill in Access Key ID and Secret Access Key",
"credentials_saved": "[to be translated]:Credentials saved",
"description": "[to be translated]:Volcengine is ByteDance's cloud service platform, providing Doubao and other large language model services. Use Access Key for authentication to fetch model list.",
"project_name": "[to be translated]:Project Name",
"project_name_help": "[to be translated]:Project name for endpoint filtering, default is 'default'",
"region": "[to be translated]:Region",
"region_help": "[to be translated]:Service region, e.g., cn-beijing",
"save_credentials": "[to be translated]:Save Credentials",
"secret_access_key": "[to be translated]:Secret Access Key",
"secret_access_key_help": "[to be translated]:Your Volcengine Secret Access Key, please keep it secure",
"title": "[to be translated]:Volcengine Configuration"
}
},
"proxy": {

View File

@@ -9,7 +9,6 @@ import { getModel } from '@renderer/hooks/useModel'
import { useSettings } from '@renderer/hooks/useSettings'
import { useTextareaResize } from '@renderer/hooks/useTextareaResize'
import { useTimer } from '@renderer/hooks/useTimer'
import { CacheService } from '@renderer/services/CacheService'
import { pauseTrace } from '@renderer/services/SpanManagerService'
import { estimateUserPromptUsage } from '@renderer/services/TokenService'
import { useAppDispatch, useAppSelector } from '@renderer/store'
@@ -42,10 +41,19 @@ import { getInputbarConfig } from './registry'
import { TopicType } from './types'
const logger = loggerService.withContext('AgentSessionInputbar')
const agentSessionDraftCache = new Map<string, string>()
const DRAFT_CACHE_TTL = 24 * 60 * 60 * 1000 // 24 hours
const readDraftFromCache = (key: string): string => {
return agentSessionDraftCache.get(key) ?? ''
}
const getAgentDraftCacheKey = (agentId: string) => `agent-session-draft-${agentId}`
const writeDraftToCache = (key: string, value: string) => {
if (!value) {
agentSessionDraftCache.delete(key)
} else {
agentSessionDraftCache.set(key, value)
}
}
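The module-level draft cache above deliberately evicts empty drafts rather than storing empty strings. A self-contained sketch of that behavior:

```typescript
// In-memory draft cache, mirroring readDraftFromCache / writeDraftToCache above.
const drafts = new Map<string, string>()

const readDraft = (key: string): string => drafts.get(key) ?? ''

const writeDraft = (key: string, value: string): void => {
  if (!value) {
    drafts.delete(key) // empty draft → remove the entry entirely
  } else {
    drafts.set(key, value)
  }
}
```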
type Props = {
agentId: string
@@ -162,15 +170,16 @@ const AgentSessionInputbarInner: FC<InnerProps> = ({ assistant, agentId, session
const scope = TopicType.Session
const config = getInputbarConfig(scope)
// Use shared hooks for text and textarea management with draft persistence
const draftCacheKey = getAgentDraftCacheKey(agentId)
// Use shared hooks for text and textarea management
const initialDraft = useMemo(() => readDraftFromCache(agentId), [agentId])
const persistDraft = useCallback((next: string) => writeDraftToCache(agentId, next), [agentId])
const {
text,
setText,
isEmpty: inputEmpty
} = useInputText({
initialValue: CacheService.get<string>(draftCacheKey) ?? '',
onChange: (value) => CacheService.set(draftCacheKey, value, DRAFT_CACHE_TTL)
initialValue: initialDraft,
onChange: persistDraft
})
const {
textareaRef,
@@ -422,7 +431,6 @@ const AgentSessionInputbarInner: FC<InnerProps> = ({ assistant, agentId, session
})
)
// Clear text after successful send (draft is cleared automatically via onChange)
setText('')
setTimeoutTimer('agentSession_sendMessage', () => setText(''), 500)
} catch (error) {

View File

@@ -14,6 +14,7 @@ import { useInputText } from '@renderer/hooks/useInputText'
import { useMessageOperations, useTopicLoading } from '@renderer/hooks/useMessageOperations'
import { useSettings } from '@renderer/hooks/useSettings'
import { useShortcut } from '@renderer/hooks/useShortcuts'
import { useSidebarIconShow } from '@renderer/hooks/useSidebarIcon'
import { useTextareaResize } from '@renderer/hooks/useTextareaResize'
import { useTimer } from '@renderer/hooks/useTimer'
import {
@@ -23,7 +24,6 @@ import {
useInputbarToolsState
} from '@renderer/pages/home/Inputbar/context/InputbarToolsProvider'
import { getDefaultTopic } from '@renderer/services/AssistantService'
import { CacheService } from '@renderer/services/CacheService'
import { EVENT_NAMES, EventEmitter } from '@renderer/services/EventService'
import FileManager from '@renderer/services/FileManager'
import { checkRateLimit, getUserMessage } from '@renderer/services/MessagesService'
@@ -39,7 +39,7 @@ import { getSendMessageShortcutLabel } from '@renderer/utils/input'
import { documentExts, imageExts, textExts } from '@shared/config/constant'
import { debounce } from 'lodash'
import type { FC } from 'react'
import React, { useCallback, useEffect, useEffectEvent, useMemo, useRef, useState } from 'react'
import React, { useCallback, useEffect, useMemo, useRef, useState } from 'react'
import { useTranslation } from 'react-i18next'
import { InputbarCore } from './components/InputbarCore'
@@ -51,17 +51,6 @@ import TokenCount from './TokenCount'
const logger = loggerService.withContext('Inputbar')
const INPUTBAR_DRAFT_CACHE_KEY = 'inputbar-draft'
const DRAFT_CACHE_TTL = 24 * 60 * 60 * 1000 // 24 hours
const getMentionedModelsCacheKey = (assistantId: string) => `inputbar-mentioned-models-${assistantId}`
const getValidatedCachedModels = (assistantId: string): Model[] => {
const cached = CacheService.get<Model[]>(getMentionedModelsCacheKey(assistantId))
if (!Array.isArray(cached)) return []
return cached.filter((model) => model?.id && model?.name)
}
interface Props {
assistant: Assistant
setActiveTopic: (topic: Topic) => void
@@ -91,18 +80,16 @@ const Inputbar: FC<Props> = ({ assistant: initialAssistant, setActiveTopic, topi
toggleExpanded: () => {}
})
const [initialMentionedModels] = useState(() => getValidatedCachedModels(initialAssistant.id))
const initialState = useMemo(
() => ({
files: [] as FileType[],
mentionedModels: initialMentionedModels,
mentionedModels: [] as Model[],
selectedKnowledgeBases: initialAssistant.knowledge_bases ?? [],
isExpanded: false,
couldAddImageFile: false,
extensions: [] as string[]
}),
[initialMentionedModels, initialAssistant.knowledge_bases]
[initialAssistant.knowledge_bases]
)
return (
@@ -134,10 +121,7 @@ const InputbarInner: FC<InputbarInnerProps> = ({ assistant: initialAssistant, se
const { setFiles, setMentionedModels, setSelectedKnowledgeBases } = useInputbarToolsDispatch()
const { setCouldAddImageFile } = useInputbarToolsInternalDispatch()
const { text, setText } = useInputText({
initialValue: CacheService.get<string>(INPUTBAR_DRAFT_CACHE_KEY) ?? '',
onChange: (value) => CacheService.set(INPUTBAR_DRAFT_CACHE_KEY, value, DRAFT_CACHE_TTL)
})
const { text, setText } = useInputText()
const {
textareaRef,
resize: resizeTextArea,
@@ -149,6 +133,7 @@ const InputbarInner: FC<InputbarInnerProps> = ({ assistant: initialAssistant, se
minHeight: 30
})
const showKnowledgeIcon = useSidebarIconShow('knowledge')
const { assistant, addTopic, model, setModel, updateAssistant } = useAssistant(initialAssistant.id)
const { sendMessageShortcut, showInputEstimatedTokens, enableQuickPanelTriggers } = useSettings()
const [estimateTokenCount, setEstimateTokenCount] = useState(0)
@@ -205,15 +190,6 @@ const InputbarInner: FC<InputbarInnerProps> = ({ assistant: initialAssistant, se
setCouldAddImageFile(canAddImageFile)
}, [canAddImageFile, setCouldAddImageFile])
const onUnmount = useEffectEvent((id: string) => {
CacheService.set(getMentionedModelsCacheKey(id), mentionedModels, DRAFT_CACHE_TTL)
})
useEffect(() => {
return () => onUnmount(assistant.id)
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [assistant.id])
const placeholderText = enableQuickPanelTriggers
? t('chat.input.placeholder', { key: getSendMessageShortcutLabel(sendMessageShortcut) })
: t('chat.input.placeholder_without_triggers', {
@@ -405,10 +381,9 @@ const InputbarInner: FC<InputbarInnerProps> = ({ assistant: initialAssistant, se
focusTextarea
])
// TODO: Just use assistant.knowledge_bases as selectedKnowledgeBases. context state is overdesigned.
useEffect(() => {
setSelectedKnowledgeBases(assistant.knowledge_bases ?? [])
}, [assistant.knowledge_bases, setSelectedKnowledgeBases])
setSelectedKnowledgeBases(showKnowledgeIcon ? (assistant.knowledge_bases ?? []) : [])
}, [assistant.knowledge_bases, setSelectedKnowledgeBases, showKnowledgeIcon])
useEffect(() => {
// Disable web search if model doesn't support it

View File

@@ -156,8 +156,11 @@ export const InputbarCore: FC<InputbarCoreProps> = ({
const setText = useCallback<React.Dispatch<React.SetStateAction<string>>>(
(value) => {
const newText = typeof value === 'function' ? value(textRef.current) : value
onTextChange(newText)
if (typeof value === 'function') {
onTextChange(value(textRef.current))
} else {
onTextChange(value)
}
},
[onTextChange]
)

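The setText wrapper above accepts React's value-or-updater union and resolves it against the current text before forwarding. A sketch of that resolution outside React (the type alias is assumed to match React's SetStateAction):

```typescript
// Local stand-in for React's SetStateAction<S>.
type SetStateAction<S> = S | ((prev: S) => S)

// Resolve an action against the current value, the way the memoized setText
// does with textRef.current before calling onTextChange.
function resolveAction<S>(current: S, action: SetStateAction<S>): S {
  return typeof action === 'function' ? (action as (prev: S) => S)(current) : action
}
```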
View File

@@ -1,4 +1,5 @@
import { useAssistant } from '@renderer/hooks/useAssistant'
import { useSidebarIconShow } from '@renderer/hooks/useSidebarIcon'
import { defineTool, registerTool, TopicType } from '@renderer/pages/home/Inputbar/types'
import type { KnowledgeBase } from '@renderer/types'
import { isPromptToolUse, isSupportedToolUse } from '@renderer/utils/mcp-tools'
@@ -29,6 +30,7 @@ const knowledgeBaseTool = defineTool({
render: function KnowledgeBaseToolRender(context) {
const { assistant, state, actions, quickPanel } = context
const knowledgeSidebarEnabled = useSidebarIconShow('knowledge')
const { updateAssistant } = useAssistant(assistant.id)
const handleSelect = useCallback(
@@ -39,6 +41,10 @@ const knowledgeBaseTool = defineTool({
[updateAssistant, actions]
)
if (!knowledgeSidebarEnabled) {
return null
}
return (
<KnowledgeBaseButton
quickPanel={quickPanel}

View File

@@ -102,12 +102,10 @@ const ThinkingBlock: React.FC<Props> = ({ block }) => {
)
}
const normalizeThinkingTime = (value?: number) => (typeof value === 'number' && Number.isFinite(value) ? value : 0)
const ThinkingTimeSeconds = memo(
({ blockThinkingTime, isThinking }: { blockThinkingTime: number; isThinking: boolean }) => {
const { t } = useTranslation()
const [displayTime, setDisplayTime] = useState(normalizeThinkingTime(blockThinkingTime))
const [displayTime, setDisplayTime] = useState(blockThinkingTime)
const timer = useRef<NodeJS.Timeout | null>(null)
@@ -123,7 +121,7 @@ const ThinkingTimeSeconds = memo(
clearInterval(timer.current)
timer.current = null
}
setDisplayTime(normalizeThinkingTime(blockThinkingTime))
setDisplayTime(blockThinkingTime)
}
return () => {
@@ -134,10 +132,10 @@ const ThinkingTimeSeconds = memo(
}
}, [isThinking, blockThinkingTime])
const thinkingTimeSeconds = useMemo(() => {
const safeTime = normalizeThinkingTime(displayTime)
return ((safeTime < 1000 ? 100 : safeTime) / 1000).toFixed(1)
}, [displayTime])
const thinkingTimeSeconds = useMemo(
() => ((displayTime < 1000 ? 100 : displayTime) / 1000).toFixed(1),
[displayTime]
)
return isThinking
? t('chat.thinking', {

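For reference, the clamp removed in this hunk guards the displayed thinking time against undefined and non-finite millisecond values:

```typescript
// Clamp from the left-hand side of the hunk: non-numeric or non-finite
// thinking times fall back to 0 ms.
const normalizeThinkingTime = (value?: number): number =>
  typeof value === 'number' && Number.isFinite(value) ? value : 0
```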
View File

@@ -255,20 +255,6 @@ describe('ThinkingBlock', () => {
unmount()
})
})
it('should clamp invalid thinking times to a safe default', () => {
const testCases = [undefined, Number.NaN, Number.POSITIVE_INFINITY]
testCases.forEach((thinking_millsec) => {
const block = createThinkingBlock({
thinking_millsec: thinking_millsec as any,
status: MessageBlockStatus.SUCCESS
})
const { unmount } = renderThinkingBlock(block)
expect(getThinkingTimeText()).toHaveTextContent('0.1s')
unmount()
})
})
})
describe('collapse behavior', () => {

View File

@@ -10,7 +10,6 @@ import { useSettings } from '@renderer/hooks/useSettings'
import { useTimer } from '@renderer/hooks/useTimer'
import type { RootState } from '@renderer/store'
// import { selectCurrentTopicId } from '@renderer/store/newMessage'
import { scrollIntoView } from '@renderer/utils/dom'
import { Button, Drawer, Tooltip } from 'antd'
import type { FC } from 'react'
import { useCallback, useEffect, useRef, useState } from 'react'
@@ -119,8 +118,7 @@ const ChatNavigation: FC<ChatNavigationProps> = ({ containerId }) => {
}
const scrollToMessage = (element: HTMLElement) => {
// Use container: 'nearest' to keep scroll within the chat pane (Chromium-only, see #11565, #11567)
scrollIntoView(element, { behavior: 'smooth', block: 'start', container: 'nearest' })
element.scrollIntoView({ behavior: 'smooth', block: 'start' })
}
const scrollToTop = () => {

View File

@@ -15,7 +15,6 @@ import { estimateMessageUsage } from '@renderer/services/TokenService'
import type { Assistant, Topic } from '@renderer/types'
import type { Message, MessageBlock } from '@renderer/types/newMessage'
import { classNames, cn } from '@renderer/utils'
import { scrollIntoView } from '@renderer/utils/dom'
import { isMessageProcessing } from '@renderer/utils/messageUtils/is'
import { Divider } from 'antd'
import type { Dispatch, FC, SetStateAction } from 'react'
@@ -80,10 +79,9 @@ const MessageItem: FC<Props> = ({
useEffect(() => {
if (isEditing && messageContainerRef.current) {
scrollIntoView(messageContainerRef.current, {
messageContainerRef.current.scrollIntoView({
behavior: 'smooth',
block: 'center',
container: 'nearest'
block: 'center'
})
}
}, [isEditing])
@@ -126,7 +124,7 @@ const MessageItem: FC<Props> = ({
const messageHighlightHandler = useCallback(
(highlight: boolean = true) => {
if (messageContainerRef.current) {
scrollIntoView(messageContainerRef.current, { behavior: 'smooth', block: 'center', container: 'nearest' })
messageContainerRef.current.scrollIntoView({ behavior: 'smooth' })
if (highlight) {
setTimeoutTimer(
'messageHighlightHandler',

View File

@@ -12,7 +12,6 @@ import { newMessagesActions } from '@renderer/store/newMessage'
// import { updateMessageThunk } from '@renderer/store/thunk/messageThunk'
import type { Message } from '@renderer/types/newMessage'
import { isEmoji, removeLeadingEmoji } from '@renderer/utils'
-import { scrollIntoView } from '@renderer/utils/dom'
import { getMainTextContent } from '@renderer/utils/messageUtils/find'
import { Avatar } from 'antd'
import { CircleChevronDown } from 'lucide-react'
@@ -120,7 +119,7 @@ const MessageAnchorLine: FC<MessageLineProps> = ({ messages }) => {
() => {
const messageElement = document.getElementById(`message-${message.id}`)
if (messageElement) {
-scrollIntoView(messageElement, { behavior: 'auto', block: 'start', container: 'nearest' })
+messageElement.scrollIntoView({ behavior: 'auto', block: 'start' })
}
},
100
@@ -142,7 +141,7 @@ const MessageAnchorLine: FC<MessageLineProps> = ({ messages }) => {
return
}
-scrollIntoView(messageElement, { behavior: 'smooth', block: 'start', container: 'nearest' })
+messageElement.scrollIntoView({ behavior: 'smooth', block: 'start' })
},
[setSelectedMessage]
)

View File

@@ -10,7 +10,6 @@ import type { MultiModelMessageStyle } from '@renderer/store/settings'
import type { Topic } from '@renderer/types'
import type { Message } from '@renderer/types/newMessage'
import { classNames } from '@renderer/utils'
-import { scrollIntoView } from '@renderer/utils/dom'
import { Popover } from 'antd'
import type { ComponentProps } from 'react'
import { memo, useCallback, useEffect, useMemo, useState } from 'react'
@@ -74,7 +73,7 @@ const MessageGroup = ({ messages, topic, registerMessageElement }: Props) => {
() => {
const messageElement = document.getElementById(`message-${message.id}`)
if (messageElement) {
-scrollIntoView(messageElement, { behavior: 'smooth', block: 'start', container: 'nearest' })
+messageElement.scrollIntoView({ behavior: 'smooth', block: 'start' })
}
},
200
@@ -133,7 +132,7 @@ const MessageGroup = ({ messages, topic, registerMessageElement }: Props) => {
setSelectedMessage(message)
} else {
// scroll directly
-scrollIntoView(element, { behavior: 'smooth', block: 'start', container: 'nearest' })
+element.scrollIntoView({ behavior: 'smooth', block: 'start' })
}
}
}

View File

@@ -3,7 +3,6 @@ import type { RootState } from '@renderer/store'
import { messageBlocksSelectors } from '@renderer/store/messageBlock'
import type { Message } from '@renderer/types/newMessage'
import { MessageBlockType } from '@renderer/types/newMessage'
-import { scrollIntoView } from '@renderer/utils/dom'
import type { FC } from 'react'
import React, { useMemo, useRef } from 'react'
import { useSelector } from 'react-redux'
@@ -73,10 +72,10 @@ const MessageOutline: FC<MessageOutlineProps> = ({ message }) => {
const parent = messageOutlineContainerRef.current?.parentElement
const messageContentContainer = parent?.querySelector('.message-content-container')
if (messageContentContainer) {
-const headingElement = messageContentContainer.querySelector<HTMLElement>(`#${id}`)
+const headingElement = messageContentContainer.querySelector(`#${id}`)
if (headingElement) {
const scrollBlock = ['horizontal', 'grid'].includes(message.multiModelMessageStyle ?? '') ? 'nearest' : 'start'
-scrollIntoView(headingElement, { behavior: 'smooth', block: scrollBlock, container: 'nearest' })
+headingElement.scrollIntoView({ behavior: 'smooth', block: scrollBlock })
}
}
}

View File

@@ -76,7 +76,7 @@ export function BashOutputTool({
input,
output
}: {
-input?: BashOutputToolInput
+input: BashOutputToolInput
output?: BashOutputToolOutput
}): NonNullable<CollapseProps['items']>[number] {
const parsedOutput = parseBashOutput(output)
@@ -144,7 +144,7 @@ export function BashOutputTool({
label="Bash Output"
params={
<div className="flex items-center gap-2">
-<Tag className="py-0 font-mono text-xs">{input?.bash_id}</Tag>
+<Tag className="py-0 font-mono text-xs">{input.bash_id}</Tag>
{statusConfig && (
<Tag
color={statusConfig.color}

View File

@@ -5,20 +5,24 @@ import { Terminal } from 'lucide-react'
import { ToolTitle } from './GenericTools'
import type { BashToolInput as BashToolInputType, BashToolOutput as BashToolOutputType } from './types'
const MAX_TAG_LENGTH = 100
export function BashTool({
input,
output
}: {
-input?: BashToolInputType
+input: BashToolInputType
output?: BashToolOutputType
}): NonNullable<CollapseProps['items']>[number] {
// if there is output, count the output lines
const outputLines = output ? output.split('\n').length : 0
-// handle the command string, adding a null check
-const command = input?.command ?? ''
+// handle truncation of the command string
+const command = input.command
+const needsTruncate = command.length > MAX_TAG_LENGTH
+const displayCommand = needsTruncate ? `${command.slice(0, MAX_TAG_LENGTH)}...` : command
-const tagContent = <Tag className="!m-0 max-w-full truncate font-mono">{command}</Tag>
+const tagContent = <Tag className="whitespace-pre-wrap break-all font-mono">{displayCommand}</Tag>
return {
key: 'tool',
@@ -27,15 +31,19 @@ export function BashTool({
<ToolTitle
icon={<Terminal className="h-4 w-4" />}
label="Bash"
-params={input?.description}
+params={input.description}
stats={output ? `${outputLines} ${outputLines === 1 ? 'line' : 'lines'}` : undefined}
/>
-<div className="mt-1 max-w-full">
-<Popover
-content={<div className="max-w-xl whitespace-pre-wrap break-all font-mono text-xs">{command}</div>}
-trigger="hover">
-{tagContent}
-</Popover>
+<div className="mt-1">
+{needsTruncate ? (
+<Popover
+content={<div className="max-w-xl whitespace-pre-wrap break-all font-mono">{command}</div>}
+trigger="hover">
+{tagContent}
+</Popover>
+) : (
+tagContent
+)}
</div>
</>
),

View File
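The BashTool hunk above trades CSS-class truncation for explicit string truncation before rendering the tag. A minimal sketch of that logic, with `MAX_TAG_LENGTH` and the variable names taken from the diff:

```typescript
// Truncation mirroring the BashTool hunk: commands longer than
// MAX_TAG_LENGTH are cut and suffixed with "..." for the tag display,
// while the full command is reserved for the hover popover.
const MAX_TAG_LENGTH = 100

function truncateCommand(command: string): { display: string; needsTruncate: boolean } {
  const needsTruncate = command.length > MAX_TAG_LENGTH
  const display = needsTruncate ? `${command.slice(0, MAX_TAG_LENGTH)}...` : command
  return { display, needsTruncate }
}

// Short commands pass through unchanged.
console.log(truncateCommand('ls -la').display) // 'ls -la'
// A 150-character command is cut to 100 characters plus "...".
console.log(truncateCommand('x'.repeat(150)).display.length) // 103
```

The `needsTruncate` flag also drives whether a popover with the full command is worth rendering at all.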

@@ -32,19 +32,19 @@ export function EditTool({
input,
output
}: {
-input?: EditToolInput
+input: EditToolInput
output?: EditToolOutput
}): NonNullable<CollapseProps['items']>[number] {
return {
key: AgentToolsType.Edit,
-label: <ToolTitle icon={<FileEdit className="h-4 w-4" />} label="Edit" params={input?.file_path} />,
+label: <ToolTitle icon={<FileEdit className="h-4 w-4" />} label="Edit" params={input.file_path} />,
children: (
<>
{/* Diff View */}
{/* Old Content */}
-{renderCodeBlock(input?.old_string ?? '', 'old')}
+{renderCodeBlock(input.old_string, 'old')}
{/* New Content */}
-{renderCodeBlock(input?.new_string ?? '', 'new')}
+{renderCodeBlock(input.new_string, 'new')}
{/* Output */}
{output}
</>

View File

@@ -10,19 +10,18 @@ export function ExitPlanModeTool({
input,
output
}: {
-input?: ExitPlanModeToolInput
+input: ExitPlanModeToolInput
output?: ExitPlanModeToolOutput
}): NonNullable<CollapseProps['items']>[number] {
-const plan = input?.plan ?? ''
return {
key: AgentToolsType.ExitPlanMode,
label: (
<ToolTitle
icon={<DoorOpen className="h-4 w-4" />}
label="ExitPlanMode"
-stats={`${plan.split('\n\n').length} plans`}
+stats={`${input.plan.split('\n\n').length} plans`}
/>
),
-children: <ReactMarkdown>{plan + '\n\n' + (output ?? '')}</ReactMarkdown>
+children: <ReactMarkdown>{input.plan + '\n\n' + (output ?? '')}</ReactMarkdown>
}
}

View File

@@ -18,9 +18,9 @@ export function ToolTitle({
}) {
return (
<div className={`flex items-center gap-1 ${className}`}>
-{icon && <span className="flex flex-shrink-0">{icon}</span>}
-{label && <span className="flex-shrink-0 font-medium text-sm">{label}</span>}
-{params && <span className="min-w-0 truncate text-muted-foreground text-xs">{params}</span>}
+{icon}
+{label && <span className="font-medium text-sm">{label}</span>}
+{params && <span className="flex-shrink-0 text-muted-foreground text-xs">{params}</span>}
{stats && <span className="flex-shrink-0 text-muted-foreground text-xs">{stats}</span>}
</div>
)

View File

@@ -8,7 +8,7 @@ export function GlobTool({
input,
output
}: {
-input?: GlobToolInputType
+input: GlobToolInputType
output?: GlobToolOutputType
}): NonNullable<CollapseProps['items']>[number] {
// if there is output, count the number of files
@@ -20,7 +20,7 @@ export function GlobTool({
<ToolTitle
icon={<FolderSearch className="h-4 w-4" />}
label="Glob"
-params={input?.pattern}
+params={input.pattern}
stats={output ? `${lineCount} ${lineCount === 1 ? 'file' : 'files'}` : undefined}
/>
),

View File

@@ -8,7 +8,7 @@ export function GrepTool({
input,
output
}: {
-input?: GrepToolInput
+input: GrepToolInput
output?: GrepToolOutput
}): NonNullable<CollapseProps['items']>[number] {
// if there is output, count the result lines
@@ -22,8 +22,8 @@ export function GrepTool({
label="Grep"
params={
<>
-{input?.pattern}
-{input?.output_mode && <span className="ml-1">({input.output_mode})</span>}
+{input.pattern}
+{input.output_mode && <span className="ml-1">({input.output_mode})</span>}
</>
}
stats={output ? `${resultLines} ${resultLines === 1 ? 'line' : 'lines'}` : undefined}

View File

@@ -9,19 +9,18 @@ import { AgentToolsType } from './types'
export function MultiEditTool({
input
}: {
-input?: MultiEditToolInput
+input: MultiEditToolInput
output?: MultiEditToolOutput
}): NonNullable<CollapseProps['items']>[number] {
-const edits = Array.isArray(input?.edits) ? input.edits : []
return {
key: AgentToolsType.MultiEdit,
-label: <ToolTitle icon={<FileText className="h-4 w-4" />} label="MultiEdit" params={input?.file_path} />,
+label: <ToolTitle icon={<FileText className="h-4 w-4" />} label="MultiEdit" params={input.file_path} />,
children: (
<div>
-{edits.map((edit, index) => (
+{input.edits.map((edit, index) => (
<div key={index}>
-{renderCodeBlock(edit.old_string ?? '', 'old')}
-{renderCodeBlock(edit.new_string ?? '', 'new')}
+{renderCodeBlock(edit.old_string, 'old')}
+{renderCodeBlock(edit.new_string, 'new')}
</div>
))}
</div>

View File

@@ -11,7 +11,7 @@ export function NotebookEditTool({
input,
output
}: {
-input?: NotebookEditToolInput
+input: NotebookEditToolInput
output?: NotebookEditToolOutput
}): NonNullable<CollapseProps['items']>[number] {
return {
@@ -20,10 +20,10 @@ export function NotebookEditTool({
<>
<ToolTitle icon={<FileText className="h-4 w-4" />} label="NotebookEdit" />
<Tag className="mt-1" color="blue">
-{input?.notebook_path}{' '}
+{input.notebook_path}{' '}
</Tag>
</>
),
-children: <ReactMarkdown>{output ?? ''}</ReactMarkdown>
+children: <ReactMarkdown>{output}</ReactMarkdown>
}
}

View File

@@ -46,7 +46,7 @@ export function ReadTool({
input,
output
}: {
-input?: ReadToolInputType
+input: ReadToolInputType
output?: ReadToolOutputType
}): NonNullable<CollapseProps['items']>[number] {
const outputString = normalizeOutputString(output)
@@ -58,7 +58,7 @@ export function ReadTool({
<ToolTitle
icon={<FileText className="h-4 w-4" />}
label="Read File"
-params={input?.file_path?.split('/').pop()}
+params={input.file_path.split('/').pop()}
stats={stats ? `${stats.lineCount} lines, ${stats.formatSize(stats.fileSize)}` : undefined}
/>
),

View File
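One side of the ReadTool hunk above derives the displayed file name with optional chaining so a missing `input` or `file_path` renders nothing instead of throwing. The extraction itself is a one-liner:

```typescript
// Basename extraction as in the ReadTool hunk: optional chaining
// tolerates a missing path and yields undefined instead of throwing.
function baseName(filePath?: string): string | undefined {
  return filePath?.split('/').pop()
}

console.log(baseName('/home/user/project/src/index.ts')) // 'index.ts'
console.log(baseName(undefined)) // undefined
```

Whether the guard is needed depends on whether the tool's `input` can ever arrive unset — the two sides of this diff disagree on exactly that.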

@@ -8,7 +8,7 @@ export function SearchTool({
input,
output
}: {
-input?: SearchToolInputType
+input: SearchToolInputType
output?: SearchToolOutputType
}): NonNullable<CollapseProps['items']>[number] {
// if there is output, count the number of results
@@ -20,13 +20,13 @@ export function SearchTool({
<ToolTitle
icon={<Search className="h-4 w-4" />}
label="Search"
-params={input ? `"${input}"` : undefined}
+params={`"${input}"`}
stats={output ? `${resultCount} ${resultCount === 1 ? 'result' : 'results'}` : undefined}
/>
),
children: (
<div>
-{input && <StringInputTool input={input} label="Search Query" />}
+<StringInputTool input={input} label="Search Query" />
{output && (
<div>
<StringOutputTool output={output} label="Search Results" textColor="text-yellow-600 dark:text-yellow-400" />

View File

@@ -8,12 +8,12 @@ export function SkillTool({
input,
output
}: {
-input?: SkillToolInput
+input: SkillToolInput
output?: SkillToolOutput
}): NonNullable<CollapseProps['items']>[number] {
return {
key: 'tool',
-label: <ToolTitle icon={<PencilRuler className="h-4 w-4" />} label="Skill" params={input?.command} />,
+label: <ToolTitle icon={<PencilRuler className="h-4 w-4" />} label="Skill" params={input.command} />,
children: <div>{output}</div>
}
}

View File

@@ -9,20 +9,19 @@ export function TaskTool({
input,
output
}: {
-input?: TaskToolInputType
+input: TaskToolInputType
output?: TaskToolOutputType
}): NonNullable<CollapseProps['items']>[number] {
return {
key: 'tool',
-label: <ToolTitle icon={<Bot className="h-4 w-4" />} label="Task" params={input?.description} />,
+label: <ToolTitle icon={<Bot className="h-4 w-4" />} label="Task" params={input.description} />,
children: (
<div>
-{Array.isArray(output) &&
-output.map((item) => (
-<div key={item.type}>
-<div>{item.type === 'text' ? <Markdown>{item.text}</Markdown> : item.text}</div>
-</div>
-))}
+{output?.map((item) => (
+<div key={item.type}>
+<div>{item.type === 'text' ? <Markdown>{item.text}</Markdown> : item.text}</div>
+</div>
+))}
</div>
)
}

View File

@@ -38,10 +38,9 @@ const getStatusConfig = (status: TodoItem['status']) => {
export function TodoWriteTool({
input
}: {
-input?: TodoWriteToolInputType
+input: TodoWriteToolInputType
}): NonNullable<CollapseProps['items']>[number] {
-const todos = Array.isArray(input?.todos) ? input.todos : []
-const doneCount = todos.filter((todo) => todo.status === 'completed').length
+const doneCount = input.todos.filter((todo) => todo.status === 'completed').length
return {
key: AgentToolsType.TodoWrite,
@@ -50,12 +49,12 @@ export function TodoWriteTool({
icon={<ListTodo className="h-4 w-4" />}
label="Todo Write"
params={`${doneCount} Done`}
-stats={`${todos.length} ${todos.length === 1 ? 'item' : 'items'}`}
+stats={`${input.todos.length} ${input.todos.length === 1 ? 'item' : 'items'}`}
/>
),
children: (
<div className="space-y-3">
-{todos.map((todo, index) => {
+{input.todos.map((todo, index) => {
const statusConfig = getStatusConfig(todo.status)
return (
<div key={index}>

View File
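The defensive side of the TodoWrite hunk above normalizes `todos` through `Array.isArray` before counting, so an undefined input or a malformed field yields zero counts rather than a crash. A minimal sketch, with the `TodoItem` shape assumed from the `status` values used in the diff:

```typescript
// Defensive progress counting as on one side of the TodoWrite hunk:
// tolerate an undefined input or a non-array `todos` field.
type TodoItem = { content: string; status: 'pending' | 'in_progress' | 'completed' }
type TodoWriteInput = { todos?: unknown }

function countDone(input?: TodoWriteInput): { done: number; total: number } {
  const raw = input?.todos
  const todos = Array.isArray(raw) ? (raw as TodoItem[]) : []
  return { done: todos.filter((t) => t.status === 'completed').length, total: todos.length }
}

console.log(countDone(undefined)) // { done: 0, total: 0 }
console.log(countDone({ todos: [{ content: 'a', status: 'completed' }, { content: 'b', status: 'pending' }] }))
// { done: 1, total: 2 }
```

The same guard pattern recurs in the MultiEdit and Task hunks for `edits` and `output`.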

@@ -8,12 +8,12 @@ export function WebFetchTool({
input,
output
}: {
-input?: WebFetchToolInput
+input: WebFetchToolInput
output?: WebFetchToolOutput
}): NonNullable<CollapseProps['items']>[number] {
return {
key: 'tool',
-label: <ToolTitle icon={<Globe className="h-4 w-4" />} label="Web Fetch" params={input?.url} />,
+label: <ToolTitle icon={<Globe className="h-4 w-4" />} label="Web Fetch" params={input.url} />,
children: <div>{output}</div>
}
}

View File

@@ -8,7 +8,7 @@ export function WebSearchTool({
input,
output
}: {
-input?: WebSearchToolInput
+input: WebSearchToolInput
output?: WebSearchToolOutput
}): NonNullable<CollapseProps['items']>[number] {
// if there is output, count the number of results
@@ -20,7 +20,7 @@ export function WebSearchTool({
<ToolTitle
icon={<Globe className="h-4 w-4" />}
label="Web Search"
-params={input?.query}
+params={input.query}
stats={output ? `${resultCount} ${resultCount === 1 ? 'result' : 'results'}` : undefined}
/>
),

View File

@@ -7,12 +7,12 @@ import type { WriteToolInput, WriteToolOutput } from './types'
export function WriteTool({
input
}: {
-input?: WriteToolInput
+input: WriteToolInput
output?: WriteToolOutput
}): NonNullable<CollapseProps['items']>[number] {
return {
key: 'tool',
-label: <ToolTitle icon={<FileText className="h-4 w-4" />} label="Write" params={input?.file_path} />,
-children: <div>{input?.content}</div>
+label: <ToolTitle icon={<FileText className="h-4 w-4" />} label="Write" params={input.file_path} />,
+children: <div>{input.content}</div>
}
}

View File

@@ -1,10 +1,7 @@
import { loggerService } from '@logger'
-import { useAppSelector } from '@renderer/store'
-import { selectPendingPermission } from '@renderer/store/toolPermissions'
import type { NormalToolResponse } from '@renderer/types'
import type { CollapseProps } from 'antd'
-import { Collapse, Spin } from 'antd'
-import { useTranslation } from 'react-i18next'
+import { Collapse } from 'antd'
// export all types
export * from './types'
@@ -86,41 +83,17 @@ function ToolContent({ toolName, input, output }: { toolName: AgentToolsType; in
// 统一的组件渲染入口
export function MessageAgentTools({ toolResponse }: { toolResponse: NormalToolResponse }) {
const { arguments: args, response, tool, status } = toolResponse
-logger.debug('Rendering agent tool response', {
+logger.info('Rendering agent tool response', {
tool: tool,
arguments: args,
status,
response
})
-const pendingPermission = useAppSelector((state) =>
-selectPendingPermission(state.toolPermissions, toolResponse.toolCallId)
-)
if (status === 'pending') {
-if (pendingPermission) {
-return <ToolPermissionRequestCard toolResponse={toolResponse} />
-}
-return <ToolPendingIndicator toolName={tool?.name} description={tool?.description} />
+return <ToolPermissionRequestCard toolResponse={toolResponse} />
}
return (
<ToolContent toolName={tool.name as AgentToolsType} input={args as ToolInput} output={response as ToolOutput} />
)
}
-function ToolPendingIndicator({ toolName, description }: { toolName?: string; description?: string }) {
-const { t } = useTranslation()
-const label = toolName || t('agent.toolPermission.toolPendingFallback', 'Tool')
-const detail = description?.trim() || t('agent.toolPermission.executing')
-return (
-<div className="flex w-full max-w-xl items-center gap-3 rounded-xl border border-default-200 bg-default-100 px-4 py-3 shadow-sm">
-<Spin size="small" />
-<div className="flex flex-col gap-1">
-<span className="font-semibold text-default-700 text-sm">{label}</span>
-<span className="text-default-500 text-xs">{detail}</span>
-</div>
-</div>
-)
-}

View File
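The index hunk above changes what renders while a tool call is pending: one side always shows the permission card, the other shows it only when a permission request is actually pending and falls back to a spinner-style indicator otherwise. The decision itself is a small pure function, sketched here with hypothetical return labels standing in for the JSX:

```typescript
// Render decision for a tool response, following the branchier side of
// the index hunk: permission card only when a request is pending,
// otherwise a pending indicator; finished calls render their content.
type Render = 'permission-card' | 'pending-indicator' | 'tool-content'

function pickRender(status: 'pending' | 'done', hasPendingPermission: boolean): Render {
  if (status === 'pending') {
    return hasPendingPermission ? 'permission-card' : 'pending-indicator'
  }
  return 'tool-content'
}

console.log(pickRender('pending', false)) // 'pending-indicator'
console.log(pickRender('pending', true)) // 'permission-card'
console.log(pickRender('done', false)) // 'tool-content'
```

Factoring the branch out like this makes the difference between the two sides of the diff easy to state: the simpler side collapses the first two cases into one.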

@@ -9,7 +9,7 @@ import {
DEFAULT_TEMPERATURE,
MAX_CONTEXT_COUNT
} from '@renderer/config/constant'
-import { isOpenAIModel, isSupportVerbosityModel } from '@renderer/config/models'
+import { isOpenAIModel } from '@renderer/config/models'
import { UNKNOWN } from '@renderer/config/translate'
import { useCodeStyle } from '@renderer/context/CodeStyleProvider'
import { useTheme } from '@renderer/context/ThemeProvider'
@@ -56,7 +56,7 @@ import type { Assistant, AssistantSettings, CodeStyleVarious, MathEngine } from
import { isGroqSystemProvider, ThemeMode } from '@renderer/types'
import { modalConfirm } from '@renderer/utils'
import { getSendMessageShortcutLabel } from '@renderer/utils/input'
-import { isSupportServiceTierProvider, isSupportVerbosityProvider } from '@renderer/utils/provider'
+import { isSupportServiceTierProvider } from '@renderer/utils/provider'
import { Button, Col, InputNumber, Row, Slider, Switch } from 'antd'
import { Settings2 } from 'lucide-react'
import type { FC } from 'react'
@@ -183,10 +183,7 @@ const SettingsTab: FC<Props> = (props) => {
const model = assistant.model || getDefaultModel()
-const showOpenAiSettings =
-isOpenAIModel(model) ||
-isSupportServiceTierProvider(provider) ||
-(isSupportVerbosityModel(model) && isSupportVerbosityProvider(provider))
+const showOpenAiSettings = isOpenAIModel(model) || isSupportServiceTierProvider(provider)
return (
<Container className="settings-tab">

View File

@@ -404,11 +404,11 @@ const UpdateNotesWrapper = styled.div`
margin: 8px 0;
background-color: var(--color-bg-2);
border-radius: 6px;
-color: var(--color-text-2);
-font-size: 14px;
p {
margin: 0;
+color: var(--color-text-2);
+font-size: 14px;
}
`

View File

@@ -135,18 +135,12 @@ const AssistantModelSettings: FC<Props> = ({ assistant, updateAssistant, updateA
<Input
value={typeof param.value === 'string' ? param.value : JSON.stringify(param.value, null, 2)}
onChange={(e) => {
-// For JSON type parameters, always store the value as a STRING
-//
-// Data Flow:
-// 1. UI stores: { name: "config", value: '{"key":"value"}', type: "json" } ← STRING format
-// 2. API parses: getCustomParameters() in src/renderer/src/aiCore/utils/reasoning.ts:687-696
-// calls JSON.parse() to convert string to object
-// 3. Request sends: The parsed object is sent to the AI provider
-//
-// Previously this code was parsing JSON here and storing
-// the object directly, which caused getCustomParameters() to fail when trying
-// to JSON.parse() an already-parsed object.
-onUpdateCustomParameter(index, 'value', e.target.value)
+try {
+const jsonValue = JSON.parse(e.target.value)
+onUpdateCustomParameter(index, 'value', jsonValue)
+} catch {
+onUpdateCustomParameter(index, 'value', e.target.value)
+}
}}
/>
)

View File
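The long comment in the hunk above documents why JSON-typed custom parameters must be stored as strings: a later parse-on-read step calls `JSON.parse()`, and feeding it an already-parsed object fails. A sketch of that failure mode (`getParsedValue` and the `CustomParam` shape are hypothetical stand-ins; only the store-as-string / parse-on-read split is from the diff):

```typescript
// Store the raw string; parse only when building the request. If the UI
// eagerly parses and stores an object, a later JSON.parse() on it (via
// implicit toString -> "[object Object]") throws.
type CustomParam = { name: string; value: unknown; type: 'json' }

function getParsedValue(param: CustomParam): unknown {
  // mirrors the parse-on-read step described in the comment
  return JSON.parse(String(param.value))
}

const storedAsString: CustomParam = { name: 'config', value: '{"key":"value"}', type: 'json' }
console.log(getParsedValue(storedAsString)) // { key: 'value' }

const storedAsObject: CustomParam = { name: 'config', value: { key: 'value' }, type: 'json' }
let failed = false
try {
  getParsedValue(storedAsObject)
} catch {
  failed = true // String({ key: 'value' }) === '[object Object]' — not valid JSON
}
console.log(failed) // true
```

This is why one side of the hunk deliberately stores `e.target.value` verbatim instead of parsing it in the onChange handler.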

@@ -67,6 +67,7 @@ import OVMSSettings from './OVMSSettings'
import ProviderOAuth from './ProviderOAuth'
import SelectProviderModelPopup from './SelectProviderModelPopup'
import VertexAISettings from './VertexAISettings'
+import VolcengineSettings from './VolcengineSettings'
interface Props {
providerId: string
@@ -593,6 +594,7 @@ const ProviderSetting: FC<Props> = ({ providerId }) => {
{provider.id === 'copilot' && <GithubCopilotSettings providerId={provider.id} />}
{provider.id === 'aws-bedrock' && <AwsBedrockSettings />}
{provider.id === 'vertexai' && <VertexAISettings />}
+{provider.id === 'doubao' && <VolcengineSettings />}
<ModelList providerId={provider.id} />
</SettingContainer>
)

View File

@@ -0,0 +1,149 @@
import { HStack } from '@renderer/components/Layout'
import { PROVIDER_URLS } from '@renderer/config/providers'
import { useVolcengineSettings } from '@renderer/hooks/useVolcengine'
import { Alert, Button, Input, Space } from 'antd'
import type { FC } from 'react'
import { useCallback, useEffect, useState } from 'react'
import { useTranslation } from 'react-i18next'
import { SettingHelpLink, SettingHelpText, SettingHelpTextRow, SettingSubtitle } from '..'
const VolcengineSettings: FC = () => {
const { t } = useTranslation()
const { region, projectName, setRegion, setProjectName } = useVolcengineSettings()
const providerConfig = PROVIDER_URLS['doubao']
const apiKeyWebsite = providerConfig?.websites?.apiKey
const [localAccessKeyId, setLocalAccessKeyId] = useState('')
const [localSecretAccessKey, setLocalSecretAccessKey] = useState('')
const [localRegion, setLocalRegion] = useState(region)
const [localProjectName, setLocalProjectName] = useState(projectName)
const [saving, setSaving] = useState(false)
const [hasCredentials, setHasCredentials] = useState(false)
// Check if credentials exist on mount
useEffect(() => {
window.api.volcengine.hasCredentials().then(setHasCredentials)
}, [])
// Sync local state with store (only for region and projectName)
useEffect(() => {
setLocalRegion(region)
setLocalProjectName(projectName)
}, [region, projectName])
const handleSaveCredentials = useCallback(async () => {
if (!localAccessKeyId || !localSecretAccessKey) {
window.toast.error(t('settings.provider.volcengine.credentials_required'))
return
}
setSaving(true)
try {
// Save credentials to secure storage via IPC first
await window.api.volcengine.saveCredentials(localAccessKeyId, localSecretAccessKey)
// Only update Redux after IPC success (for region and projectName only)
setRegion(localRegion)
setProjectName(localProjectName)
setHasCredentials(true)
// Clear local credential state after successful save (they're now in secure storage)
setLocalAccessKeyId('')
setLocalSecretAccessKey('')
window.toast.success(t('settings.provider.volcengine.credentials_saved'))
} catch (error) {
window.toast.error(String(error))
} finally {
setSaving(false)
}
}, [localAccessKeyId, localSecretAccessKey, localRegion, localProjectName, setRegion, setProjectName, t])
const handleClearCredentials = useCallback(async () => {
try {
await window.api.volcengine.clearCredentials()
setLocalAccessKeyId('')
setLocalSecretAccessKey('')
setHasCredentials(false)
window.toast.success(t('settings.provider.volcengine.credentials_cleared'))
} catch (error) {
window.toast.error(String(error))
}
}, [t])
return (
<>
<SettingSubtitle style={{ marginTop: 5 }}>{t('settings.provider.volcengine.title')}</SettingSubtitle>
<Alert type="info" style={{ marginTop: 5 }} message={t('settings.provider.volcengine.description')} showIcon />
<SettingSubtitle style={{ marginTop: 15 }}>{t('settings.provider.volcengine.access_key_id')}</SettingSubtitle>
<Input
value={localAccessKeyId}
placeholder="Access Key ID"
onChange={(e) => setLocalAccessKeyId(e.target.value)}
style={{ marginTop: 5 }}
spellCheck={false}
/>
<SettingHelpTextRow>
<SettingHelpText>{t('settings.provider.volcengine.access_key_id_help')}</SettingHelpText>
</SettingHelpTextRow>
<SettingSubtitle style={{ marginTop: 15 }}>{t('settings.provider.volcengine.secret_access_key')}</SettingSubtitle>
<Input.Password
value={localSecretAccessKey}
placeholder="Secret Access Key"
onChange={(e) => setLocalSecretAccessKey(e.target.value)}
style={{ marginTop: 5 }}
spellCheck={false}
/>
<SettingHelpTextRow style={{ justifyContent: 'space-between' }}>
<HStack>
{apiKeyWebsite && (
<SettingHelpLink target="_blank" href={apiKeyWebsite}>
{t('settings.provider.get_api_key')}
</SettingHelpLink>
)}
</HStack>
<SettingHelpText>{t('settings.provider.volcengine.secret_access_key_help')}</SettingHelpText>
</SettingHelpTextRow>
<SettingSubtitle style={{ marginTop: 15 }}>{t('settings.provider.volcengine.region')}</SettingSubtitle>
<Input
value={localRegion}
placeholder="cn-beijing"
onChange={(e) => setLocalRegion(e.target.value)}
onBlur={() => setRegion(localRegion)}
style={{ marginTop: 5 }}
/>
<SettingHelpTextRow>
<SettingHelpText>{t('settings.provider.volcengine.region_help')}</SettingHelpText>
</SettingHelpTextRow>
<SettingSubtitle style={{ marginTop: 15 }}>{t('settings.provider.volcengine.project_name')}</SettingSubtitle>
<Input
value={localProjectName}
placeholder="default"
onChange={(e) => setLocalProjectName(e.target.value)}
onBlur={() => setProjectName(localProjectName)}
style={{ marginTop: 5 }}
/>
<SettingHelpTextRow>
<SettingHelpText>{t('settings.provider.volcengine.project_name_help')}</SettingHelpText>
</SettingHelpTextRow>
<Space style={{ marginTop: 15 }}>
<Button type="primary" onClick={handleSaveCredentials} loading={saving}>
{t('settings.provider.volcengine.save_credentials')}
</Button>
{hasCredentials && (
<Button danger onClick={handleClearCredentials}>
{t('settings.provider.volcengine.clear_credentials')}
</Button>
)}
</Space>
</>
)
}
export default VolcengineSettings

View File

@@ -12,7 +12,7 @@ import type { FetchChatCompletionParams } from '@renderer/types'
import type { Assistant, MCPServer, MCPTool, Model, Provider } from '@renderer/types'
import type { StreamTextParams } from '@renderer/types/aiCoreTypes'
import { type Chunk, ChunkType } from '@renderer/types/chunk'
-import type { Message, ResponseError } from '@renderer/types/newMessage'
+import type { Message } from '@renderer/types/newMessage'
import type { SdkModel } from '@renderer/types/sdk'
import { removeSpecialCharactersForTopicName, uuid } from '@renderer/utils'
import { abortCompletion, readyToAbort } from '@renderer/utils/abortController'
@@ -476,7 +476,7 @@ export async function checkApi(provider: Provider, model: Model, timeout = 15000
} else {
const abortId = uuid()
const signal = readyToAbort(abortId)
-let streamError: ResponseError | undefined
+let chunkError
const params: StreamTextParams = {
system: assistant.prompt,
prompt: 'hi',
@@ -495,18 +495,19 @@ export async function checkApi(provider: Provider, model: Model, timeout = 15000
callType: 'check',
onChunk: (chunk: Chunk) => {
if (chunk.type === ChunkType.ERROR) {
-streamError = chunk.error
+chunkError = chunk.error
} else {
abortCompletion(abortId)
}
}
}
+// Try streaming check
try {
await ai.completions(model.id, params, config)
} catch (e) {
-if (!isAbortError(e) && !isAbortError(streamError)) {
-throw streamError ?? e
+if (!isAbortError(e) && !isAbortError(chunkError)) {
+throw e
}
}
}

View File
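The `checkApi` hunk above differs in which error surfaces when the streaming check fails: one side throws whatever the SDK raised, the other prefers the error captured from an ERROR chunk (`throw streamError ?? e`). A sketch of that capture-and-prefer pattern, with `isAbortError` and the chunk shape simplified to stand-ins:

```typescript
// Capture an error delivered via a stream chunk and prefer it over the
// (often less specific) error thrown by the SDK, while still treating
// deliberate aborts as success.
type Chunk = { type: 'error'; error: Error } | { type: 'text' }

function isAbortError(e: unknown): boolean {
  return e instanceof Error && e.name === 'AbortError'
}

function runCheck(chunks: Chunk[], thrown?: Error): void {
  let streamError: Error | undefined
  for (const chunk of chunks) {
    if (chunk.type === 'error') streamError = chunk.error
  }
  try {
    if (thrown) throw thrown
  } catch (e) {
    if (!isAbortError(e) && !isAbortError(streamError)) {
      throw streamError ?? e // surface the richer stream error when present
    }
  }
}

// An abort after a successful first chunk is treated as success.
const abortErr = new Error('aborted')
abortErr.name = 'AbortError'
runCheck([{ type: 'text' }], abortErr) // no throw

// A real stream error is surfaced even when the SDK throws something generic.
try {
  runCheck([{ type: 'error', error: new Error('invalid api key') }], new Error('stream failed'))
} catch (e) {
  console.log((e as Error).message) // 'invalid api key'
}
```

The trade-off the diff records: `throw streamError ?? e` gives the user the provider's actual complaint, while plain `throw e` reports only what the SDK saw.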

@@ -242,6 +242,10 @@ vi.mock('@renderer/store/llm.ts', () => {
secretAccessKey: '',
apiKey: '',
region: ''
},
+volcengine: {
+region: 'cn-beijing',
+projectName: 'default'
+}
}
} satisfies LlmState

View File

@@ -216,7 +216,7 @@ const assistantsSlice = createSlice({
if (agent.id === action.payload.assistantId) {
for (const key in settings) {
if (!agent.settings) {
-agent.settings = { ...DEFAULT_ASSISTANT_SETTINGS }
+agent.settings = DEFAULT_ASSISTANT_SETTINGS
}
agent.settings[key] = settings[key]
}

View File
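The one-line assistantsSlice change above is the classic shared-mutable-default problem: assigning `DEFAULT_ASSISTANT_SETTINGS` directly hands every assistant the same object reference, so a later write through one assistant mutates the module-level default. A minimal sketch outside Redux/Immer (names and fields hypothetical) of why the spread matters:

```typescript
// Buggy: direct assignment shares one mutable reference, so writing
// through the consumer mutates the module-level default itself.
const DEFAULT_SETTINGS = { temperature: 1 }
const shared = { settings: DEFAULT_SETTINGS }
shared.settings.temperature = 0.2
console.log(DEFAULT_SETTINGS.temperature) // 0.2 — the default was mutated

// Fixed: the spread makes a shallow copy, so the default stays intact.
const DEFAULT_SETTINGS_2 = { temperature: 1 }
const copied = { settings: { ...DEFAULT_SETTINGS_2 } }
copied.settings.temperature = 0.2
console.log(DEFAULT_SETTINGS_2.temperature) // 1
```

Inside a Redux Toolkit reducer Immer drafts soften some of this, but the default object itself lives outside the draft, which is why the reviewed side copies it with a spread.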

@@ -67,7 +67,7 @@ const persistedReducer = persistReducer(
{
key: 'cherry-studio',
storage,
-version: 179,
+version: 180,
blacklist: ['runtime', 'messages', 'messageBlocks', 'tabs', 'toolPermissions'],
migrate
},

Some files were not shown because too many files have changed in this diff.