Compare commits

..

82 Commits

Author SHA1 Message Date
dev
86b7eecd09 chore: update release notes for v1.6.6 2025-11-01 19:18:54 +08:00
SuYao
acb88bbb5b perf: optimize QR code generation and connection info for phone LAN export (#11086)
* Increase QR code margin for better scanning reliability

- Change QRCodeSVG marginSize from 2 to 4 pixels
- Maintains same QR code size (160px) and error correction level (Q)
- Improves readability and scanning success rate on mobile devices

* Optimize QR code generation and connection info for phone LAN export

- Increase QR code size to 180px and reduce error correction to 'L' for better mobile scanning
- Replace hardcoded logo path with AppLogo config and increase logo size to 60px
- Simplify connection info by removing candidates array and using only essential IP/port data

* Optimize QR code data structure for LAN connection

- Compress IP addresses to numeric format to reduce QR code complexity
- Use compact array format instead of verbose JSON object structure
- Remove debug logging to streamline connection flow
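
An illustrative reading of the "numeric format" bullet above: pack each dotted-quad IPv4 address into a 32-bit integer and emit a compact array payload. A minimal sketch; the payload shape, port, and function names are assumptions, not the project's actual code:

```ts
// Hypothetical helpers for a compact QR payload; not the actual implementation.
function ipv4ToNumber(ip: string): number {
  // '192.168.1.23' -> 3232235799 (big-endian packing of the four octets)
  return ip.split('.').reduce((acc, octet) => ((acc << 8) | (Number(octet) & 0xff)) >>> 0, 0)
}

function numberToIpv4(n: number): string {
  return [24, 16, 8, 0].map((shift) => (n >>> shift) & 0xff).join('.')
}

// Compact array format instead of a verbose JSON object:
// e.g. [formatVersion, port, packedIp] rather than { version, host, port, candidates }
const qrPayload = JSON.stringify([1, 7017, ipv4ToNumber('192.168.1.23')])
```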

* feat: update WebSocket status and candidate response types; refine connection info handling

* Increase QR code size and error correction for better scanning

- Increase QR code size from 180px to 300px for improved readability
- Change error correction level from L (low) to H (high) for better reliability
- Reduce logo size from 60px to 40px to accommodate larger QR data
- Increase margin size from 1 to 2 for better border clearance
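
As a sketch, the parameters from these bullets map onto qrcode.react props roughly like this (the component shape and logo source are illustrative):

```tsx
import { QRCodeSVG } from 'qrcode.react'

// Illustrative component; `payload` and `logoUrl` are placeholders.
const ExportQRCode = ({ payload, logoUrl }: { payload: string; logoUrl: string }) => (
  <QRCodeSVG
    value={payload}
    size={300} // up from 180 for readability
    level="H" // high error correction tolerates the logo overlay
    marginSize={2} // border clearance
    imageSettings={{ src: logoUrl, width: 40, height: 40, excavate: true }}
  />
)
```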

* Adjust QR code and logo sizes to improve the scanning experience

* fix(i18n): Auto update translations for PR #11086

* fix(i18n): Auto update translations for PR #11086

* fix(i18n): Auto update translations for PR #11086

---------

Co-authored-by: GitHub Action <action@github.com>
2025-11-01 19:18:54 +08:00
kangfenmao
d07ed1576d feat: add enterprise section and remove license from AboutSettings
- Introduced an "Enterprise" section in the i18n files for English, Simplified Chinese, and Traditional Chinese.
- Removed the "License" section from the AboutSettings component, replacing it with a link to the enterprise website.
- Updated icons in the AboutSettings component to reflect the new structure.
2025-11-01 19:18:54 +08:00
dev
c265577330 Revert "fix(knowledge): force choose knowledge aisdk error (#11006)"
This reverts commit e5f000733f.
2025-10-31 18:22:11 +08:00
槑囿脑袋
521051877d feat: restore data to mobile App (#10108)
* feat: restore data to App

* fix: i18n check

* fix: lint

* Change WebSocket service port to 11451

- Update default port from 3000 to 11451 for WebSocket connections
- Maintain existing service structure and client connection handling

* Add local IP address to WebSocket server configuration

- Set server path using local IP address for improved network accessibility
- Maintain existing CORS policy with wildcard origin
- Keep backward compatibility with current connection handling

* Remove local IP path and enforce WebSocket transport

- Replace dynamic local IP path with static WebSocket transport configuration
- Maintain CORS policy with wildcard origin for cross-origin connections
- Ensure reliable WebSocket-only communication by disabling fallback transports

* Add detailed logging to WebSocket connection flow

- Enhance WebSocketService with verbose connection logging including transport type and client count
- Add comprehensive logging in ExportToPhoneLanPopup for WebSocket initialization and status tracking
- Improve error handling with null checks for main window before sending events

* Add engine-level WebSocket connection monitoring

- Add initial_headers event listener to log connection attempts with URL and headers
- Add engine connection event to log established connections with remote addresses
- Add startup logs for server binding and allowed transports

* chore: change to use port 7017

* Improve local IP address selection with interface priority system

- Implement network interface priority ranking to prefer Ethernet/Wi-Fi over virtual/VPN interfaces
- Add detailed logging for interface discovery and selection process
- Remove websocket-only transport restriction for broader client compatibility
- Clean up unused parameter in initial_headers event handler
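
A rough sketch of such an interface-priority ranking on top of Node's os.networkInterfaces(); the patterns and ranking below are illustrative, and per the later bullets the real list also covers Tailscale/WireGuard and platform-specific names:

```ts
import os from 'node:os'

// Virtual/VPN interface name patterns get a lower priority (illustrative list)
const LOW_PRIORITY_PATTERNS = [/^utun/i, /^tun/i, /^tap/i, /^wg/i, /tailscale/i, /vmnet/i, /docker/i]

function pickLocalIPv4(): string | undefined {
  const candidates: { address: string; rank: number }[] = []
  for (const [name, infos] of Object.entries(os.networkInterfaces())) {
    for (const info of infos ?? []) {
      if (info.family !== 'IPv4' || info.internal) continue
      // rank 0 = physical (Ethernet/Wi-Fi), rank 1 = virtual/VPN
      const rank = LOW_PRIORITY_PATTERNS.some((re) => re.test(name)) ? 1 : 0
      candidates.push({ address: info.address, rank })
    }
  }
  // Prefer physical interfaces over VPN/virtual ones
  candidates.sort((a, b) => a.rank - b.rank)
  return candidates[0]?.address
}
```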

* Add VPN interface patterns for Tailscale and WireGuard

- Include Tailscale VPN interfaces in network interface filtering
- Add WireGuard VPN interfaces to low-priority network candidates
- Maintain existing VPN tunnel interface patterns for compatibility

* Add network interface prioritization for QR code generation

- Implement `getAllCandidates()` method to scan and prioritize network interfaces by type (Ethernet/Wi-Fi over VPN/virtual interfaces)
- Update QR code payload to include all candidate IPs with priority rankings instead of single host
- Add comprehensive interface pattern matching for macOS, Windows, and Linux systems

* Add WebSocket getAllCandidates IPC channel

- Add new WebSocket_GetAllCandidates enum value to IpcChannel
- Register getAllCandidates handler in main process IPC
- Expose getAllCandidates method in preload script API
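
The three layers described here follow the usual Electron IPC pattern; schematically (the channel string and service object are assumptions):

```ts
import { contextBridge, ipcMain, ipcRenderer } from 'electron'

declare const webSocketService: { getAllCandidates(): unknown } // illustrative service

// packages/shared/IpcChannel.ts: the new enum value (string value is an assumption)
export enum IpcChannel {
  WebSocket_GetAllCandidates = 'websocket:get-all-candidates'
}

// main process: register the handler
ipcMain.handle(IpcChannel.WebSocket_GetAllCandidates, () => webSocketService.getAllCandidates())

// preload: expose the method to the renderer
contextBridge.exposeInMainWorld('api', {
  getAllCandidates: () => ipcRenderer.invoke(IpcChannel.WebSocket_GetAllCandidates)
})
```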

* Add WebSocket connection logging and temporary test button

- Add URL and method logging to WebSocket engine connection events
- Implement Socket.IO connect and connect_error event handlers with logging
- Add temporary test button to force connection status for debugging

* Clean up WebSocket logging and remove debug code

- Remove verbose debug logs from WebSocket service and connection handling
- Consolidate connection logging into single informative messages
- Remove temporary test button and force connection functionality from UI
- Add missing "sending" translation key for export button loading state

* Enhance file transfer with progress tracking and improved UI

- Add transfer speed monitoring and formatted file size display in WebSocket service
- Implement detailed connection and transfer state management in UI component
- Improve visual feedback with status indicators, progress bars, and error handling

* Enhance WebSocket service and LAN export UI with improved logging and user experience

- Add detailed WebSocket server configuration with transports, CORS, and timeout settings
- Implement comprehensive connection logging at both Socket.IO and Engine.IO levels
- Refactor export popup with modular components, status indicators, and i18n support

* Remove redundant logging from the WebSocket connection flow

* Remove dot indicator from connection status component

- Simplify status style map by removing unused dot color properties
- Delete dot indicator element from connection status display
- Maintain existing border and background color styling for status states

* Refactor ExportToPhoneLanPopup with dedicated UI components and improved UX

- Extract QR code display states into separate components (LoadingQRCode, ScanQRCode, ConnectingAnimation, ConnectedDisplay, ErrorQRCode)
- Add confirmation dialog when attempting to close during active file transfer
- Improve WebSocket cleanup and modal dismissal behavior with proper connection handling

* Remove close button hiding during QR code generation

- Eliminate `hideCloseButton={isSending}` prop to keep close button visible
- Maintain consistent modal behavior throughout export process
- Prevent user confusion by ensuring close option remains available

* auto close

* Extract auto-close countdown into separate component

- Move auto-close countdown logic from TransferProgress to dedicated AutoCloseCountdown component
- Update styling to use paddingTop instead of marginTop for better spacing
- Clean up TransferProgress dependencies by removing autoCloseCountdown

* Add LAN transfer translation strings, including the auto-close hint and the close-confirmation message

---------

Co-authored-by: suyao <sy20010504@gmail.com>

(cherry picked from commit 2a06c606e1)
2025-10-31 17:39:57 +08:00
defi-failure
16252a6263 feat: add confirmation modal for activating protocol-installed MCP (#11070)
* feat: add confirmation modal for activating protocol-installed MCP

* fix: sync i18n

* fix(i18n): Auto update translations for PR #11070

* chore: verify ci is working

* Revert "chore: verify ci is working"

This reverts commit a2434a397d.

---------

Co-authored-by: GitHub Action <action@github.com>

(cherry picked from commit 68e0d8b0f1)
2025-10-31 16:32:38 +08:00
亢奋猫
8e51f6c598 feat(useAppInit): implement automatic update checks with interval sup… (#11063)
feat(useAppInit): implement automatic update checks with interval support

- Added a function to check for updates, which is called initially and set to run every 6 hours if the app is packaged and auto-update is enabled.
- Refactored the initial update check to utilize the new function for better code organization and clarity.
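
A hedged sketch of the described behavior as a React hook; the hook and parameter names are illustrative:

```ts
import { useEffect } from 'react'

const UPDATE_CHECK_INTERVAL = 6 * 60 * 60 * 1000 // 6 hours

function useAutoUpdateCheck(isPackaged: boolean, autoUpdateEnabled: boolean, checkForUpdates: () => void) {
  useEffect(() => {
    if (!isPackaged || !autoUpdateEnabled) return
    checkForUpdates() // initial check on startup
    const timer = setInterval(checkForUpdates, UPDATE_CHECK_INTERVAL)
    return () => clearInterval(timer) // clean up on unmount or when deps change
  }, [isPackaged, autoUpdateEnabled, checkForUpdates])
}
```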
2025-10-31 13:35:52 +08:00
ABucket
1981248b42 docs: fix invalid link in the contributing guide (#11038)
docs: fix invalid link
(cherry picked from commit 0767952a6f)
2025-10-30 14:02:14 +08:00
槑囿脑袋
e5f000733f fix(knowledge): force choose knowledge aisdk error (#11006)
fix: aisdk error
2025-10-29 14:43:58 +08:00
Chen Tao
cafd40bc1c fix: support toolchoice for knowledge (#10763)
* fix: support toolchoice for knowledge

* fix: ci
2025-10-29 14:43:40 +08:00
George·Dong
e46a45f409 chore(ci): exempt all milestones and assignee from staling (#11008)
(cherry picked from commit 5986800c9d)
2025-10-28 20:50:41 +08:00
Jake Jia
790bb923e8 fix: align and unify LocalBackupManager footer layout (#10985)
* fix: align and unify LocalBackupManager footer layout

- Use Space component to wrap footer buttons, consistent with S3BackupManager
- Optimize delete button i18n text by using count parameter instead of hardcoded concatenation

* fix: fix the i18n issue in the delete button text

(cherry picked from commit c7ceb3035d)
2025-10-28 20:50:11 +08:00
Phantom
2529662a29 fix(Navbar): adjust min-height calculation for fullscreen mode on Mac (#10990)
Ensure the navbar height is correctly calculated when toggling fullscreen mode on macOS by considering the $isFullScreen prop

(cherry picked from commit 7bcae6fba2)
2025-10-28 20:50:03 +08:00
SuYao
cd26190fe9 feat: add isClaude45ReasoningModel function and update getTopP logic (#10988)
* feat: add isClaude45ReasoningModel function and update getTopP logic

* fix: update getTopP logic to correctly handle Claude45 model support

* fix: update getTemperature and getTopP logic to handle Claude45 model conditions

* fix: update getTemperature logic to correctly handle Claude45 model conditions
fix: refine isClaude45ReasoningModel regex pattern for better matching

(cherry picked from commit 250f59234b)
2025-10-28 10:05:33 +08:00
fullex
c2a78b4129 chore: add CODEOWNERS for databases directories
(cherry picked from commit 44e01e5ad4)
2025-10-28 10:05:22 +08:00
Xin Rui
1c3e1b5954 fix: up-down button does not hide properly in some cases (#10693)
* fix: simplify navigation button auto-hide logic

Remove complex state management (isNearButtons, resetHideTimer) and rely directly
on isInTriggerArea to control button visibility. This fixes the issue where buttons
don't properly auto-hide by using mouse position detection instead of fragile state tracking.

- Simplify showNavigation to just show and clear timers
- Remove resetHideTimer function and use showNavigation directly
- Simplify handleNavigationMouseLeave to always schedule hide after 500ms
- Update all button handlers to call showNavigation() instead of resetHideTimer()
- Rely on mouse enter/leave events to control visibility state

* refactor(ChatNavigation): replace native setTimeout with custom useTimer hook

Use custom useTimer hook for better timer management and cleanup

---------

Co-authored-by: icarus <eurfelux@gmail.com>
(cherry picked from commit c5ce0b763b)
2025-10-28 10:05:19 +08:00
Phantom
261cfab2f0 fix(hooks): prevent save on composing enter key in useInPlaceEdit (#10972)
(cherry picked from commit f5a1d3f8d0)
2025-10-28 10:05:16 +08:00
Phantom
36b67c6917 ci(auto-i18n): disable package manager cache for node setup (#10957)
* ci(github): disable package manager cache for node setup

* refactor(i18n): translate sync script comments to english

Update all Chinese comments and log messages in sync-i18n.ts to English for better international collaboration

* style(scripts): format error message string in sync-i18n.ts

(cherry picked from commit dedfc79406)
2025-10-28 10:04:43 +08:00
Phantom
ac319b3574 docs: update PR template and README with feature PR restrictions (#10955)
* docs: update PR template and README with feature PR restrictions

Add temporary hold notice for Redux/IndexedDB feature PRs in both pull request template and README
Fix whitespace and formatting inconsistencies in README

* docs: update contributing guidelines with temporary PR restrictions

Add important notice about temporary restrictions on data-changing feature PRs
Clarify acceptable contribution types during v2.0.0 development phase

* docs: remove warning about feature PR restrictions

The warning about temporary restrictions on feature PRs involving Redux or IndexedDB changes has been removed as it is no longer relevant

* docs: remove core developer membership section from contributing guides

(cherry picked from commit 1f0fd8215a)
2025-10-28 10:04:38 +08:00
SuYao
91050899c4 fix: optimize excluded websites handling in xai provider configuration (#10894)
(cherry picked from commit 13093bb821)
2025-10-24 18:35:20 +08:00
Phantom
156ceca4a7 fix: use system prompt variables in quick assistant (#10925)
* feat: replace prompt variables in assistant before chat completion

* refactor(home-window): reorder prompt variable replacement for clarity

Move prompt variable replacement before message preparation to improve logical flow

(cherry picked from commit c7c9e1ee44)
2025-10-24 18:35:17 +08:00
Phantom
c3b0beb37f fix(InputbarTools): allow url context for gemini endpoint type model (#10926)
fix(InputbarTools): allow url context for gemini endpoint type

Add condition to check for gemini endpoint type when determining URL context support

(cherry picked from commit 0081a0740f)
2025-10-24 18:35:04 +08:00
Phantom
f71ce7fe3d fix: silicon reasoning (#10932)
* refactor(aiCore): reorganize reasoning effort logic for different providers

Restructure the reasoning effort calculation logic to handle different model providers more clearly. Move OpenRouter and SiliconFlow specific logic to dedicated sections and remove duplicate checks. Improve maintainability by grouping related provider logic together.

* refactor(sdk): update thinking config type and property names

- Replace inline thinking config type with imported ThinkingConfig type
- Update property names from snake_case to camelCase for consistency
- Add null checks for token limit calculations
- Clarify hard-coded maximum for silicon provider in comments

* refactor(openai): standardize property names to camelCase in thinking_config

Update property names in thinking_config object from snake_case to camelCase for consistency with codebase conventions

(cherry picked from commit 4dfb73c982)
2025-10-24 18:34:53 +08:00
Jake Jia
7b10ff5010 fix: align S3 backup manager action buttons horizontally (#10922)
(cherry picked from commit d184f7a24b)
2025-10-24 18:34:10 +08:00
Pleasure1234
e4036b6991 fix: use nullish coalescing for advanced property updates (#10921)
Replaces logical OR with nullish coalescing when updating advanced server properties to allow empty string values, enabling users to clear fields instead of preserving previous values.
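
The distinction in a few lines of TypeScript (the property name is illustrative):

```ts
const previous = { timeout: '30' }
const input = { timeout: '' } // user cleared the field

// Logical OR treats '' as falsy, so the old value silently survives:
const withOr = input.timeout || previous.timeout // '30'

// Nullish coalescing keeps '', falling back only on null/undefined:
const withNullish = input.timeout ?? previous.timeout // ''
```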

(cherry picked from commit 1ac746a40e)
2025-10-24 18:34:06 +08:00
Phantom
701903d1e0 ci: update OpenAI dependency in i18n workflow (#10914)
* ci: update OpenAI dependency in i18n workflow

Use @cherrystudio/openai instead of openai package for translation dependencies

* ci(workflows): allow workflow dispatch for auto-i18n job

(cherry picked from commit 53881c5824)
2025-10-24 18:33:43 +08:00
Zhaokun
a2004082af fix: topic branch incomplete copy - split ID mapping into two passes (#10900)
Fix the bug where topic branching would not copy all message relationships completely. The issue was that the askId mapping lookup happened in the same loop as ID generation, causing later messages' askIds to fail to map when they referenced messages that had not been processed yet.

Solution: Split into two passes:
 1. First pass: Generate new IDs for all messages and build complete mapping
 2. Second pass: Clone messages and blocks using the complete ID mapping

This ensures all message relationships (especially assistant message askId references) are properly maintained in the new topic.
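
A sketch of the two-pass approach (simplified; per the commit, the real code also clones blocks):

```ts
interface TopicMessage {
  id: string
  askId?: string // reference to the user message this assistant message answers
}

function cloneTopicMessages(messages: TopicMessage[]): TopicMessage[] {
  // Pass 1: generate a new ID for every message so the mapping is complete
  const idMap = new Map<string, string>()
  for (const message of messages) idMap.set(message.id, crypto.randomUUID())

  // Pass 2: clone using the complete mapping, so askId can safely point at a
  // message that appears later in the array
  return messages.map((message) => ({
    ...message,
    id: idMap.get(message.id)!,
    askId: message.askId ? idMap.get(message.askId) : undefined
  }))
}
```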

(cherry picked from commit 35c15cd02c)
2025-10-24 18:33:30 +08:00
SuYao
b8c435138b ci: add GitHub issue tracker workflow with Feishu notifications (#10895)
* feat: add GitHub issue tracker workflow with Feishu notifications

* fix: add missing environment variable for Claude translator in GitHub issue tracker workflow

* fix: update environment variable for Claude translator in GitHub issue tracker workflow

* Add quiet hours handling and scheduled processing for GitHub issue notifications

- Implement quiet hours detection (00:00-08:30 Beijing Time) with delayed notifications
- Add scheduled workflow to process pending issues daily at 08:30 Beijing Time
- Create new script to batch process and summarize multiple pending issues with Claude

* Replace custom Node.js script with Claude Code Action for issue processing

- Migrate from custom JavaScript implementation to Claude Code Action for AI-powered issue summarization and processing
- Simplify workflow by leveraging Claude's built-in GitHub API integration and tool usage capabilities
- Maintain same functionality: fetch pending issues, generate Chinese summaries, send Feishu notifications, and clean up labels
- Update Claude action reference from version pin to main branch for latest features

* Remove GitHub issue comment functionality

- Delete automated AI summary comments on issues after processing
- Remove documentation for manual issue commenting workflow
- Keep Feishu notification system intact while streamlining issue interactions

* Add OIDC token permissions and GitHub token to Claude workflow

- Add `id-token: write` permission for OIDC authentication in both jobs
- Pass `github_token` to Claude action for proper GitHub API access
- Maintain existing issue write and contents read permissions

* Enhance GitHub issue automation workflow with Claude integration

- Refactor Claude action to handle issue analysis, Feishu notification, and comment creation in single step
- Add tool permissions for Bash commands and custom notification script execution
- Update prompt with detailed task instructions including summary generation and automated actions
- Remove separate notification step by integrating all operations into Claude action workflow

* fix

* Remove the step and notes for adding AI summary comments

(cherry picked from commit 3c8b61e268)
2025-10-24 18:33:25 +08:00
kangfenmao
d869c5750a feat: enhance model capabilities with endpoint type validation and add 'gemini' to supported providers 2025-10-24 00:45:58 +08:00
ABucket
3f4d34f6ae fix: deep research model only supports medium search context and reasoning effort (#10676)
Co-authored-by: ABucket <abucket@github.com>
(cherry picked from commit 6f63eefa86)
2025-10-23 10:01:25 +08:00
defi-failure
1da1f10462 feat: add cherryin in provider type options (#10891)
(cherry picked from commit 4a38f2e8b1)
2025-10-23 09:55:52 +08:00
beyondkmp
9e65eae81a feat: support German (#10879)
* feat: support German

* format code

* translate

* update trans

* format

* add de

---------

Co-authored-by: Payne Fu <payne@Paynes-MBP.rcoffice.ringcentral.com>
(cherry picked from commit 4063c20505)
2025-10-22 16:09:21 +08:00
chenxue
a9a16ceb3e fix(aihubmix): fix model route rules (#10878)
Update aihubmix.ts

(cherry picked from commit 50798280db)
2025-10-22 15:49:38 +08:00
dependabot[bot]
a91c35de32 build(deps-dev): bump playwright from 1.52.0 to 1.55.1 (#10850)
Bumps [playwright](https://github.com/microsoft/playwright) from 1.52.0 to 1.55.1.
- [Release notes](https://github.com/microsoft/playwright/releases)
- [Commits](https://github.com/microsoft/playwright/compare/v1.52.0...v1.55.1)

---
updated-dependencies:
- dependency-name: playwright
  dependency-version: 1.55.1
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
(cherry picked from commit 5c7b81569e)
2025-10-22 15:49:21 +08:00
kangfenmao
9f5355b455 chore: update LICENSE file to include full text of GNU AGPL-3.0
- Replaced previous licensing information with the complete text of the GNU Affero General Public License v3.0 (AGPL-3.0).
- Clarified terms regarding commercial use and compliance with AGPL-3.0.
- Added detailed definitions and conditions for users and organizations regarding licensing options.

This update ensures that users have clear access to the licensing terms governing the Cherry Studio Community Edition.

(cherry picked from commit 81ac77e988)
2025-10-22 15:48:50 +08:00
beyondkmp
1ee2a98e5f feat: enhance proxy bypass rules with comprehensive matching (#10817)
* feat: enhance proxy bypass rules with comprehensive matching

- Add support for wildcard domains (*.example.com, .example.com)
- Add CIDR notation support for IPv4 and IPv6 (192.168.0.0/16, 2001:db8::/32)
- Add wildcard IP matching (192.168.1.*)
- Add <local> keyword for local network hostnames
- Support both semicolon and comma separators in bypass rules
- Add comprehensive unit tests with 22 test cases
- Export matchWildcardDomain and matchIpRule for testability
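
A sketch of what the wildcard-domain half of that matching can look like (this matchWildcardDomain is illustrative, not the PR's exact code):

```ts
export function matchWildcardDomain(host: string, rule: string): boolean {
  const h = host.toLowerCase()
  const r = rule.toLowerCase()
  // '*.example.com' matches subdomains (and, in this sketch, the bare domain)
  if (r.startsWith('*.')) return h === r.slice(2) || h.endsWith(r.slice(1))
  // '.example.com' matches any subdomain
  if (r.startsWith('.')) return h.endsWith(r)
  return h === r
}

matchWildcardDomain('api.example.com', '*.example.com') // true
matchWildcardDomain('evil-example.com', '*.example.com') // false: no dot boundary
```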

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* move to devDeps

* delete logs

* feat: enhance ProxyManager with advanced proxy bypass rule handling

- Introduced comprehensive parsing and matching for proxy bypass rules, including support for wildcard domains, CIDR notation, and local network addresses.
- Refactored existing functions and added new utility methods for improved clarity and maintainability.
- Updated unit tests to cover new functionality and ensure robust validation of bypass rules.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

* update proxy rules

* fix lint

* add tips

* delete hostname rule

* add logs

---------

Co-authored-by: Claude <noreply@anthropic.com>
(cherry picked from commit a5049d8872)
2025-10-22 15:48:43 +08:00
Zhaokun
ac2a2b2e3a fix: capture detailed error response body for reranker API failures (#10839)
* fix: capture detailed error response body for reranker API failures

Previously, when the reranker API returned 400 or other error status codes,
only the HTTP status and status text were captured, without reading the
actual error response body that contains detailed error information.

This commit fixes the issue by:
- Reading the error response body (as JSON or text) before throwing error
- Attaching the response details to the error object
- Including responseBody in formatErrorMessage output

This will help diagnose issues like "qwen3-reranker not available" by
showing the actual error message from the API provider.
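
The shape of the fix, sketched with fetch (function and field names are illustrative):

```ts
async function callReranker(url: string, body: unknown): Promise<unknown> {
  const response = await fetch(url, { method: 'POST', body: JSON.stringify(body) })
  if (!response.ok) {
    // Read the body before throwing so the provider's real message isn't lost;
    // clone() lets us fall back to text if the body isn't valid JSON.
    let responseBody: unknown
    try {
      responseBody = await response.clone().json()
    } catch {
      responseBody = await response.text()
    }
    const error = new Error(`Reranker request failed: ${response.status} ${response.statusText}`)
    Object.assign(error, { status: response.status, responseBody })
    throw error
  }
  return response.json()
}
```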

* fix: enhance error handling in GeneralReranker for API failures

This update improves the error handling in the GeneralReranker class by ensuring that the response body is properly cloned and read when an API call fails. The detailed error information, including the status, status text, and body, is now attached to the error object. This change aids in diagnosing issues by providing more context in error messages.

(cherry picked from commit bf35228b49)
2025-10-22 15:48:30 +08:00
beyondkmp
a03c1346a4 fix: Support right-click to paste file content into inputbar (#10730)
* feat: add right-click to paste text file content into input

Implemented context menu functionality for text file attachments that allows users to right-click on a text file attachment to paste its content directly into the input field.

Changes:
- Added onContextMenu prop to CustomTag component for handling right-click events
- Extended AttachmentPreview with onAttachmentContextMenu callback
- Implemented appendTxtContentToInput function to read and paste text file content
- Added clipboard support for copying file content
- Integrated context menu handler in Inputbar component

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* use real path

* 🐛 fix: clear txt attachment after paste

* fix: improve attachment confirm flow

* update i18n

* 🎨 refactor: restyle confirm dialog

* format code

* refactor(ConfirmDialog): replace text buttons with icon buttons and remove i18n

- Replace text-based cancel/confirm buttons with icon buttons for better visual clarity
- Remove unused i18n translation hook as it's no longer needed
- Adjust styling to accommodate new button layout

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: icarus <eurfelux@gmail.com>
(cherry picked from commit 528524b075)
2025-10-22 15:45:50 +08:00
Pleasure1234
b310527210 fix: add continue-on-error & remove unused issue checker (#10821)
(cherry picked from commit b26df0e614)
2025-10-22 15:45:09 +08:00
Phantom
ab78ef71d9 feat(models): add doubao_after_251015 reasoning model type and support (#10826)
* feat(models): add doubao_after_251015 model type and support

Add new model type 'doubao_after_251015' with reasoning effort levels and update regex patterns to handle version 251015 and later

* fix(sdk): add warning for reasoning_effort field and update reasoning logic

Add warning comment about reasoning_effort field being overwritten for openai-compatible providers
Update reasoning effort logic to handle Doubao seed models after 251015 and standardize field naming

* fix(reasoning): update Doubao model regex patterns and tests

Update regex patterns for Doubao model validation to correctly handle version constraints
Add comprehensive test cases for model validation functions

(cherry picked from commit 06b1ae0cb8)
2025-10-22 15:44:04 +08:00
kangfenmao
0d86e16e28 chore: bump version to v1.6.5 2025-10-18 23:28:18 +08:00
kangfenmao
a8600a5426 feat: add CherryIN provider option to AddProviderPopup (#10803) 2025-10-18 23:20:39 +08:00
kangfenmao
e208d60c10 feat: add SWR library for data fetching (#10802) 2025-10-18 23:04:58 +08:00
SuYao
6968302961 feat: add Claude Haiku 4.5 model support and update related regex patterns (#10800)
* feat: add Claude Haiku 4.5 model support and update related regex patterns

* fix: update Claude model token limits for consistency
2025-10-18 23:04:58 +08:00
SuYao
dd65fa2f71 fix: handle AISDKError in chunk processing (#10801) 2025-10-18 23:04:58 +08:00
SuYao
679043f26b feat: add Mistral provider configuration to AI Providers (#10795) 2025-10-18 23:04:58 +08:00
Phantom
6a1fb9bc7e fix(message): adjust layout and overflow properties for better display (#10746)
* style(CodeBlockView): reduce min-width from 45ch to 35ch to fix layout issues

* style(messages): adjust overflow properties and clean up commented code

Remove commented overflow properties and adjust overflow behavior for better scroll handling in message containers

* style: remove commented overflow css properties
2025-10-18 23:04:58 +08:00
Kejiang Ma
07619c11e5 feat: new built-in OCR provider -> intel OV(NPU) OCR (#10737)
* new built-in OCR provider: intel OV

Signed-off-by: Ma, Kejiang <kj.ma@intel.com>
Signed-off-by: Kejiang Ma <kj.ma@intel.com>

* updated based on PR's comments

Signed-off-by: Kejiang Ma <kj.ma@intel.com>

* feat(OcrImageSettings): use swr to fetch available providers

Add loading state and error handling when fetching available OCR providers. Display an alert when provider loading fails, showing the error message. Also optimize provider filtering logic using useMemo.

* refactor(ocr): rename providers to listProviders for consistency

Update method name to better reflect its functionality and maintain naming consistency across the codebase

---------

Signed-off-by: Ma, Kejiang <kj.ma@intel.com>
Signed-off-by: Kejiang Ma <kj.ma@intel.com>
Co-authored-by: icarus <eurfelux@gmail.com>
2025-10-18 23:04:58 +08:00
beyondkmp
b71ea99738 feat: add Greek language option to spell checker options (#10793)
feat: add Greek language option to GeneralSettings component

- Added support for Greek (Ελληνικά) language in the language selection dropdown of the GeneralSettings component.
2025-10-18 23:04:58 +08:00
beyondkmp
5a29e45b88 fix: prevent default behavior for Cmd/Ctrl+F in WebviewService (#10783)
fix: prevent default behavior for Cmd/Ctrl+F in WebviewService (#10800)

Updated the keyboard handler in WebviewService to always prevent the default action for the Cmd/Ctrl+F shortcut, ensuring it overrides the guest page's native find dialog. This change allows the renderer to manage the behavior of Escape and Enter keys based on the visibility of the search bar.
2025-10-18 23:04:50 +08:00
beyondkmp
45195bb57d fix: guard webview search against destroyed webviews (#10704)
* 🐛 fix: guard webview search against destroyed webviews

* delete code

* delete code
2025-10-18 23:04:50 +08:00
beyondkmp
bb003c071c fix: intercept webview keyboard shortcuts for search functionality (#10641)
* feat: intercept webview keyboard shortcuts for search functionality

Implemented keyboard shortcut interception in webview to enable search functionality (Ctrl/Cmd+F) and navigation (Enter/Escape) within mini app pages. Previously, these shortcuts were consumed by the webview content and not propagated to the host application.

Changes:
- Added Webview_SearchHotkey IPC channel for forwarding keyboard events
- Implemented before-input-event handler in WebviewService to intercept Ctrl/Cmd+F, Escape, and Enter
- Extended preload API with onFindShortcut callback for webview shortcut events
- Updated WebviewSearch component to handle shortcuts from both window and webview
- Added comprehensive test coverage for webview shortcut handling

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix lint

* refactor: improve webview hotkey initialization and error handling

Refactored webview keyboard shortcut handler for better code organization and reliability.

Changes:
- Extracted keyboard handler logic into reusable attachKeyboardHandler function
- Added initWebviewHotkeys() to initialize handlers for existing webviews on startup
- Integrated initialization in main app entry point
- Added explanatory comment for event.preventDefault() behavior
- Added warning log when webContentsId is unavailable in WebviewSearch

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* feat: add WebviewKeyEvent type and update related components

- Introduced WebviewKeyEvent type to standardize keyboard event handling for webviews.
- Updated preload index to utilize the new WebviewKeyEvent type in the onFindShortcut callback.
- Refactored WebviewSearch component and its tests to accommodate the new type, enhancing type safety and clarity.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

* fix lint

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-10-18 22:51:34 +08:00
Kejiang Ma
80c0fca963 feat: update and download ovms to 2025.3 official release from offici… (#10603)
* feat: update and download ovms to 2025.3 official release from official site.

Signed-off-by: Kejiang Ma <kj.ma@intel.com>

* fix UI text

Signed-off-by: Kejiang Ma <kj.ma@intel.com>

---------

Signed-off-by: Kejiang Ma <kj.ma@intel.com>
2025-10-18 22:50:15 +08:00
defi-failure
c5629da46d feat: notes full text search (#10640)
* feat: notes full text search initial commit

* fix: update highlight overlay when scroll

* fix: reset note search result properly

* refactor: extract scrollToLine logic from CodeEditor into a custom hook

* fix: hide match overlay when overlap

* fix: truncate line with ellipsis around search match for better visibility

* fix: unified note search match highlight style
2025-10-18 22:50:04 +08:00
SongSong
8b16170b16 feat: add built-in DiDi MCP server integration (#10318)
* feat: add built-in DiDi MCP server integration

- Add DiDi MCP server implementation with ride-hailing services
- Support map search, price estimation, order management, and driver tracking
- Add multilingual translations for DiDi MCP server descriptions
- Available only in mainland China, requires DIDI_API_KEY environment variable

* fix: resolve code formatting issues in DiDi MCP server

fixes code formatting issues in the DiDi MCP server implementation to resolve CI format check failures.

---------

Co-authored-by: BillySong <billysongli@didiglobal.com>
2025-10-18 22:49:50 +08:00
亢奋猫
182321a1bd fix: update default enableTopP setting to false in AssistantModelSett… (#10754)
fix: update default enableTopP setting to false in AssistantModelSettings and DefaultAssistantSettings

- Changed default value of enableTopP from true to false in AssistantModelSettings and DefaultAssistantSettings components.
- Updated related logic to ensure consistent behavior across settings.
2025-10-18 22:49:38 +08:00
Pleasure1234
cacabfa56d fix: add array checks for knowledge and memories in citations (#10778)
Updated formatCitationsFromBlock to verify that 'knowledge' and 'memories' are arrays before accessing their length and mapping over them. This prevents potential runtime errors if these properties are not arrays.
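
The guard pattern in miniature (property names follow the commit message; the surrounding shape is illustrative):

```ts
function collectCitationSources(block: { knowledge?: unknown; memories?: unknown }): unknown[] {
  // Fall back to [] whenever the property is missing or not an array
  const knowledge = Array.isArray(block.knowledge) ? block.knowledge : []
  const memories = Array.isArray(block.memories) ? block.memories : []
  return [...knowledge, ...memories]
}
```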
2025-10-18 22:49:24 +08:00
Shemol
aaef458010 fix: preserve spaces in API keys; update i18n tips to use commas or newlines (#10751)
fix(api): preserve spaces in API keys; i18n: clarify tips

Tips now say "Use commas to separate multiple keys." Full-width commas are auto-normalized.
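
A plausible sketch of the described parsing (illustrative; only the behavior named in the commit is shown):

```ts
function parseApiKeys(raw: string): string[] {
  return raw
    .replace(/，/g, ',') // auto-normalize full-width commas
    .split(/[,\n]/) // commas or newlines separate keys; spaces do not
    .map((key) => key.trim()) // trim only the outer whitespace of each key
    .filter((key) => key.length > 0)
}

parseApiKeys('sk-aaa, sk-bbb\nsk with spaces') // ['sk-aaa', 'sk-bbb', 'sk with spaces']
```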
2025-10-18 22:49:16 +08:00
Pleasure1234
43dff80211 fix: ensure API key rotation for each request (#10776)
Updated ModernAiProvider to regenerate config on every request, ensuring API key rotation is effective. Refactored BaseApiClient to use an API key getter for dynamic key retrieval, supporting key rotation when multiple keys are configured.
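
A minimal sketch of getter-based rotation (class and field names are illustrative, not the real BaseApiClient):

```ts
class RotatingKeyProvider {
  private index = 0
  constructor(private readonly keys: string[]) {}

  // Evaluated on every access, so each request can pick up the next key
  get apiKey(): string {
    const key = this.keys[this.index]
    this.index = (this.index + 1) % this.keys.length
    return key
  }
}

const provider = new RotatingKeyProvider(['key-a', 'key-b'])
provider.apiKey // 'key-a'
provider.apiKey // 'key-b': the next request rotates to the next key
```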
2025-10-18 22:49:02 +08:00
kangfenmao
1dfae45a12 fix: update Aihubmix auth URL to use console domain 2025-10-18 22:48:48 +08:00
George·Dong
26c7dd9976 fix(minapps): can't open links in external browser when using tab navigation (#10669)
* fix(minapps): can't open links in external browser when using tab navigation

* fix(minapps): stabilize webview navigation and add logging

* fix(minapps): debounce nav updates and robust webview attach
2025-10-18 22:48:41 +08:00
Phantom
5ce9209334 fix(translate): auto copy failed (#10745)
* fix(translate): auto copy failed

Because translatedContent may be stale

* refactor(translate): improve copy functionality dependency handling

Update copy callback dependencies to include setCopied and ensure proper memoization
Fix onCopy and translateText dependencies to include copy function
2025-10-18 22:47:51 +08:00
Calcium-Ion
0502ff48f1 feat: support NewAPI as a generic provider type (#10696)
* feat: add support for New API providerType

* feat: support New API as a generic painting provider

* refactor: update styling in painting pages to use Tailwind classes

- Replaced inline styles with Tailwind CSS classes for margin adjustments in AihubmixPage, DmxapiPage, SiliconPage, TokenFluxPage, and ZhipuPage.
- Enhanced consistency and maintainability of the codebase by standardizing styling approach across components.
- Minor refactor in ProviderSelect component to support className prop for better styling flexibility.
2025-10-18 22:47:26 +08:00
SuYao
726b2570e2 Fix/aisdk error (#10563)
* Add syntax highlighting to AI SDK error cause display

- Parse and format error cause as JSON with syntax highlighting
- Use CodeStyleProvider context for consistent code styling
- Maintain plain text fallback for non-JSON content

* fix patch

* chore: yarn lock

* feat: provider-specific-error

* chore

* chore

* fix: handle JSON parsing errors in AiSdkErrorBase component

* fix: improve error message formatting in AiSdkToChunkAdapter

* fix: remove unused MarkdownContainer and update AiSdkErrorBase to use styled div
2025-10-18 22:31:20 +08:00
Chen Tao
a33e3971d9 fix: support gemini-2.5-image-flash (#10683) 2025-10-13 19:33:36 +08:00
defi-failure
98faa80835 chore: update SiliconFlow logo (#10684) 2025-10-13 19:33:36 +08:00
George·Dong
49deece835 feat(reasoning): add special handling for Grok 4 fast models & qwen3-omni/qwen3-vl (#10367)
* feat(reasoning): add special handling for Grok 4 fast models

* feat(models): add grok4_fast model and refine grok reasoning

* feat(reasoning): unify Grok reasoning handling and XAI params

* feat(models): Grok/qwen handling and XAI

* feat(models): recognize qwen3-vl thinking models and add sizes

* fix(reasoning): reasoning enabled for QwenAlwaysThink models

* feat(reasoning): enable reasoning for Grok 4 Fast models

* fix(reasoning): rename and correct Grok 4 Fast model checks

* fix: adjust Grok-4 Fast reasoning detection for OpenRouter

* fix(reasoning): exclude non-reasoning models from reasoning detection
2025-10-13 19:20:15 +08:00
Chen Tao
eccdd7643e fix: remove LRU for websearch rag (#10631) 2025-10-13 19:20:02 +08:00
kangfenmao
7c8894616c refactor: remove unused minapp configurations and logos
- Deleted references to HuggingChat, NamiAiSearch, and Hika from the minapps configuration.
- Cleaned up imports by removing associated logo imports for the removed apps.
2025-10-11 15:34:20 +08:00
kangfenmao
170632a199 chore: bump version to 1.6.4 and update release notes
- Updated version in package.json to 1.6.4.
- Revised release notes to reflect new features, bug fixes, and technical updates.
- Added new features including CherryIN provider, right-click context menu for notes, and search functionality in the mini app page.
- Fixed issues related to reasoning block insertion order, knowledge base deletion, and Qwen model URL configuration.
2025-10-11 14:21:06 +08:00
ABucket
cd5841cdd4 fix: Provider icons are not displayed after selecting SiliconFlow in the "images" page (#10620) 2025-10-11 14:21:06 +08:00
ABucket
763afc5ca2 fix: Quick Assistant fails to correctly inject variables in prompts (#10617) 2025-10-11 14:21:06 +08:00
ABucket
45f033ff4e fix: AI_TypeValidationError when calling Ling-1T model (#10622) 2025-10-11 14:21:06 +08:00
kangfenmao
f8fadcc73f fix: adjust overflow properties in MessageGroup component
- Changed overflow properties in the GridContainer styled component to improve layout handling. Overflow is now set to hidden for vertical alignment.
2025-10-11 14:01:32 +08:00
kangfenmao
a94e5dad5f feat: remove some minapp and update related configurations
- Introduced new app icon for Stepfun.
- Updated minapps configuration to include Stepfun with its logo and URL.
- Removed Yuewen app from configurations and translations.
- Updated translations for multiple languages to reflect the addition of Stepfun and removal of Yuewen.
- Incremented version in the store configuration and added migration logic for new provider integration.
2025-10-11 11:54:37 +08:00
kangfenmao
632fd4c567 chore: update @ai-sdk/google to version 2.0.17 and add corresponding patch 2025-10-11 11:43:29 +08:00
ABucket
401e17eb0e feat: allow right click to create note and folder (#10523)
* feat: allow right click to create note and folder

* fix: duplicate menu for notes or folder

* fix: create notes in folder when a folder is selected
2025-10-11 10:29:00 +08:00
beyondkmp
80fc118465 feat: support search in mini app page (#10609)
* feat: add webview find-in-page overlay

* 🐛 fix: reset webview search on tab change

* fix clear search issue

* 🐛 fix: rebind webview search events

* 🐛 fix: disable spellcheck in search input

* fix spellcheck

* 🐛 fix: webview search can now reopen after closing

Fixed an issue where the search overlay couldn't be reopened after closing.
The openSearch callback was unnecessarily depending on webviewRef.current,
causing event listener rebinding issues. Removed the redundant webviewRef
check as isWebviewReady is sufficient to ensure webview readiness.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Payne Fu <payne@Paynes-Mac-mini.rcoffice.ringcentral.com>
Co-authored-by: Payne Fu <payne@Paynes-MBP.rcoffice.ringcentral.com>
Co-authored-by: Claude <noreply@anthropic.com>
2025-10-11 10:28:51 +08:00
Tristan Zhang
9a8d7640f5 fix: insert reasoning block before the content block (#10545)
fix: always insert reasoning block before the content block
2025-10-11 10:28:43 +08:00
Chen Tao
2b3f6d5640 fix: knowledge base not delete and websearch rag error (#10595)
* fix: knowledge base not deleted

* fix: websearch rag error

* chore: add comment
2025-10-11 10:28:29 +08:00
beyondkmp
a2d81e6204 feat: add updating dialog in render (#10569)
* feat: replace update dialog handling with quit and install functionality

* refactor: remove App_ShowUpdateDialog and implement App_QuitAndInstall in IpcChannel
* update ipc.ts to handle quit and install action
* modify AppUpdater to include quitAndInstall method
* adjust preload index to invoke new quit and install action
* enhance AboutSettings to manage update dialog state and trigger quit and install

* fix(AboutSettings): handle null update info in update dialog state management

* fix(UpdateDialog): improve error handling during update installation and enhance release notes processing

* fix(AppUpdater): remove redundant assignment of releaseInfo after update download

* fix(IpcChannel): remove UpdateDownloadedCancelled enum value

* format code

* fix(UpdateDialog): enhance installation process with loading state and error handling

* update i18n

* fix(i18n): Auto update translations for PR #10569

* feat(UpdateAppButton): integrate UpdateDialog and update button functionality for better user experience

* fix(UpdateDialog): update installation handler to support async operation and ensure modal closes after installation

* refactor(AppUpdater.test): remove deprecated formatReleaseNotes tests to streamline test suite

* refactor(update-dialog): simplify dialog close handling

Replace onOpenChange with onClose prop to directly handle dialog closing
Remove redundant handleClose function and simplify button onPress handler

---------

Co-authored-by: GitHub Action <action@github.com>
Co-authored-by: icarus <eurfelux@gmail.com>
2025-10-11 10:27:52 +08:00
Tristan Zhang
b6107c5fb1 fix: change the url for qwen (#10584) 2025-10-11 10:27:42 +08:00
385 changed files with 13670 additions and 28503 deletions

.github/CODEOWNERS vendored
View File

@@ -1,4 +1,5 @@
/src/renderer/src/store/ @0xfullex
/src/renderer/src/databases/ @0xfullex
/src/main/services/ConfigManager.ts @0xfullex
/packages/shared/IpcChannel.ts @0xfullex
/src/main/ipc.ts @0xfullex
/src/main/ipc.ts @0xfullex

View File

@@ -1,252 +0,0 @@
default-mode:
  add:
  remove: [pull_request_target, issues]
labels:
  # <!-- [Ss]kip `LABEL` --> skips a label
  # <!-- [Rr]emove `LABEL` --> removes a label
  # skips and removes
  - name: skip all
    content:
    regexes: '[Ss]kip (?:[Aa]ll |)[Ll]abels?'
  - name: remove all
    content:
    regexes: '[Rr]emove (?:[Aa]ll |)[Ll]abels?'
  - name: skip kind/bug
    content:
    regexes: '[Ss]kip (?:[Ll]abels? |)(?:`|)kind/bug(?:`|)'
  - name: remove kind/bug
    content:
    regexes: '[Rr]emove (?:[Ll]abels? |)(?:`|)kind/bug(?:`|)'
  - name: skip kind/enhancement
    content:
    regexes: '[Ss]kip (?:[Ll]abels? |)(?:`|)kind/enhancement(?:`|)'
  - name: remove kind/enhancement
    content:
    regexes: '[Rr]emove (?:[Ll]abels? |)(?:`|)kind/enhancement(?:`|)'
  - name: skip kind/question
    content:
    regexes: '[Ss]kip (?:[Ll]abels? |)(?:`|)kind/question(?:`|)'
  - name: remove kind/question
    content:
    regexes: '[Rr]emove (?:[Ll]abels? |)(?:`|)kind/question(?:`|)'
  - name: skip area/Connectivity
    content:
    regexes: '[Ss]kip (?:[Ll]abels? |)(?:`|)area/Connectivity(?:`|)'
  - name: remove area/Connectivity
    content:
    regexes: '[Rr]emove (?:[Ll]abels? |)(?:`|)area/Connectivity(?:`|)'
  - name: skip area/UI/UX
    content:
    regexes: '[Ss]kip (?:[Ll]abels? |)(?:`|)area/UI/UX(?:`|)'
  - name: remove area/UI/UX
    content:
    regexes: '[Rr]emove (?:[Ll]abels? |)(?:`|)area/UI/UX(?:`|)'
  - name: skip kind/documentation
    content:
    regexes: '[Ss]kip (?:[Ll]abels? |)(?:`|)kind/documentation(?:`|)'
  - name: remove kind/documentation
    content:
    regexes: '[Rr]emove (?:[Ll]abels? |)(?:`|)kind/documentation(?:`|)'
  - name: skip client:linux
    content:
    regexes: '[Ss]kip (?:[Ll]abels? |)(?:`|)client:linux(?:`|)'
  - name: remove client:linux
    content:
    regexes: '[Rr]emove (?:[Ll]abels? |)(?:`|)client:linux(?:`|)'
  - name: skip client:mac
    content:
    regexes: '[Ss]kip (?:[Ll]abels? |)(?:`|)client:mac(?:`|)'
  - name: remove client:mac
    content:
    regexes: '[Rr]emove (?:[Ll]abels? |)(?:`|)client:mac(?:`|)'
  - name: skip client:win
    content:
    regexes: '[Ss]kip (?:[Ll]abels? |)(?:`|)client:win(?:`|)'
  - name: remove client:win
    content:
    regexes: '[Rr]emove (?:[Ll]abels? |)(?:`|)client:win(?:`|)'
  - name: skip sig/Assistant
    content:
    regexes: '[Ss]kip (?:[Ll]abels? |)(?:`|)sig/Assistant(?:`|)'
  - name: remove sig/Assistant
    content:
    regexes: '[Rr]emove (?:[Ll]abels? |)(?:`|)sig/Assistant(?:`|)'
  - name: skip sig/Data
    content:
    regexes: '[Ss]kip (?:[Ll]abels? |)(?:`|)sig/Data(?:`|)'
  - name: remove sig/Data
    content:
    regexes: '[Rr]emove (?:[Ll]abels? |)(?:`|)sig/Data(?:`|)'
  - name: skip sig/MCP
    content:
    regexes: '[Ss]kip (?:[Ll]abels? |)(?:`|)sig/MCP(?:`|)'
  - name: remove sig/MCP
    content:
    regexes: '[Rr]emove (?:[Ll]abels? |)(?:`|)sig/MCP(?:`|)'
  - name: skip sig/RAG
    content:
    regexes: '[Ss]kip (?:[Ll]abels? |)(?:`|)sig/RAG(?:`|)'
  - name: remove sig/RAG
    content:
    regexes: '[Rr]emove (?:[Ll]abels? |)(?:`|)sig/RAG(?:`|)'
  - name: skip lgtm
    content:
    regexes: '[Ss]kip (?:[Ll]abels? |)(?:`|)lgtm(?:`|)'
  - name: remove lgtm
    content:
    regexes: '[Rr]emove (?:[Ll]abels? |)(?:`|)lgtm(?:`|)'
  - name: skip License
    content:
    regexes: '[Ss]kip (?:[Ll]abels? |)(?:`|)License(?:`|)'
  - name: remove License
    content:
    regexes: '[Rr]emove (?:[Ll]abels? |)(?:`|)License(?:`|)'
  # `Dev Team`
  - name: Dev Team
    mode:
      add: [pull_request_target, issues]
    author_association:
      - COLLABORATOR
  # Area labels
  - name: area/Connectivity
    content: area/Connectivity
    regexes: '代理|[Pp]roxy'
    skip-if:
      - skip all
      - skip area/Connectivity
    remove-if:
      - remove all
      - remove area/Connectivity
  - name: area/UI/UX
    content: area/UI/UX
    regexes: '界面|[Uu][Ii]|重叠|按钮|图标|组件|渲染|菜单|栏目|头像|主题|样式|[Cc][Ss][Ss]'
    skip-if:
      - skip all
      - skip area/UI/UX
    remove-if:
      - remove all
      - remove area/UI/UX
  # Kind labels
  - name: kind/documentation
    content: kind/documentation
    regexes: '文档|教程|[Dd]oc(s|umentation)|[Rr]eadme'
    skip-if:
      - skip all
      - skip kind/documentation
    remove-if:
      - remove all
      - remove kind/documentation
  # Client labels
  - name: client:linux
    content: client:linux
    regexes: '(?:[Ll]inux|[Uu]buntu|[Dd]ebian)'
    skip-if:
      - skip all
      - skip client:linux
    remove-if:
      - remove all
      - remove client:linux
  - name: client:mac
    content: client:mac
    regexes: '(?:[Mm]ac|[Mm]acOS|[Oo]SX)'
    skip-if:
      - skip all
      - skip client:mac
    remove-if:
      - remove all
      - remove client:mac
  - name: client:win
    content: client:win
    regexes: '(?:[Ww]in|[Ww]indows)'
    skip-if:
      - skip all
      - skip client:win
    remove-if:
      - remove all
      - remove client:win
  # SIG labels
  - name: sig/Assistant
    content: sig/Assistant
    regexes: '快捷助手|[Aa]ssistant'
    skip-if:
      - skip all
      - skip sig/Assistant
    remove-if:
      - remove all
      - remove sig/Assistant
  - name: sig/Data
    content: sig/Data
    regexes: '[Ww]ebdav|坚果云|备份|同步|数据|Obsidian|Notion|Joplin|思源'
    skip-if:
      - skip all
      - skip sig/Data
    remove-if:
      - remove all
      - remove sig/Data
  - name: sig/MCP
    content: sig/MCP
    regexes: '[Mm][Cc][Pp]'
    skip-if:
      - skip all
      - skip sig/MCP
    remove-if:
      - remove all
      - remove sig/MCP
  - name: sig/RAG
    content: sig/RAG
    regexes: '知识库|[Rr][Aa][Gg]'
    skip-if:
      - skip all
      - skip sig/RAG
    remove-if:
      - remove all
      - remove sig/RAG
  # Other labels
  - name: lgtm
    content: lgtm
    regexes: '(?:[Ll][Gg][Tt][Mm]|[Ll]ooks [Gg]ood [Tt]o [Mm]e)'
    skip-if:
      - skip all
      - skip lgtm
    remove-if:
      - remove all
      - remove lgtm
  - name: License
    content: License
    regexes: '(?:[Ll]icense|[Cc]opyright|[Mm][Ii][Tt]|[Aa]pache)'
    skip-if:
      - skip all
      - skip License
    remove-if:
      - remove all
      - remove License

View File

@@ -3,6 +3,18 @@
1. Consider creating this PR as draft: https://github.com/CherryHQ/cherry-studio/blob/main/CONTRIBUTING.md
-->
<!--
⚠️ Important: Redux/IndexedDB Data-Changing Feature PRs Temporarily On Hold ⚠️
Please note: For our current development cycle, we are not accepting feature Pull Requests that introduce changes to Redux data models or IndexedDB schemas.
While we value your contributions, PRs of this nature will be blocked without merge. We welcome all other contributions (bug fixes, perf enhancements, docs, etc.). Thank you!
Once version 2.0.0 is released, we will resume reviewing feature PRs.
-->
### What this PR does
Before this PR:

View File

@@ -13,7 +13,7 @@ on:
jobs:
  auto-i18n:
    runs-on: ubuntu-latest
    if: github.event.pull_request.head.repo.full_name == 'CherryHQ/cherry-studio'
    if: github.event_name == 'workflow_dispatch' || github.event.pull_request.head.repo.full_name == 'CherryHQ/cherry-studio'
    name: Auto I18N
    permissions:
      contents: write
@@ -29,13 +29,14 @@ jobs:
        uses: actions/setup-node@v5
        with:
          node-version: 20
          package-manager-cache: false
      - name: 📦 Install dependencies in isolated directory
        run: |
          # Install the dependencies in a temporary directory
          mkdir -p /tmp/translation-deps
          cd /tmp/translation-deps
          echo '{"dependencies": {"openai": "^5.12.2", "cli-progress": "^3.12.0", "tsx": "^4.20.3", "@biomejs/biome": "2.2.4"}}' > package.json
          echo '{"dependencies": {"@cherrystudio/openai": "^6.5.0", "cli-progress": "^3.12.0", "tsx": "^4.20.3", "@biomejs/biome": "2.2.4"}}' > package.json
          npm install --no-package-lock
          # Set NODE_PATH so the project can find these dependencies

View File

@@ -16,13 +16,10 @@ on:
jobs:
  translate:
    if: |
      (github.event_name == 'issues')
      || (github.event_name == 'issue_comment' && github.event.sender.type != 'Bot')
      || (
        (github.event_name == 'pull_request_review' || github.event_name == 'pull_request_review_comment')
        && github.event.sender.type != 'Bot'
        && github.event.pull_request.head.repo.fork == false
      )
      (github.event_name == 'issues') ||
      (github.event_name == 'issue_comment' && github.event.sender.type != 'Bot') ||
      (github.event_name == 'pull_request_review' && github.event.sender.type != 'Bot') ||
      (github.event_name == 'pull_request_review_comment' && github.event.sender.type != 'Bot')
    runs-on: ubuntu-latest
    permissions:
      contents: read
@@ -45,7 +42,7 @@ jobs:
          # See: https://github.com/anthropics/claude-code-action/blob/main/docs/security.md
          github_token: ${{ secrets.TOKEN_GITHUB_WRITE }}
          allowed_non_write_users: "*"
          anthropic_api_key: ${{ secrets.CLAUDE_TRANSLATOR_APIKEY }}
          claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
          claude_args: "--allowed-tools Bash(gh issue:*),Bash(gh api:repos/*/issues:*),Bash(gh api:repos/*/pulls/*/reviews/*),Bash(gh api:repos/*/pulls/comments/*)"
          prompt: |
            You are a multilingual translation assistant. You need to respond to the following four kinds of events from GitHub webhooks:
@@ -108,5 +105,3 @@ jobs:
            Use the following command to fetch the full information:
            gh issue view ${{ github.event.issue.number }} --json title,body,comments
        env:
          ANTHROPIC_BASE_URL: ${{ secrets.CLAUDE_TRANSLATOR_BASEURL }}

View File

@@ -13,6 +13,7 @@ jobs:
    steps:
      - name: Delete merged branch
        uses: actions/github-script@v8
        continue-on-error: true
        with:
          script: |
            github.rest.git.deleteRef({

View File

@@ -0,0 +1,187 @@
name: GitHub Issue Tracker with Feishu Notification
on:
issues:
types: [opened]
schedule:
# Run every day at 8:30 Beijing Time (00:30 UTC)
- cron: '30 0 * * *'
workflow_dispatch:
jobs:
process-new-issue:
if: github.event_name == 'issues'
runs-on: ubuntu-latest
permissions:
issues: write
contents: read
id-token: write
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Check Beijing Time
id: check_time
run: |
# Get current time in Beijing timezone (UTC+8)
BEIJING_HOUR=$(TZ='Asia/Shanghai' date +%H)
BEIJING_MINUTE=$(TZ='Asia/Shanghai' date +%M)
echo "Beijing Time: ${BEIJING_HOUR}:${BEIJING_MINUTE}"
# Check if time is between 00:00 and 08:30
if [ $BEIJING_HOUR -lt 8 ] || ([ $BEIJING_HOUR -eq 8 ] && [ $BEIJING_MINUTE -le 30 ]); then
echo "should_delay=true" >> $GITHUB_OUTPUT
echo "⏰ Issue created during quiet hours (00:00-08:30 Beijing Time)"
echo "Will schedule notification for 08:30"
else
echo "should_delay=false" >> $GITHUB_OUTPUT
echo "✅ Issue created during active hours, will notify immediately"
fi
- name: Add pending label if in quiet hours
if: steps.check_time.outputs.should_delay == 'true'
uses: actions/github-script@v7
with:
script: |
github.rest.issues.addLabels({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
labels: ['pending-feishu-notification']
});
- name: Setup Node.js
if: steps.check_time.outputs.should_delay == 'false'
uses: actions/setup-node@v4
with:
node-version: '20'
- name: Process issue with Claude
if: steps.check_time.outputs.should_delay == 'false'
uses: anthropics/claude-code-action@main
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
allowed_non_write_users: "*"
anthropic_api_key: ${{ secrets.CLAUDE_TRANSLATOR_APIKEY }}
claude_args: "--allowed-tools Bash(gh issue:*),Bash(node scripts/feishu-notify.js)"
prompt: |
You are a GitHub issue automation assistant. Please complete the following tasks:
## Current Issue
- Issue number: #${{ github.event.issue.number }}
- Title: ${{ github.event.issue.title }}
- Author: ${{ github.event.issue.user.login }}
- URL: ${{ github.event.issue.html_url }}
- Body: ${{ github.event.issue.body }}
- Labels: ${{ join(github.event.issue.labels.*.name, ', ') }}
## Steps
1. **Analyze and summarize the issue**
Provide a concise summary in Simplified Chinese (2-3 sentences), covering:
- The main content of the issue
- The core request
- Any important technical details
2. **Send the Feishu notification**
Send the notification with the following command (note: ISSUE_SUMMARY must be wrapped in quotes):
```bash
ISSUE_URL="${{ github.event.issue.html_url }}" \
ISSUE_NUMBER="${{ github.event.issue.number }}" \
ISSUE_TITLE="${{ github.event.issue.title }}" \
ISSUE_AUTHOR="${{ github.event.issue.user.login }}" \
ISSUE_LABELS="${{ join(github.event.issue.labels.*.name, ',') }}" \
ISSUE_SUMMARY="<your generated Chinese summary>" \
node scripts/feishu-notify.js
```
## Notes
- The summary must be written in Simplified Chinese
- Escape special characters in ISSUE_SUMMARY correctly when passing it to the node command
- If the issue body is empty, still provide a brief note
Please begin!
env:
ANTHROPIC_BASE_URL: ${{ secrets.CLAUDE_TRANSLATOR_BASEURL }}
FEISHU_WEBHOOK_URL: ${{ secrets.FEISHU_WEBHOOK_URL }}
FEISHU_WEBHOOK_SECRET: ${{ secrets.FEISHU_WEBHOOK_SECRET }}
process-pending-issues:
if: github.event_name == 'schedule' || github.event_name == 'workflow_dispatch'
runs-on: ubuntu-latest
permissions:
issues: write
contents: read
id-token: write
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: '20'
- name: Process pending issues with Claude
uses: anthropics/claude-code-action@main
with:
anthropic_api_key: ${{ secrets.CLAUDE_TRANSLATOR_APIKEY }}
allowed_non_write_users: "*"
github_token: ${{ secrets.GITHUB_TOKEN }}
claude_args: "--allowed-tools Bash(gh issue:*),Bash(gh api:*),Bash(node scripts/feishu-notify.js)"
prompt: |
You are a GitHub issue automation assistant. Please complete the following tasks:
## Task
Process all GitHub issues that are awaiting a Feishu notification (issues labeled `pending-feishu-notification`).
## Steps
1. **Fetch the pending issues**
Use the following command to fetch all open issues labeled `pending-feishu-notification`:
```bash
gh api "repos/${{ github.repository }}/issues?labels=pending-feishu-notification&state=open"
```
2. **Summarize each issue**
For each issue found, provide a concise summary in Chinese (2-3 sentences), covering:
- The main content of the issue
- The core request
- Any important technical details
3. **Send the Feishu notifications**
For each issue, send a Feishu notification with the following command:
```bash
ISSUE_URL="<the issue's html_url>" \
ISSUE_NUMBER="<issue number>" \
ISSUE_TITLE="<issue title>" \
ISSUE_AUTHOR="<issue author>" \
ISSUE_LABELS="<comma-separated labels, excluding pending-feishu-notification>" \
ISSUE_SUMMARY="<your generated Chinese summary>" \
node scripts/feishu-notify.js
```
4. **Remove the label**
After a notification is sent successfully, remove the `pending-feishu-notification` label:
```bash
gh api -X DELETE repos/${{ github.repository }}/issues/<issue number>/labels/pending-feishu-notification
```
## Environment
- Repository: ${{ github.repository }}
- The Feishu webhook URL and secret are already configured as environment variables
## Notes
- If there are no pending issues, print a short message and finish
- When processing multiple issues, wait 2-3 seconds between issues to avoid API rate limiting
- If one issue fails, continue with the next one; do not abort the whole run
- All summaries must be written in Chinese (Simplified Chinese)
Please begin!
env:
ANTHROPIC_BASE_URL: ${{ secrets.CLAUDE_TRANSLATOR_BASEURL }}
FEISHU_WEBHOOK_URL: ${{ secrets.FEISHU_WEBHOOK_URL }}
FEISHU_WEBHOOK_SECRET: ${{ secrets.FEISHU_WEBHOOK_SECRET }}
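Both jobs above delegate delivery to `scripts/feishu-notify.js`, which this diff does not show. Below is a minimal TypeScript sketch of what such a script might look like, assuming Feishu's documented custom-bot signing scheme (base64-encoded HMAC-SHA256 keyed with `timestamp + "\n" + secret` over an empty message) and Node 20's global `fetch`; the message layout and error handling are invented.

```typescript
// Hypothetical sketch of scripts/feishu-notify.js; the real script may differ.
import { createHmac } from 'node:crypto'

const { FEISHU_WEBHOOK_URL, FEISHU_WEBHOOK_SECRET, ISSUE_URL, ISSUE_NUMBER, ISSUE_TITLE, ISSUE_AUTHOR, ISSUE_SUMMARY } =
  process.env

async function main(): Promise<void> {
  if (!FEISHU_WEBHOOK_URL) throw new Error('FEISHU_WEBHOOK_URL is required')

  // Feishu custom-bot signature: HMAC-SHA256 keyed with "<timestamp>\n<secret>" over an empty string, base64-encoded.
  const timestamp = Math.floor(Date.now() / 1000).toString()
  const sign = FEISHU_WEBHOOK_SECRET
    ? createHmac('sha256', `${timestamp}\n${FEISHU_WEBHOOK_SECRET}`).update('').digest('base64')
    : undefined

  const text = [`#${ISSUE_NUMBER} ${ISSUE_TITLE}`, `by ${ISSUE_AUTHOR}`, ISSUE_SUMMARY, ISSUE_URL]
    .filter(Boolean)
    .join('\n')

  const res = await fetch(FEISHU_WEBHOOK_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ timestamp, sign, msg_type: 'text', content: { text } })
  })
  if (!res.ok) throw new Error(`Feishu webhook returned HTTP ${res.status}`)
}

main().catch((err) => {
  console.error(err)
  process.exit(1)
})
```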


@@ -1,25 +0,0 @@
name: 'Issue Checker'
on:
issues:
types: [opened, edited]
pull_request_target:
types: [opened, edited]
issue_comment:
types: [created, edited]
permissions:
contents: read
issues: write
pull-requests: write
jobs:
triage:
runs-on: ubuntu-latest
steps:
- uses: MaaAssistantArknights/issue-checker@v1.14
with:
repo-token: '${{ secrets.GITHUB_TOKEN }}'
configuration-path: .github/issue-checker.yml
not-before: 2022-08-05T00:00:00Z
include-title: 1


@@ -29,8 +29,10 @@ jobs:
days-before-close: 0 # Close immediately after stale
stale-issue-label: 'inactive'
close-issue-label: 'closed:no-response'
exempt-all-milestones: true
exempt-all-assignees: true
stale-issue-message: |
This issue has been labeled as needing more information and has been inactive for ${{ env.daysBeforeStale }} days.
It will be closed now due to lack of additional information.
This issue was marked "needs more information" and has had no activity for ${{ env.daysBeforeStale }} days, so it will be closed immediately.
@@ -46,6 +48,8 @@ jobs:
days-before-stale: ${{ env.daysBeforeStale }}
days-before-close: ${{ env.daysBeforeClose }}
stale-issue-label: 'inactive'
exempt-all-milestones: true
exempt-all-assignees: true
stale-issue-message: |
This issue has been inactive for a prolonged period and will be closed automatically in ${{ env.daysBeforeClose }} days.
This issue has been idle for an extended period and will be closed automatically in ${{ env.daysBeforeClose }} days.

.gitignore

@@ -71,5 +71,3 @@ playwright-report
test-results
YOUR_MEMORY_FILE_PATH
.sessions/


@@ -117,7 +117,7 @@
"no-unused-expressions": "off", // this rule disallow us to use expression to call function, like `condition && fn()`
"no-unused-labels": "error",
"no-unused-private-class-members": "error",
"no-unused-vars": ["warn", { "caughtErrors": "none" }],
"no-unused-vars": ["error", { "caughtErrors": "none" }],
"no-useless-backreference": "error",
"no-useless-catch": "error",
"no-useless-escape": "error",

.vscode/settings.json

@@ -34,10 +34,10 @@
"*.css": "tailwindcss"
},
"files.eol": "\n",
// "i18n-ally.displayLanguage": "zh-cn", // 界面显示语言
"i18n-ally.displayLanguage": "zh-cn",
"i18n-ally.enabledFrameworks": ["react-i18next", "i18next"],
"i18n-ally.enabledParsers": ["ts", "js", "json"], // 解析语言
"i18n-ally.fullReloadOnChanged": true,
"i18n-ally.fullReloadOnChanged": true, // 界面显示语言
"i18n-ally.keystyle": "nested", // 翻译路径格式
"i18n-ally.localesPaths": ["src/renderer/src/i18n/locales"],
// "i18n-ally.namespace": true, // 开启命名空间
@@ -47,9 +47,5 @@
"search.exclude": {
"**/dist/**": true,
".yarn/releases/**": true
},
"tailwindCSS.classAttributes": [
"className",
"classNames",
]
}
}


@@ -1,8 +1,8 @@
diff --git a/dist/index.mjs b/dist/index.mjs
index 69ab1599c76801dc1167551b6fa283dded123466..f0af43bba7ad1196fe05338817e65b4ebda40955 100644
index b957cb824faa79cf01ba3a504f221870bd8e306a..4d71d30f655775d61537d9d8b73f6e17d41fa67e 100644
--- a/dist/index.mjs
+++ b/dist/index.mjs
@@ -477,7 +477,7 @@ function convertToGoogleGenerativeAIMessages(prompt, options) {
@@ -452,7 +452,7 @@ function convertToGoogleGenerativeAIMessages(prompt, options) {
// src/get-model-path.ts
function getModelPath(modelId) {


@@ -1,31 +0,0 @@
diff --git a/sdk.mjs b/sdk.mjs
index 461e9a2ba246778261108a682762ffcf26f7224e..44bd667d9f591969d36a105ba5eb8b478c738dd8 100644
--- a/sdk.mjs
+++ b/sdk.mjs
@@ -6215,7 +6215,7 @@ function createAbortController(maxListeners = DEFAULT_MAX_LISTENERS) {
}
// ../src/transport/ProcessTransport.ts
-import { spawn } from "child_process";
+import { fork } from "child_process";
import { createInterface } from "readline";
// ../src/utils/fsOperations.ts
@@ -6473,14 +6473,11 @@ class ProcessTransport {
const errorMessage = isNativeBinary(pathToClaudeCodeExecutable) ? `Claude Code native binary not found at ${pathToClaudeCodeExecutable}. Please ensure Claude Code is installed via native installer or specify a valid path with options.pathToClaudeCodeExecutable.` : `Claude Code executable not found at ${pathToClaudeCodeExecutable}. Is options.pathToClaudeCodeExecutable set?`;
throw new ReferenceError(errorMessage);
}
- const isNative = isNativeBinary(pathToClaudeCodeExecutable);
- const spawnCommand = isNative ? pathToClaudeCodeExecutable : executable;
- const spawnArgs = isNative ? args : [...executableArgs, pathToClaudeCodeExecutable, ...args];
- this.logForDebugging(isNative ? `Spawning Claude Code native binary: ${pathToClaudeCodeExecutable} ${args.join(" ")}` : `Spawning Claude Code process: ${executable} ${[...executableArgs, pathToClaudeCodeExecutable, ...args].join(" ")}`);
+ this.logForDebugging(`Forking Claude Code Node.js process: ${pathToClaudeCodeExecutable} ${args.join(" ")}`);
const stderrMode = env.DEBUG || stderr ? "pipe" : "ignore";
- this.child = spawn(spawnCommand, spawnArgs, {
+ this.child = fork(pathToClaudeCodeExecutable, args, {
cwd,
- stdio: ["pipe", "pipe", stderrMode],
+ stdio: stderrMode === "pipe" ? ["pipe", "pipe", "pipe", "ipc"] : ["pipe", "pipe", "ignore", "ipc"],
signal: this.abortController.signal,
env
});
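For context on the patch above: `spawn` launches an arbitrary executable, while `fork` runs a Node.js module and always opens an IPC channel, which is why the `stdio` array gains an explicit `"ipc"` slot. A minimal sketch of the resulting call shape; the script path and arguments are placeholders, not the SDK's actual values:

```typescript
import { fork } from 'node:child_process'

// Placeholder entry point and args; the SDK passes its own executable path and CLI flags.
const debugEnabled = Boolean(process.env.DEBUG)
const child = fork('./cli.js', ['--output-format', 'stream-json'], {
  // When stdio is given as an array to fork(), the "ipc" slot must be listed explicitly.
  stdio: debugEnabled ? ['pipe', 'pipe', 'pipe', 'ipc'] : ['pipe', 'pipe', 'ignore', 'ipc']
})

child.stdout?.on('data', (chunk: Buffer) => process.stdout.write(chunk))
```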


@@ -1,44 +0,0 @@
diff --git a/dist/index.js b/dist/index.js
index 53f411e55a4c9a06fd29bb4ab8161c4ad15980cd..71b91f196c8b886ed90dd237dec5625d79d5677e 100644
--- a/dist/index.js
+++ b/dist/index.js
@@ -12676,10 +12676,13 @@ var OpenAIResponsesLanguageModel = class {
}
});
} else if (value.item.type === "message") {
- controller.enqueue({
- type: "text-end",
- id: value.item.id
- });
+ // Fix for gpt-5-codex: use currentTextId to ensure text-end matches text-start
+ if (currentTextId) {
+ controller.enqueue({
+ type: "text-end",
+ id: currentTextId
+ });
+ }
currentTextId = null;
} else if (isResponseOutputItemDoneReasoningChunk(value)) {
const activeReasoningPart = activeReasoning[value.item.id];
diff --git a/dist/index.mjs b/dist/index.mjs
index 7719264da3c49a66c2626082f6ccaae6e3ef5e89..090fd8cf142674192a826148428ed6a0c4a54e35 100644
--- a/dist/index.mjs
+++ b/dist/index.mjs
@@ -12670,10 +12670,13 @@ var OpenAIResponsesLanguageModel = class {
}
});
} else if (value.item.type === "message") {
- controller.enqueue({
- type: "text-end",
- id: value.item.id
- });
+ // Fix for gpt-5-codex: use currentTextId to ensure text-end matches text-start
+ if (currentTextId) {
+ controller.enqueue({
+ type: "text-end",
+ id: currentTextId
+ });
+ }
currentTextId = null;
} else if (isResponseOutputItemDoneReasoningChunk(value)) {
const activeReasoningPart = activeReasoning[value.item.id];


@@ -1 +0,0 @@
CLAUDE.md

CLAUDE.md

@@ -1,51 +1,127 @@
# AI Assistant Guide
# CLAUDE.md
This file provides guidance to AI coding assistants when working with code in this repository. Adherence to these guidelines is crucial for maintaining code quality and consistency.
## Guiding Principles (MUST FOLLOW)
- **Keep it clear**: Write code that is easy to read, maintain, and explain.
- **Match the house style**: Reuse existing patterns, naming, and conventions.
- **Search smart**: Prefer `ast-grep` for semantic queries; fall back to `rg`/`grep` when needed.
- **Build with HeroUI**: Use HeroUI for every new UI component; never add `antd` or `styled-components`.
- **Log centrally**: Route all logging through `loggerService` with the right context—no `console.log`.
- **Research via subagent**: Lean on `subagent` for external docs, APIs, news, and references.
- **Seek review**: Ask a human developer to review substantial changes before merging.
- **Commit in rhythm**: Keep commits small, conventional, and emoji-tagged.
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Development Commands
- **Install**: `yarn install` - Install all project dependencies
- **Development**: `yarn dev` - Runs Electron app in development mode with hot reload
- **Debug**: `yarn debug` - Starts with debugging enabled, use `chrome://inspect` to attach debugger
- **Build Check**: `yarn build:check` - **REQUIRED** before commits (lint + test + typecheck)
- If you hit i18n sort issues, run `yarn sync:i18n` first to sync the template
- If you hit formatting issues, run `yarn format` first
- **Test**: `yarn test` - Run all tests (Vitest) across main and renderer processes
- **Single Test**:
- `yarn test:main` - Run tests for main process only
- `yarn test:renderer` - Run tests for renderer process only
- **Lint**: `yarn lint` - Fix linting issues and run TypeScript type checking
- **Format**: `yarn format` - Auto-format code using Biome
### Environment Setup
## Project Architecture
- **Prerequisites**: Node.js v22.x.x or higher, Yarn 4.9.1
- **Setup Yarn**: `corepack enable && corepack prepare yarn@4.9.1 --activate`
- **Install Dependencies**: `yarn install`
- **Add New Dependencies**: `yarn add -D` for renderer-specific dependencies, `yarn add` for others.
### Electron Structure
- **Main Process** (`src/main/`): Node.js backend with services (MCP, Knowledge, Storage, etc.)
- **Renderer Process** (`src/renderer/`): React UI with Redux state management
- **Preload Scripts** (`src/preload/`): Secure IPC bridge
### Development
### Key Components
- **AI Core** (`src/renderer/src/aiCore/`): Middleware pipeline for multiple AI providers.
- **Services** (`src/main/services/`): MCPService, KnowledgeService, WindowService, etc.
- **Build System**: Electron-Vite with experimental rolldown-vite, yarn workspaces.
- **State Management**: Redux Toolkit (`src/renderer/src/store/`) for predictable state.
- **UI Components**: HeroUI (`@heroui/*`) for all new UI elements.
- **Start Development**: `yarn dev` - Runs Electron app in development mode
- **Debug Mode**: `yarn debug` - Starts with debugging enabled, use chrome://inspect
### Testing & Quality
- **Run Tests**: `yarn test` - Runs all tests (Vitest)
- **Run E2E Tests**: `yarn test:e2e` - Playwright end-to-end tests
- **Type Check**: `yarn typecheck` - Checks TypeScript for both node and web
- **Lint**: `yarn lint` - ESLint with auto-fix
- **Format**: `yarn format` - Biome formatting
### Build & Release
- **Build**: `yarn build` - Builds for production (includes typecheck)
- **Platform-specific builds**:
- Windows: `yarn build:win`
- macOS: `yarn build:mac`
- Linux: `yarn build:linux`
## Architecture Overview
### Electron Multi-Process Architecture
- **Main Process** (`src/main/`): Node.js backend handling system integration, file operations, and services
- **Renderer Process** (`src/renderer/`): React-based UI running in Chromium
- **Preload Scripts** (`src/preload/`): Secure bridge between main and renderer processes
### Key Architectural Components
#### Main Process Services (`src/main/services/`)
- **MCPService**: Model Context Protocol server management
- **KnowledgeService**: Document processing and knowledge base management
- **FileStorage/S3Storage/WebDav**: Multiple storage backends
- **WindowService**: Multi-window management (main, mini, selection windows)
- **ProxyManager**: Network proxy handling
- **SearchService**: Full-text search capabilities
#### AI Core (`src/renderer/src/aiCore/`)
- **Middleware System**: Composable pipeline for AI request processing
- **Client Factory**: Supports multiple AI providers (OpenAI, Anthropic, Gemini, etc.)
- **Stream Processing**: Real-time response handling
#### State Management (`src/renderer/src/store/`)
- **Redux Toolkit**: Centralized state management
- **Persistent Storage**: Redux-persist for data persistence
- **Thunks**: Async actions for complex operations
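A minimal sketch of the store pattern described above; the slice, thunk, and fields are illustrative, not the app's actual state shape:

```typescript
import { configureStore, createAsyncThunk, createSlice } from '@reduxjs/toolkit'

// Illustrative thunk: the async work (an IPC call, a file read, ...) goes here.
const loadSettings = createAsyncThunk('settings/load', async () => ({ theme: 'dark' }))

const settings = createSlice({
  name: 'settings',
  initialState: { theme: 'light', loading: false },
  reducers: {},
  extraReducers: (builder) => {
    builder
      .addCase(loadSettings.pending, (state) => {
        state.loading = true
      })
      .addCase(loadSettings.fulfilled, (state, action) => {
        state.loading = false
        state.theme = action.payload.theme
      })
  }
})

export const store = configureStore({ reducer: { settings: settings.reducer } })
```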
#### Knowledge Management
- **Embeddings**: Vector search with multiple providers (OpenAI, Voyage, etc.)
- **OCR**: Document text extraction (system OCR, Doc2x, Mineru)
- **Preprocessing**: Document preparation pipeline
- **Loaders**: Support for various file formats (PDF, DOCX, EPUB, etc.)
### Build System
- **Electron-Vite**: Development and build tooling (v4.0.0)
- **Rolldown-Vite**: Using experimental rolldown-vite instead of standard vite
- **Workspaces**: Monorepo structure with `packages/` directory
- **Multiple Entry Points**: Main app, mini window, selection toolbar
- **Styled Components**: CSS-in-JS styling with SWC optimization
### Testing Strategy
- **Vitest**: Unit and integration testing
- **Playwright**: End-to-end testing
- **Component Testing**: React Testing Library
- **Coverage**: Available via `yarn test:coverage`
### Key Patterns
- **IPC Communication**: Secure main-renderer communication via preload scripts
- **Service Layer**: Clear separation between UI and business logic
- **Plugin Architecture**: Extensible via MCP servers and middleware
- **Multi-language Support**: i18n with dynamic loading
- **Theme System**: Light/dark themes with custom CSS variables
### UI Design
The project is in the process of migrating from antd & styled-components to HeroUI. Please use HeroUI to build UI components. The use of antd and styled-components is prohibited.
HeroUI Docs: https://www.heroui.com/docs/guide/introduction
## Logging Standards
### Usage
### Logging
```typescript
// Main process
import { loggerService } from '@logger'
const logger = loggerService.withContext('moduleName')
// Renderer: loggerService.initWindowSource('windowName') first
// Renderer process (set window source first)
loggerService.initWindowSource('windowName')
const logger = loggerService.withContext('moduleName')
// Logging
logger.info('message', CONTEXT)
logger.error('message', new Error('error'), CONTEXT)
```
### Log Levels (highest to lowest)
- `error` - Critical errors causing crash/unusable functionality
- `warn` - Potential issues that don't affect core functionality
- `info` - Application lifecycle and key user actions
- `verbose` - Detailed flow information for feature tracing
- `debug` - Development diagnostic info (not for production)
- `silly` - Extreme debugging, low-level information
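To make the levels concrete, a few invented examples that continue the logger set up in the snippet above:

```typescript
logger.error('Failed to create the main window', new Error('BrowserWindow crashed'))
logger.warn('Proxy unreachable, falling back to a direct connection')
logger.info('Knowledge base import started')
logger.verbose('Middleware pipeline step completed', { step: 'stream-transform' })
logger.debug('IPC payload received', { channel: 'mcp:list-servers' })
```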


@@ -65,7 +65,28 @@ The Test Plan aims to provide users with a more stable application experience an
### Other Suggestions
- **Contact Developers**: Before submitting a PR, you can contact the developers first to discuss or get help.
- **Become a Core Developer**: If you contribute to the project consistently, congratulations, you can become a core developer and gain project membership status. Please check our [Membership Guide](https://github.com/CherryHQ/community/blob/main/docs/membership.en.md).
## Important Contribution Guidelines & Focus Areas
Please review the following critical information before submitting your Pull Request:
### Temporary Restriction on Data-Changing Feature PRs 🚫
**Currently, we are NOT accepting feature Pull Requests that introduce changes to our Redux data models or IndexedDB schemas.**
Our core team is currently focused on significant architectural updates that involve these data structures. To ensure stability and focus during this period, contributions of this nature will be temporarily managed internally.
* **PRs that require changes to Redux state shape or IndexedDB schemas will be closed.**
* **This restriction is temporary and will be lifted with the release of `v2.0.0`.** You can track the progress of `v2.0.0` and its related discussions in [#10162](https://github.com/CherryHQ/cherry-studio/pull/10162).
We highly encourage contributions for:
* Bug fixes 🐞
* Performance improvements 🚀
* Documentation updates 📚
* Features that **do not** alter Redux data models or IndexedDB schemas (e.g., UI enhancements, new components, minor refactors). ✨
We appreciate your understanding and continued support during this important development phase. Thank you!
## Contact Us

LICENSE

@@ -1,42 +1,661 @@
**Licensing**
GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007
This project employs a **User-Segmented Dual Licensing** model.
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
**Core Principle:**
Preamble
* **Individual Users and Organizations with 10 or Fewer Individuals:** Governed by default under the **GNU Affero General Public License v3.0 (AGPLv3)**.
* **Organizations with More Than 10 Individuals:** **Must** obtain a **Commercial License**.
The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server software.
Definition: "10 or Fewer Individuals"
Refers to any organization (including companies, non-profits, government agencies, educational institutions, etc.) where the total number of individuals who can access, use, or in any way directly or indirectly benefit from the functionality of this software (Cherry Studio) does not exceed 10. This includes, but is not limited to, developers, testers, operations staff, end-users, and indirect users via integrated systems.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
our General Public Licenses are intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.
---
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
**1. Open Source License: AGPLv3 - For Individuals and Organizations of 10 or Fewer**
Developers that use our General Public Licenses protect your rights
with two steps: (1) assert copyright on the software, and (2) offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.
* If you are an individual user, or if your organization meets the "10 or Fewer Individuals" definition above, you are free to use, modify, and distribute Cherry Studio under the terms of the **AGPLv3**. The full text of the AGPLv3 can be found in the LICENSE file at [https://www.gnu.org/licenses/agpl-3.0.html](https://www.gnu.org/licenses/agpl-3.0.html).
* **Core Obligation:** A key requirement of the AGPLv3 is that if you modify Cherry Studio and make it available over a network, or distribute the modified version, you must provide the **complete corresponding source code** under the AGPLv3 license to the recipients. Even if you qualify under the "10 or Fewer Individuals" rule, if you wish to avoid this source code disclosure obligation, you will need to obtain a Commercial License (see below).
* Please read and understand the full terms of the AGPLv3 carefully before use.
A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate. Many developers of free software are heartened and
encouraged by the resulting cooperation. However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.
**2. Commercial License - For Organizations with More Than 10 Individuals, or Users Needing to Avoid AGPLv3 Obligations**
The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community. It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server. Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.
* **Mandatory Requirement:** If your organization does **not** meet the "10 or Fewer Individuals" definition above (i.e., 11 or more individuals can access, use, or benefit from the software), you **must** contact us to obtain and execute a Commercial License to use Cherry Studio.
* **Voluntary Option:** Even if your organization meets the "10 or Fewer Individuals" condition, if your intended use case **cannot comply with the terms of the AGPLv3** (particularly the obligations regarding **source code disclosure**), or if you require specific commercial terms **not offered** by the AGPLv3 (such as warranties, indemnities, or freedom from copyleft restrictions), you also **must** contact us to obtain and execute a Commercial License.
* **Common scenarios requiring a Commercial License include (but are not limited to):**
* Your organization has more than 10 individuals who can access, use, or benefit from the software.
* (Regardless of organization size) You wish to distribute a modified version of Cherry Studio but **do not want** to disclose the source code of your modifications under AGPLv3.
* (Regardless of organization size) You wish to provide a network service (SaaS) based on a modified version of Cherry Studio but **do not want** to provide the modified source code to users of the service under AGPLv3.
* (Regardless of organization size) Your corporate policies, client contracts, or project requirements prohibit the use of AGPLv3-licensed software or mandate closed-source distribution and confidentiality.
* The Commercial License grants you rights exempting you from AGPLv3 obligations (like source code disclosure) and may include additional commercial assurances.
* **Obtaining a Commercial License:** Please contact the Cherry Studio development team via email at **bd@cherry-ai.com** to discuss commercial licensing options.
An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals. This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing under
this license.
**3. Contributions**
The precise terms and conditions for copying, distribution and
modification follow.
* We welcome community contributions to Cherry Studio. All contributions submitted to this project are considered to be offered under the **AGPLv3** license.
* By submitting a contribution to this project (e.g., via a Pull Request), you agree to license your code under the AGPLv3 to the project and all its downstream users (regardless of whether those users ultimately operate under AGPLv3 or a Commercial License).
* You also understand and agree that your contribution may be included in distributions of Cherry Studio offered under our commercial license.
TERMS AND CONDITIONS
**4. Other Terms**
0. Definitions.
* The specific terms and conditions of the Commercial License are governed by the formal commercial license agreement signed by both parties.
* The project maintainers reserve the right to update this licensing policy (including the definition and threshold for user count) as needed. Updates will be communicated through official project channels (e.g., code repository, official website).
"This License" refers to version 3 of the GNU Affero General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Remote Network Interaction; Use with the GNU General Public License.
Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software. This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU Affero General Public License from time to time. Such new versions
will be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU Affero General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU Affero General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU Affero General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If your software can interact with users remotely through a computer
network, you should also make sure that it provides a way for users to
get its source. For example, if your program is a web application, its
interface could display a "Source" link that leads users to an archive
of the code. There are many ways you could offer source, and different
solutions will be better for different programs; see section 13 for the
specific requirements.
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU AGPL, see
<https://www.gnu.org/licenses/>.


@@ -37,7 +37,7 @@
<p align="center">English | <a href="./docs/README.zh.md">中文</a> | <a href="https://cherry-ai.com">Official Site</a> | <a href="https://docs.cherry-ai.com/cherry-studio-wen-dang/en-us">Documents</a> | <a href="./docs/dev.md">Development</a> | <a href="https://github.com/CherryHQ/cherry-studio/issues">Feedback</a><br></p>
<div align="center">
[![][deepwiki-shield]][deepwiki-link]
[![][twitter-shield]][twitter-link]
[![][discord-shield]][discord-link]
@@ -45,7 +45,7 @@
</div>
<div align="center">
[![][github-release-shield]][github-release-link]
[![][github-nightly-shield]][github-nightly-link]
[![][github-contributors-shield]][github-contributors-link]
@@ -141,6 +141,7 @@ We're actively working on the following features and improvements:
- iOS App (Phase 1)
- Multi-Window support
- Window Pinning functionality
- Intel AI PC (Core Ultra) Support
4. 🔌 **Advanced Features**
@@ -247,10 +248,10 @@ The Enterprise Edition addresses core challenges in team collaboration by centra
| Feature | Community Edition | Enterprise Edition |
| :---------------- | :----------------------------------------- | :-------------------------------------------------------------------------------------------------------------------------------------- |
| **Open Source** | ✅ Yes | ⭕️ Partially released to customers |
| **Cost** | Free for Personal Use / Commercial License | Buyout / Subscription Fee |
| **Admin Backend** | — | ● Centralized **Model** Access<br>● **Employee** Management<br>● Shared **Knowledge Base**<br>● **Access** Control<br>● **Data** Backup |
| **Server** | — | ✅ Dedicated Private Deployment |
## Get the Enterprise Edition
@@ -286,6 +287,14 @@ We believe the Enterprise Edition will become your team's AI productivity engine
</picture>
</a>
# 📜 License
The Cherry Studio Community Edition is governed by the standard GNU Affero General Public License v3.0 (AGPL-3.0), available at https://www.gnu.org/licenses/agpl-3.0.html.
Use of the Cherry Studio Community Edition for commercial purposes is permitted, subject to full compliance with the terms and conditions of the AGPL-3.0 license.
Should you require a commercial license that provides an exemption from the AGPL-3.0 requirements, please contact us at bd@cherry-ai.com.
<!-- Links & Images -->
[deepwiki-shield]: https://img.shields.io/badge/Deepwiki-CherryHQ-0088CC?logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgMCAyNy45MyAzMiI+PHBhdGggZD0iTTE5LjMzIDE0LjEyYy42Ny0uMzkgMS41LS4zOSAyLjE4IDBsMS43NCAxYy4wNi4wMy4xMS4wNi4xOC4wN2guMDRjLjA2LjAzLjEyLjAzLjE4LjAzaC4wMmMuMDYgMCAuMTEgMCAuMTctLjAyaC4wM2MuMDYtLjAyLjEyLS4wNS4xNy0uMDhoLjAybDMuNDgtMi4wMWMuMjUtLjE0LjQtLjQxLjQtLjdWOC40YS44MS44MSAwIDAgMC0uNC0uN2wtMy40OC0yLjAxYS44My44MyAwIDAgMC0uODEgMEwxOS43NyA3LjdoLS4wMWwtLjE1LjEyLS4wMi4wMnMtLjA3LjA5LS4xLjE0VjhhLjQuNCAwIDAgMC0uMDguMTd2LjA0Yy0uMDMuMDYtLjAzLjEyLS4wMy4xOXYyLjAxYzAgLjc4LS40MSAxLjQ5LTEuMDkgMS44OC0uNjcuMzktMS41LjM5LTIuMTggMGwtMS43NC0xYS42LjYgMCAwIDAtLjIxLS4wOGMtLjA2LS4wMS0uMTItLjAyLS4xOC0uMDJoLS4wM2MtLjA2IDAtLjExLjAxLS4xNy4wMmgtLjAzYy0uMDYuMDItLjEyLjA0LS4xNy4wN2gtLjAybC0zLjQ3IDIuMDFjLS4yNS4xNC0uNC40MS0uNC43VjE4YzAgLjI5LjE1LjU1LjQuN2wzLjQ4IDIuMDFoLjAyYy4wNi4wNC4xMS4wNi4xNy4wOGguMDNjLjA1LjAyLjExLjAzLjE3LjAzaC4wMmMuMDYgMCAuMTIgMCAuMTgtLjAyaC4wNGMuMDYtLjAzLjEyLS4wNS4xOC0uMDhsMS43NC0xYy42Ny0uMzkgMS41LS4zOSAyLjE3IDBzMS4wOSAxLjExIDEuMDkgMS44OHYyLjAxYzAgLjA3IDAgLjEzLjAyLjE5di4wNGMuMDMuMDYuMDUuMTIuMDguMTd2LjAycy4wOC4wOS4xMi4xM2wuMDIuMDJzLjA5LjA4LjE1LjExYzAgMCAuMDEgMCAuMDEuMDFsMy40OCAyLjAxYy4yNS4xNC41Ni4xNC44MSAwbDMuNDgtMi4wMWMuMjUtLjE0LjQtLjQxLjQtLjd2LTQuMDFhLjgxLjgxIDAgMCAwLS40LS43bC0zLjQ4LTIuMDFoLS4wMmMtLjA1LS4wNC0uMTEtLjA2LS4xNy0uMDhoLS4wM2EuNS41IDAgMCAwLS4xNy0uMDNoLS4wM2MtLjA2IDAtLjEyIDAtLjE4LjAyLS4wNy4wMi0uMTUuMDUtLjIxLjA4bC0xLjc0IDFjLS42Ny4zOS0xLjUuMzktMi4xNyAwYTIuMTkgMi4xOSAwIDAgMS0xLjA5LTEuODhjMC0uNzguNDItMS40OSAxLjA5LTEuODhaIiBzdHlsZT0iZmlsbDojNWRiZjlkIi8+PHBhdGggZD0ibS40IDEzLjExIDMuNDcgMi4wMWMuMjUuMTQuNTYuMTQuOCAwbDMuNDctMi4wMWguMDFsLjE1LS4xMi4wMi0uMDJzLjA3LS4wOS4xLS4xNGwuMDItLjAyYy4wMy0uMDUuMDUtLjExLjA3LS4xN3YtLjA0Yy4wMy0uMDYuMDMtLjEyLjAzLS4xOVYxMC40YzAtLjc4LjQyLTEuNDkgMS4wOS0xLjg4czEuNS0uMzkgMi4xOCAwbDEuNzQgMWMuMDcuMDQuMTQuMDcuMjEuMDguMDYuMDEuMTIuMDIuMTguMDJoLjAzYy4wNiAwIC4xMS0uMDEuMTctLjAyaC4wM2MuMDYtLjAyLjEyLS4wNC4xNy0uMDdoLjAybDMuNDctMi4wMmMuMjUtLjE0LjQtLjQxLjQtLjd2LTRhLjgxLjgxIDAgMCAwLS40LS43bC0zLjQ2LTJhLjgzLjgzIDAgMCAwLS44MSAwbC0zLjQ4IDIuMDFoLS4wMWwtLjE1LjEyLS4wMi4wMi0uMS4xMy0uMDIuMDJjLS4wMy4wNS0uMDUuMTEtLjA3LjE3di4wNGMtLjAzLjA2LS4wMy4xMi0uMDMuMTl2Mi4wMWMwIC43OC0uNDIgMS40OS0xLjA5IDEuODhzLTEuNS4zOS0yLjE4IDBsLTEuNzQtMWEuNi42IDAgMCAwLS4yMS0uMDhjLS4wNi0uMDEtLjEyLS4wMi0uMTgtLjAyaC0uMDNjLS4wNiAwLS4xMS4wMS0uMTcuMDJoLS4wM2MtLjA2LjAyLS4xMi4wNS0uMTcuMDhoLS4wMkwuNCA3LjcxYy0uMjUuMTQtLjQuNDEtLjQuNjl2NC4wMWMwIC4yOS4xNS41Ni40LjciIHN0eWxlPSJmaWxsOiM0NDY4YzQiLz48cGF0aCBkPSJtMTcuODQgMjQuNDgtMy40OC0yLjAxaC0uMDJjLS4wNS0uMDQtLjExLS4wNi0uMTctLjA4aC0uMDNhLjUuNSAwIDAgMC0uMTctLjAzaC0uMDNjLS4wNiAwLS4xMiAwLS4xOC4wMmgtLjA0Yy0uMDYuMDMtLjEyLjA1LS4xOC4wOGwtMS43NCAxYy0uNjcuMzktMS41LjM5LTIuMTggMGEyLjE5IDIuMTkgMCAwIDEtMS4wOS0xLjg4di0yLjAxYzAtLjA2IDAtLjEzLS4wMi0uMTl2LS4wNGMtLjAzLS4wNi0uMDUtLjExLS4wOC0uMTdsLS4wMi0uMDJzLS4wNi0uMDktLjEtLjEzTDguMjkgMTlzLS4wOS0uMDgtLjE1LS4xMWgtLjAxbC0zLjQ3LTIuMDJhLjgzLjgzIDAgMCAwLS44MSAwTC4zNyAxOC44OGEuODcuODcgMCAwIDAtLjM3LjcxdjQuMDFjMCAuMjkuMTUuNTUuNC43bDMuNDcgMi4wMWguMDJjLjA1LjA0LjExLjA2LjE3LjA4aC4wM2MuMDUuMDIuMTEuMDMuMTYuMDNoLjAzYy4wNiAwIC4xMiAwIC4xOC0uMDJoLjA0Yy4wNi0uMDMuMTItLjA1LjE4LS4wOGwxLjc0LTFjLjY3LS4zOSAxLjUtLjM5IDIuMTcgMHMxLjA5IDEuMTEgMS4wOSAxLjg4djIuMDFjMCAuMDcgMCAuMTMuMDIuMTl2LjA0Yy4wMy4wNi4wNS4xMS4wOC4xN2wuMDIuMDJzLjA2LjA5LjEuMTRsLjAyLjAycy4wOS4wOC4xNS4xMWguMDFsMy40OCAyLjAyYy4yNS4xNC41Ni4xNC44MSAwbDMuNDgtMi4wMWMuMjUtLjE0LjQtLjQxLjQtLjdWMjUuMmEuODEuODEgMCAwIDAtLjQtLjdaIiBzdHlsZT0iZmlsbDojNDI5M2Q5Ii8+PC9zdmc+

View File

@@ -69,7 +69,28 @@ git commit --signoff -m "Your commit message"
### Other Suggestions
- **Contact the developers**: Before submitting a PR, you can reach out to the developers first to discuss your ideas or get help.
- **Become a core developer**: If you can contribute to the project consistently, congratulations: you can become a core project developer and gain project member status. See our [membership guide](https://github.com/CherryHQ/community/blob/main/membership.md)
## Important Contribution Guidelines and Notes
Please be sure to read the following key information before submitting a Pull Request:
### 🚫 Temporary Restriction on Feature PRs Involving Data Changes
**We are currently not accepting feature Pull Requests that involve changes to the Redux data model or the IndexedDB schema.**
Our core team is currently focused on critical architectural updates and foundational work involving these data structures. To ensure stability and focus during this period, such contributions will be managed internally for the time being.
* **PRs that require changes to the Redux state structure or the IndexedDB schema will be closed.**
* **This restriction is temporary and will be lifted after the `v2.0.0` release.** You can track the progress of `v2.0.0` and the related discussion via issue [#10162](https://github.com/CherryHQ/cherry-studio/pull/10162).
We strongly encourage the following types of contributions:
* Bug fixes 🐞
* Performance improvements 🚀
* Documentation updates 📚
* Features that do not change the Redux data model or the IndexedDB schema (e.g. UI enhancements, new components, small refactors). ✨
Thank you for your understanding and continued support during this important phase of development!
## Contact Us

View File

@@ -9,6 +9,7 @@ electronLanguages:
- zh_CN # for macOS
- zh_TW # for macOS
- en # for macOS
- de
directories:
buildResources: build
@@ -126,60 +127,70 @@ artifactBuildCompleted: scripts/artifact-build-completed.js
releaseInfo:
releaseNotes: |
<!--LANG:en-->
What's New in v1.7.0-beta.2
What's New in v1.6.6
New Features:
- Session Settings: Manage session-specific settings and model configurations independently
- Notes Full-Text Search: Search across all notes with match highlighting
- Built-in DiDi MCP Server: Integration with DiDi ride-hailing services (China only)
- Intel OV OCR: Hardware-accelerated OCR using Intel NPU
- Auto-start API Server: Automatically starts when agents exist
Improvements:
- Agent model selection now requires explicit user choice
- Added Mistral AI provider support
- Added NewAPI generic provider support
- Improved navbar layout consistency across different modes
- Enhanced chat component responsiveness
- Better code block display on small screens
- Updated OVMS to 2025.3 official release
- Added Greek language support
Features:
- Add automatic update checks with interval support
- Add confirmation modal for activating protocol-installed MCP servers
- Add mobile app data restore functionality
- Add doubao_after_251015 reasoning model support
- Add cherryin provider type option
- Add German language support
- Enhance proxy bypass rules with comprehensive matching
- Enhance model capabilities with endpoint type validation for Gemini provider
Bug Fixes:
- Fixed GitHub Copilot gpt-5-codex streaming issues
- Fixed assistant creation failures
- Fixed translate auto-copy functionality
- Fixed miniapps external link opening
- Fixed message layout and overflow issues
- Fixed API key parsing to preserve spaces
- Fixed agent display in different navbar layouts
- Fix knowledge base AISDK error handling
- Fix toolchoice support for knowledge features
- Fix Claude 4.5 reasoning model getTopP logic
- Fix up-down button visibility issues
- Fix in-place editing save behavior
- Fix system prompt variables in quick assistant
- Fix URL context support for Gemini endpoint models
- Fix Silicon reasoning model handling
- Fix deep research model context and reasoning effort settings
- Fix file content paste via right-click
- Fix reranker API error response handling
- Fix UI layout for backup managers and navbar
- Fix aihubmix model routing rules
Improvements:
- Update LICENSE file with full GNU AGPL-3.0 text
- Improve GitHub workflows and CI/CD processes
- Update dependencies including Playwright testing framework
<!--LANG:zh-CN-->
v1.7.0-beta.2 新特性
v1.6.6 版本更新
新功能:
- 会话设置:独立管理会话特定的设置和模型配置
- 笔记全文搜索:跨所有笔记搜索并高亮匹配内容
- 内置滴滴 MCP 服务器:集成滴滴打车服务(仅限中国地区)
- Intel OV OCR:使用 Intel NPU 的硬件加速 OCR
- 自动启动 API 服务器:当存在 Agent 时自动启动
改进:
- Agent 模型选择现在需要用户显式选择
- 添加 Mistral AI 提供商支持
- 添加 NewAPI 通用提供商支持
- 改进不同模式下的导航栏布局一致性
- 增强聊天组件响应式设计
- 优化小屏幕代码块显示
- 更新 OVMS 至 2025.3 正式版
- 添加希腊语支持
功能:
- 新增自动更新检查和间隔支持
- 新增协议安装 MCP 服务器激活确认弹窗
- 新增移动应用数据恢复功能
- 新增 doubao_after_251015 推理模型支持
- 新增 cherryin 提供商类型选项
- 新增德语语言支持
- 增强代理绕过规则的全面匹配
- 增强 Gemini 提供商的端点类型验证和模型能力
问题修复:
- 修复 GitHub Copilot gpt-5-codex 流式传输问题
- 修复助手创建失败
- 修复翻译自动复制功能
- 修复小程序外部链接打开
- 修复消息布局和溢出问题
- 修复 API 密钥解析以保留空格
- 修复不同导航栏布局中的 Agent 显示
- 修复知识库 AISDK 错误处理
- 修复知识功能的工具选择支持
- 修复 Claude 4.5 推理模型的 getTopP 逻辑
- 修复上下按钮可见性问题
- 修复就地编辑保存行为
- 修复快速助手中的系统提示变量
- 修复 Gemini 端点模型的 URL 上下文支持
- 修复 Silicon 推理模型处理
- 修复深度研究模型的上下文和推理努力设置
- 修复右键粘贴文件内容功能
- 修复重排序器 API 错误响应处理
- 修复备份管理器和导航栏的 UI 布局
- 修复 aihubmix 模型路由规则
改进优化:
- 更新 LICENSE 文件为完整 GNU AGPL-3.0 文本
- 改进 GitHub 工作流和 CI/CD 流程
- 更新依赖项包括 Playwright 测试框架
<!--LANG:END-->

View File

@@ -88,7 +88,6 @@ export default defineConfig({
alias: {
'@renderer': resolve('src/renderer/src'),
'@shared': resolve('packages/shared'),
'@types': resolve('src/renderer/src/types'),
'@logger': resolve('src/renderer/src/services/LoggerService'),
'@mcp-trace/trace-core': resolve('packages/mcp-trace/trace-core'),
'@mcp-trace/trace-web': resolve('packages/mcp-trace/trace-web'),

View File

@@ -2,7 +2,6 @@ import tseslint from '@electron-toolkit/eslint-config-ts'
import eslint from '@eslint/js'
import eslintReact from '@eslint-react/eslint-plugin'
import { defineConfig } from 'eslint/config'
import importZod from 'eslint-plugin-import-zod'
import oxlint from 'eslint-plugin-oxlint'
import reactHooks from 'eslint-plugin-react-hooks'
import simpleImportSort from 'eslint-plugin-simple-import-sort'
@@ -12,12 +11,11 @@ export default defineConfig([
eslint.configs.recommended,
tseslint.configs.recommended,
eslintReact.configs['recommended-typescript'],
reactHooks.configs.flat.recommended,
reactHooks.configs['recommended-latest'],
{
plugins: {
'simple-import-sort': simpleImportSort,
'unused-imports': unusedImports,
'import-zod': importZod
'unused-imports': unusedImports
},
rules: {
'@typescript-eslint/explicit-function-return-type': 'off',
@@ -27,7 +25,6 @@ export default defineConfig([
'simple-import-sort/exports': 'error',
'unused-imports/no-unused-imports': 'error',
'@eslint-react/no-prop-types': 'error',
'import-zod/prefer-zod-namespace': 'error'
}
},
// Configuration for ensuring compatibility with the original ESLint(8.x) rules

View File

@@ -1,6 +1,6 @@
{
"name": "CherryStudio",
"version": "1.7.0-beta.2",
"version": "1.6.6",
"private": true,
"description": "A powerful AI assistant for producer.",
"main": "./out/main/index.js",
@@ -43,18 +43,15 @@
"release": "node scripts/version.js",
"publish": "yarn build:check && yarn release patch push",
"pulish:artifacts": "cd packages/artifacts && npm publish && cd -",
"agents:generate": "NODE_ENV='development' drizzle-kit generate --config src/main/services/agents/drizzle.config.ts",
"agents:push": "NODE_ENV='development' drizzle-kit push --config src/main/services/agents/drizzle.config.ts",
"agents:studio": "NODE_ENV='development' drizzle-kit studio --config src/main/services/agents/drizzle.config.ts",
"agents:drop": "NODE_ENV='development' drizzle-kit drop --config src/main/services/agents/drizzle.config.ts",
"generate:agents": "yarn workspace @cherry-studio/database agents",
"generate:icons": "electron-icon-builder --input=./build/logo.png --output=build",
"analyze:renderer": "VISUALIZER_RENDERER=true yarn build",
"analyze:main": "VISUALIZER_MAIN=true yarn build",
"typecheck": "concurrently -n \"node,web\" -c \"cyan,magenta\" \"npm run typecheck:node\" \"npm run typecheck:web\"",
"typecheck:node": "tsgo --noEmit -p tsconfig.node.json --composite false",
"typecheck:web": "tsgo --noEmit -p tsconfig.web.json --composite false",
"check:i18n": "dotenv -e .env -- tsx scripts/check-i18n.ts",
"sync:i18n": "dotenv -e .env -- tsx scripts/sync-i18n.ts",
"check:i18n": "tsx scripts/check-i18n.ts",
"sync:i18n": "tsx scripts/sync-i18n.ts",
"update:i18n": "dotenv -e .env -- tsx scripts/update-i18n.ts",
"auto:i18n": "dotenv -e .env -- tsx scripts/auto-translate-i18n.ts",
"update:languages": "tsx scripts/update-languages.ts",
@@ -68,7 +65,7 @@
"test:e2e": "yarn playwright test",
"test:lint": "oxlint --deny-warnings && eslint . --ext .js,.jsx,.cjs,.mjs,.ts,.tsx,.cts,.mts --cache",
"test:scripts": "vitest scripts",
"lint": "oxlint --fix && eslint . --ext .js,.jsx,.cjs,.mjs,.ts,.tsx,.cts,.mts --fix --cache && yarn typecheck && yarn check:i18n && yarn format:check",
"lint": "oxlint --fix && eslint . --ext .js,.jsx,.cjs,.mjs,.ts,.tsx,.cts,.mts --fix --cache && yarn typecheck && yarn check:i18n",
"format": "biome format --write && biome lint --write",
"format:check": "biome format && biome lint",
"prepare": "git config blame.ignoreRevsFile .git-blame-ignore-revs && husky",
@@ -78,7 +75,6 @@
"release:aicore": "yarn workspace @cherrystudio/ai-core version patch --immediate && yarn workspace @cherrystudio/ai-core npm publish --access public"
},
"dependencies": {
"@anthropic-ai/claude-agent-sdk": "patch:@anthropic-ai/claude-agent-sdk@npm%3A0.1.1#~/.yarn/patches/@anthropic-ai-claude-agent-sdk-npm-0.1.1-d937b73fed.patch",
"@libsql/client": "0.14.0",
"@libsql/win32-x64-msvc": "^0.4.7",
"@napi-rs/system-ocr": "patch:@napi-rs/system-ocr@npm%3A1.0.2#~/.yarn/patches/@napi-rs-system-ocr-npm-1.0.2-59e7a78e8b.patch",
@@ -90,8 +86,10 @@
"node-stream-zip": "^1.15.0",
"officeparser": "^4.2.0",
"os-proxy-config": "^1.1.2",
"qrcode.react": "^4.2.0",
"selection-hook": "^1.0.12",
"sharp": "^0.34.3",
"socket.io": "^4.8.1",
"swagger-jsdoc": "^6.2.8",
"swagger-ui-express": "^5.0.1",
"tesseract.js": "patch:tesseract.js@npm%3A6.0.1#~/.yarn/patches/tesseract.js-npm-6.0.1-2562a7e46d.patch",
@@ -101,10 +99,10 @@
"@agentic/exa": "^7.3.3",
"@agentic/searxng": "^7.3.3",
"@agentic/tavily": "^7.3.3",
"@ai-sdk/amazon-bedrock": "^3.0.35",
"@ai-sdk/google-vertex": "^3.0.40",
"@ai-sdk/mistral": "^2.0.19",
"@ai-sdk/perplexity": "^2.0.13",
"@ai-sdk/amazon-bedrock": "^3.0.29",
"@ai-sdk/google-vertex": "^3.0.33",
"@ai-sdk/mistral": "^2.0.17",
"@ai-sdk/perplexity": "^2.0.11",
"@ant-design/v5-patch-for-react-19": "^1.0.3",
"@anthropic-ai/sdk": "^0.41.0",
"@anthropic-ai/vertex-sdk": "patch:@anthropic-ai/vertex-sdk@npm%3A0.11.4#~/.yarn/patches/@anthropic-ai-vertex-sdk-npm-0.11.4-c19cb41edb.patch",
@@ -154,9 +152,7 @@
"@opentelemetry/sdk-trace-base": "^2.0.0",
"@opentelemetry/sdk-trace-node": "^2.0.0",
"@opentelemetry/sdk-trace-web": "^2.0.0",
"@opeoginni/github-copilot-openai-compatible": "patch:@opeoginni/github-copilot-openai-compatible@npm%3A0.1.18#~/.yarn/patches/@opeoginni-github-copilot-openai-compatible-npm-0.1.18-3f65760532.patch",
"@playwright/test": "^1.52.0",
"@radix-ui/react-context-menu": "^2.2.16",
"@reduxjs/toolkit": "^2.2.5",
"@shikijs/markdown-it": "^3.12.0",
"@swc/plugin-styled-components": "^8.0.4",
@@ -208,7 +204,6 @@
"@types/swagger-ui-express": "^4.1.8",
"@types/tinycolor2": "^1",
"@types/turndown": "^5.0.5",
"@types/uuid": "^10.0.0",
"@types/word-extractor": "^1",
"@typescript/native-preview": "latest",
"@uiw/codemirror-extensions-langs": "^4.25.1",
@@ -222,7 +217,7 @@
"@viz-js/lang-dot": "^1.0.5",
"@viz-js/viz": "^3.14.0",
"@xyflow/react": "^12.4.4",
"ai": "^5.0.68",
"ai": "^5.0.59",
"antd": "patch:antd@npm%3A5.27.0#~/.yarn/patches/antd-npm-5.27.0-aa91c36546.patch",
"archiver": "^7.0.1",
"async-mutex": "^0.5.0",
@@ -245,12 +240,9 @@
"docx": "^9.0.2",
"dompurify": "^3.2.6",
"dotenv-cli": "^7.4.2",
"drizzle-kit": "^0.31.4",
"drizzle-orm": "^0.44.5",
"electron": "37.6.0",
"electron-builder": "26.0.15",
"electron-devtools-installer": "^3.2.0",
"electron-reload": "^2.0.0-alpha.1",
"electron-store": "^8.2.0",
"electron-updater": "6.6.4",
"electron-vite": "4.0.0",
@@ -259,12 +251,10 @@
"emoji-picker-element": "^1.22.1",
"epub": "patch:epub@npm%3A1.3.0#~/.yarn/patches/epub-npm-1.3.0-8325494ffe.patch",
"eslint": "^9.22.0",
"eslint-plugin-import-zod": "^1.2.0",
"eslint-plugin-oxlint": "^1.15.0",
"eslint-plugin-react-hooks": "^7.0.0",
"eslint-plugin-react-hooks": "^5.2.0",
"eslint-plugin-simple-import-sort": "^12.1.1",
"eslint-plugin-unused-imports": "^4.1.4",
"express-validator": "^7.2.1",
"fast-diff": "^1.3.0",
"fast-xml-parser": "^5.2.0",
"fetch-socks": "1.3.2",
@@ -280,6 +270,7 @@
"husky": "^9.1.7",
"i18next": "^23.11.5",
"iconv-lite": "^0.6.3",
"ipaddr.js": "^2.2.0",
"isbinaryfile": "5.0.4",
"jaison": "^2.0.2",
"jest-styled-components": "^7.2.0",
@@ -297,15 +288,15 @@
"notion-helper": "^1.3.22",
"npx-scope-finder": "^1.2.0",
"openai": "patch:openai@npm%3A5.12.2#~/.yarn/patches/openai-npm-5.12.2-30b075401c.patch",
"oxlint": "^1.22.0",
"oxlint": "^1.15.0",
"oxlint-tsgolint": "^0.2.0",
"p-queue": "^8.1.0",
"pdf-lib": "^1.17.1",
"pdf-parse": "^1.1.1",
"playwright": "^1.52.0",
"playwright": "^1.55.1",
"proxy-agent": "^6.5.0",
"react": "^19.2.0",
"react-dom": "^19.2.0",
"react": "^19.0.0",
"react-dom": "^19.0.0",
"react-error-boundary": "^6.0.0",
"react-hotkeys-hook": "^4.6.1",
"react-i18next": "^14.1.2",
@@ -348,7 +339,7 @@
"typescript": "~5.8.2",
"undici": "6.21.2",
"unified": "^11.0.5",
"uuid": "^13.0.0",
"uuid": "^10.0.0",
"vite": "npm:rolldown-vite@latest",
"vitest": "^3.2.4",
"webdav": "^5.8.0",
@@ -372,7 +363,6 @@
"app-builder-lib@npm:26.0.13": "patch:app-builder-lib@npm%3A26.0.13#~/.yarn/patches/app-builder-lib-npm-26.0.13-a064c9e1d0.patch",
"app-builder-lib@npm:26.0.15": "patch:app-builder-lib@npm%3A26.0.15#~/.yarn/patches/app-builder-lib-npm-26.0.15-360e5b0476.patch",
"atomically@npm:^1.7.0": "patch:atomically@npm%3A1.7.0#~/.yarn/patches/atomically-npm-1.7.0-e742e5293b.patch",
"esbuild": "^0.25.0",
"file-stream-rotator@npm:^0.6.1": "patch:file-stream-rotator@npm%3A0.6.1#~/.yarn/patches/file-stream-rotator-npm-0.6.1-eab45fb13d.patch",
"libsql@npm:^0.4.4": "patch:libsql@npm%3A0.4.7#~/.yarn/patches/libsql-npm-0.4.7-444e260fb1.patch",
"node-abi": "4.12.0",
@@ -380,11 +370,10 @@
"openai@npm:^4.87.3": "patch:openai@npm%3A5.12.2#~/.yarn/patches/openai-npm-5.12.2-30b075401c.patch",
"pdf-parse@npm:1.1.1": "patch:pdf-parse@npm%3A1.1.1#~/.yarn/patches/pdf-parse-npm-1.1.1-04a6109b2a.patch",
"pkce-challenge@npm:^4.1.0": "patch:pkce-challenge@npm%3A4.1.0#~/.yarn/patches/pkce-challenge-npm-4.1.0-fbc51695a3.patch",
"tar-fs": "^2.1.4",
"undici": "6.21.2",
"vite": "npm:rolldown-vite@latest",
"tesseract.js@npm:*": "patch:tesseract.js@npm%3A6.0.1#~/.yarn/patches/tesseract.js-npm-6.0.1-2562a7e46d.patch",
"@ai-sdk/google@npm:2.0.20": "patch:@ai-sdk/google@npm%3A2.0.20#~/.yarn/patches/@ai-sdk-google-npm-2.0.20-b9102f9d54.patch"
"@ai-sdk/google@npm:2.0.17": "patch:@ai-sdk/google@npm%3A2.0.17#~/.yarn/patches/@ai-sdk-google-npm-2.0.17-fd88491de4.patch"
},
"packageManager": "yarn@4.9.1",
"lint-staged": {

View File

@@ -36,14 +36,14 @@
"ai": "^5.0.26"
},
"dependencies": {
"@ai-sdk/anthropic": "^2.0.27",
"@ai-sdk/azure": "^2.0.49",
"@ai-sdk/deepseek": "^1.0.23",
"@ai-sdk/openai": "^2.0.48",
"@ai-sdk/openai-compatible": "^1.0.22",
"@ai-sdk/anthropic": "^2.0.22",
"@ai-sdk/azure": "^2.0.42",
"@ai-sdk/deepseek": "^1.0.20",
"@ai-sdk/openai": "^2.0.42",
"@ai-sdk/openai-compatible": "^1.0.19",
"@ai-sdk/provider": "^2.0.0",
"@ai-sdk/provider-utils": "^3.0.12",
"@ai-sdk/xai": "^2.0.26",
"@ai-sdk/provider-utils": "^3.0.10",
"@ai-sdk/xai": "^2.0.23",
"zod": "^4.1.5"
},
"devDependencies": {

View File

@@ -1,7 +1,7 @@
import { anthropic } from '@ai-sdk/anthropic'
import { google } from '@ai-sdk/google'
import { openai } from '@ai-sdk/openai'
import { InferToolInput, InferToolOutput, type Tool } from 'ai'
import { InferToolInput, InferToolOutput } from 'ai'
import { ProviderOptionsMap } from '../../../options/types'
import { OpenRouterSearchConfig } from './openrouter'
@@ -15,13 +15,6 @@ export type AnthropicSearchConfig = NonNullable<Parameters<typeof anthropic.tool
export type GoogleSearchConfig = NonNullable<Parameters<typeof google.tools.googleSearch>[0]>
export type XAISearchConfig = NonNullable<ProviderOptionsMap['xai']['searchParameters']>
type NormalizeTool<T> = T extends Tool<infer INPUT, infer OUTPUT> ? Tool<INPUT, OUTPUT> : Tool<any, any>
type AnthropicWebSearchTool = NormalizeTool<ReturnType<typeof anthropic.tools.webSearch_20250305>>
type OpenAIWebSearchTool = NormalizeTool<ReturnType<typeof openai.tools.webSearch>>
type OpenAIChatWebSearchTool = NormalizeTool<ReturnType<typeof openai.tools.webSearchPreview>>
type GoogleWebSearchTool = NormalizeTool<ReturnType<typeof google.tools.googleSearch>>
/**
* The complete configuration object received when the plugin is initialized
*
@@ -66,7 +59,7 @@ export const DEFAULT_WEB_SEARCH_CONFIG: WebSearchPluginConfig = {
export type WebSearchToolOutputSchema = {
// Anthropic tools - manually defined
anthropic: InferToolOutput<AnthropicWebSearchTool>
anthropic: InferToolOutput<ReturnType<typeof anthropic.tools.webSearch_20250305>>
// OpenAI tools - based on their actual output
// TODO: the upstream definition is not well-typed; it is unknown
@@ -89,8 +82,8 @@ export type WebSearchToolOutputSchema = {
}
export type WebSearchToolInputSchema = {
anthropic: InferToolInput<AnthropicWebSearchTool>
openai: InferToolInput<OpenAIWebSearchTool>
google: InferToolInput<GoogleWebSearchTool>
'openai-chat': InferToolInput<OpenAIChatWebSearchTool>
anthropic: InferToolInput<ReturnType<typeof anthropic.tools.webSearch_20250305>>
openai: InferToolInput<ReturnType<typeof openai.tools.webSearch>>
google: InferToolInput<ReturnType<typeof google.tools.googleSearch>>
'openai-chat': InferToolInput<ReturnType<typeof openai.tools.webSearchPreview>>
}
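
For context, `InferToolInput` and `InferToolOutput` are type helpers exported by the `ai` package (v5) that extract a tool's input and output types from its definition. A minimal sketch of how they behave; the `weather` tool below is a hypothetical example, not part of this codebase:

import { tool, type InferToolInput, type InferToolOutput } from 'ai'
import { z } from 'zod'

// Hypothetical tool used only to illustrate the type helpers
const weather = tool({
  description: 'Look up the current temperature for a city',
  inputSchema: z.object({ city: z.string() }),
  execute: async ({ city }) => ({ city, tempC: 21 })
})

type WeatherInput = InferToolInput<typeof weather> // { city: string }
type WeatherOutput = InferToolOutput<typeof weather> // { city: string; tempC: number }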

View File

@@ -13,7 +13,7 @@ import { LanguageModelV2 } from '@ai-sdk/provider'
import { createXai } from '@ai-sdk/xai'
import { createOpenRouter } from '@openrouter/ai-sdk-provider'
import { customProvider, Provider } from 'ai'
import * as z from 'zod'
import { z } from 'zod'
/**
* Base provider IDs

View File

@@ -92,10 +92,6 @@ export enum IpcChannel {
// Python
Python_Execute = 'python:execute',
// agent messages
AgentMessage_PersistExchange = 'agent-message:persist-exchange',
AgentMessage_GetHistory = 'agent-message:get-history',
//copilot
Copilot_GetAuthMessage = 'copilot:get-auth-message',
Copilot_GetCopilotToken = 'copilot:get-copilot-token',
@@ -189,7 +185,6 @@ export enum IpcChannel {
File_ValidateNotesDirectory = 'file:validateNotesDirectory',
File_StartWatcher = 'file:startWatcher',
File_StopWatcher = 'file:stopWatcher',
File_ShowInFolder = 'file:showInFolder',
// file service
FileService_Upload = 'file-service:upload',
@@ -317,7 +312,6 @@ export enum IpcChannel {
ApiServer_Stop = 'api-server:stop',
ApiServer_Restart = 'api-server:restart',
ApiServer_GetStatus = 'api-server:get-status',
// NOTE: This API is not used.
ApiServer_GetConfig = 'api-server:get-config',
// Anthropic OAuth
@@ -349,5 +343,12 @@ export enum IpcChannel {
Ovms_StopOVMS = 'ovms:stop-ovms',
// CherryAI
Cherryai_GetSignature = 'cherryai:get-signature'
Cherryai_GetSignature = 'cherryai:get-signature',
// WebSocket
WebSocket_Start = 'webSocket:start',
WebSocket_Stop = 'webSocket:stop',
WebSocket_Status = 'webSocket:status',
WebSocket_SendFile = 'webSocket:send-file',
WebSocket_GetAllCandidates = 'webSocket:get-all-candidates'
}
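
The new WebSocket channels follow the app's existing Electron IPC pattern. A minimal sketch of how the main process might wire them up; the import path and the `webSocketService` object are assumptions for illustration, not code from this changeset:

import { ipcMain } from 'electron'
import { IpcChannel } from '@shared/IpcChannel' // assumed import path

// Assumed service shape: start/stop/getStatus mirror the channel names above
declare const webSocketService: {
  start(): Promise<void>
  stop(): Promise<void>
  getStatus(): { isRunning: boolean; port?: number; ip?: string; clientConnected: boolean }
}

ipcMain.handle(IpcChannel.WebSocket_Start, () => webSocketService.start())
ipcMain.handle(IpcChannel.WebSocket_Stop, () => webSocketService.stop())
ipcMain.handle(IpcChannel.WebSocket_Status, () => webSocketService.getStatus())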

View File

@@ -1,12 +0,0 @@
import type { SDKMessage } from '@anthropic-ai/claude-agent-sdk'
import type { ContentBlockParam } from '@anthropic-ai/sdk/resources/messages'
export type ClaudeCodeRawValue =
| {
type: string
session_id: string
slash_commands: string[]
tools: string[]
raw: Extract<SDKMessage, { type: 'system' }>
}
| ContentBlockParam

View File

@@ -1,170 +0,0 @@
/**
* @fileoverview Shared Anthropic AI client utilities for Cherry Studio
*
* This module provides functions for creating Anthropic SDK clients with different
* authentication methods (OAuth, API key) and building Claude Code system messages.
* It supports both standard Anthropic API and Anthropic Vertex AI endpoints.
*
* This shared module can be used by both main and renderer processes.
*/
import Anthropic from '@anthropic-ai/sdk'
import { TextBlockParam } from '@anthropic-ai/sdk/resources'
import { loggerService } from '@logger'
import { Provider } from '@types'
import type { ModelMessage } from 'ai'
const logger = loggerService.withContext('anthropic-sdk')
const defaultClaudeCodeSystemPrompt = `You are Claude Code, Anthropic's official CLI for Claude.`
const defaultClaudeCodeSystem: Array<TextBlockParam> = [
{
type: 'text',
text: defaultClaudeCodeSystemPrompt
}
]
/**
* Creates and configures an Anthropic SDK client based on the provider configuration.
*
* This function supports two authentication methods:
* 1. OAuth: Uses OAuth tokens passed as parameter
* 2. API Key: Uses traditional API key authentication
*
* For OAuth authentication, it includes Claude Code specific headers and beta features.
* For API key authentication, it uses the provider's configuration with custom headers.
*
* @param provider - The provider configuration containing authentication details
* @param oauthToken - Optional OAuth token for OAuth authentication
* @returns An initialized Anthropic or AnthropicVertex client
* @throws Error when OAuth token is not available for OAuth authentication
*
* @example
* ```typescript
* // OAuth authentication
* const oauthProvider = { authType: 'oauth' };
* const oauthClient = getSdkClient(oauthProvider, 'oauth-token-here');
*
* // API key authentication
* const apiKeyProvider = {
* authType: 'apikey',
* apiKey: 'your-api-key',
* apiHost: 'https://api.anthropic.com'
* };
* const apiKeyClient = getSdkClient(apiKeyProvider);
* ```
*/
export function getSdkClient(
provider: Provider,
oauthToken?: string | null,
extraHeaders?: Record<string, string | string[]>
): Anthropic {
if (provider.authType === 'oauth') {
if (!oauthToken) {
throw new Error('OAuth token is not available')
}
return new Anthropic({
authToken: oauthToken,
baseURL: 'https://api.anthropic.com',
dangerouslyAllowBrowser: true,
defaultHeaders: {
'Content-Type': 'application/json',
'anthropic-version': '2023-06-01',
'anthropic-beta':
'oauth-2025-04-20,claude-code-20250219,interleaved-thinking-2025-05-14,fine-grained-tool-streaming-2025-05-14',
'anthropic-dangerous-direct-browser-access': 'true',
'user-agent': 'claude-cli/1.0.118 (external, sdk-ts)',
'x-app': 'cli',
'x-stainless-retry-count': '0',
'x-stainless-timeout': '600',
'x-stainless-lang': 'js',
'x-stainless-package-version': '0.60.0',
'x-stainless-os': 'MacOS',
'x-stainless-arch': 'arm64',
'x-stainless-runtime': 'node',
'x-stainless-runtime-version': 'v22.18.0',
...extraHeaders
}
})
}
const baseURL =
provider.type === 'anthropic'
? provider.apiHost
: (provider.anthropicApiHost && provider.anthropicApiHost.trim()) || provider.apiHost
logger.debug('Anthropic API baseURL', { baseURL, providerId: provider.id })
if (provider.id === 'aihubmix') {
return new Anthropic({
apiKey: provider.apiKey,
baseURL,
dangerouslyAllowBrowser: true,
defaultHeaders: {
'anthropic-beta': 'output-128k-2025-02-19',
'APP-Code': 'MLTG2087',
...provider.extra_headers,
...extraHeaders
}
})
}
return new Anthropic({
apiKey: provider.apiKey,
authToken: provider.apiKey,
baseURL,
dangerouslyAllowBrowser: true,
defaultHeaders: {
'anthropic-beta': 'output-128k-2025-02-19',
...provider.extra_headers
}
})
}
/**
* Builds and prepends the Claude Code system message to user-provided system messages.
*
* This function ensures that all interactions with Claude include the official Claude Code
* system prompt, which identifies the assistant as "Claude Code, Anthropic's official CLI for Claude."
*
* The function handles three cases:
* 1. No system message provided: Returns only the default Claude Code system message
* 2. String system message: Converts to array format and prepends Claude Code message
* 3. Array system message: Checks if Claude Code message exists and prepends if missing
*
* @param system - Optional user-provided system message (string or TextBlockParam array)
* @returns Combined system message with Claude Code prompt prepended
*
*/
export function buildClaudeCodeSystemMessage(system?: string | Array<TextBlockParam>): Array<TextBlockParam> {
if (!system) {
return defaultClaudeCodeSystem
}
if (typeof system === 'string') {
if (system.trim() === defaultClaudeCodeSystemPrompt || system.trim() === '') {
return defaultClaudeCodeSystem
} else {
return [...defaultClaudeCodeSystem, { type: 'text', text: system }]
}
}
if (Array.isArray(system)) {
const firstSystem = system[0]
if (firstSystem.type === 'text' && firstSystem.text.trim() === defaultClaudeCodeSystemPrompt) {
return system
} else {
return [...defaultClaudeCodeSystem, ...system]
}
}
return defaultClaudeCodeSystem
}
export function buildClaudeCodeSystemModelMessage(system?: string | Array<TextBlockParam>): Array<ModelMessage> {
const textBlocks = buildClaudeCodeSystemMessage(system)
return textBlocks.map((block) => ({
role: 'system',
content: block.text
}))
}

View File

@@ -31,3 +31,16 @@ export type WebviewKeyEvent = {
shift: boolean
alt: boolean
}
export interface WebSocketStatusResponse {
isRunning: boolean
port?: number
ip?: string
clientConnected: boolean
}
export interface WebSocketCandidatesResponse {
host: string
interface: string
priority: number
}
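
As a rough usage sketch, a renderer could format the status payload after querying it over IPC; the helper below is illustrative only and not part of the diff:

// Hypothetical renderer-side formatter for WebSocketStatusResponse
function describeStatus(status: WebSocketStatusResponse): string {
  if (!status.isRunning) return 'WebSocket server is stopped'
  const address = `${status.ip ?? 'unknown-ip'}:${status.port ?? '?'}`
  return `Listening on ${address}, client connected: ${status.clientConnected}`
}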

View File

@@ -1,53 +0,0 @@
--> statement-breakpoint
CREATE TABLE `migrations` (
`version` integer PRIMARY KEY NOT NULL,
`tag` text NOT NULL,
`executed_at` integer NOT NULL
);
CREATE TABLE `agents` (
`id` text PRIMARY KEY NOT NULL,
`type` text NOT NULL,
`name` text NOT NULL,
`description` text,
`accessible_paths` text,
`instructions` text,
`model` text NOT NULL,
`plan_model` text,
`small_model` text,
`mcps` text,
`allowed_tools` text,
`configuration` text,
`created_at` text NOT NULL,
`updated_at` text NOT NULL
);
--> statement-breakpoint
CREATE TABLE `sessions` (
`id` text PRIMARY KEY NOT NULL,
`agent_type` text NOT NULL,
`agent_id` text NOT NULL,
`name` text NOT NULL,
`description` text,
`accessible_paths` text,
`instructions` text,
`model` text NOT NULL,
`plan_model` text,
`small_model` text,
`mcps` text,
`allowed_tools` text,
`configuration` text,
`created_at` text NOT NULL,
`updated_at` text NOT NULL
);
--> statement-breakpoint
CREATE TABLE `session_messages` (
`id` integer PRIMARY KEY AUTOINCREMENT NOT NULL,
`session_id` text NOT NULL,
`role` text NOT NULL,
`content` text NOT NULL,
`metadata` text,
`created_at` text NOT NULL,
`updated_at` text NOT NULL
);

View File

@@ -1 +0,0 @@
ALTER TABLE `session_messages` ADD `agent_session_id` text DEFAULT '';

View File

@@ -1,331 +0,0 @@
{
"version": "6",
"dialect": "sqlite",
"id": "35efb412-0230-4767-9c76-7b7c4d40369f",
"prevId": "00000000-0000-0000-0000-000000000000",
"tables": {
"agents": {
"name": "agents",
"columns": {
"id": {
"name": "id",
"type": "text",
"primaryKey": true,
"notNull": true,
"autoincrement": false
},
"type": {
"name": "type",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"name": {
"name": "name",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"description": {
"name": "description",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"accessible_paths": {
"name": "accessible_paths",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"instructions": {
"name": "instructions",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"model": {
"name": "model",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"plan_model": {
"name": "plan_model",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"small_model": {
"name": "small_model",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"mcps": {
"name": "mcps",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"allowed_tools": {
"name": "allowed_tools",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"configuration": {
"name": "configuration",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"created_at": {
"name": "created_at",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"updated_at": {
"name": "updated_at",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
}
},
"indexes": {},
"foreignKeys": {},
"compositePrimaryKeys": {},
"uniqueConstraints": {},
"checkConstraints": {}
},
"session_messages": {
"name": "session_messages",
"columns": {
"id": {
"name": "id",
"type": "integer",
"primaryKey": true,
"notNull": true,
"autoincrement": true
},
"session_id": {
"name": "session_id",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"role": {
"name": "role",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"content": {
"name": "content",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"metadata": {
"name": "metadata",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"created_at": {
"name": "created_at",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"updated_at": {
"name": "updated_at",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
}
},
"indexes": {},
"foreignKeys": {},
"compositePrimaryKeys": {},
"uniqueConstraints": {},
"checkConstraints": {}
},
"migrations": {
"name": "migrations",
"columns": {
"version": {
"name": "version",
"type": "integer",
"primaryKey": true,
"notNull": true,
"autoincrement": false
},
"tag": {
"name": "tag",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"executed_at": {
"name": "executed_at",
"type": "integer",
"primaryKey": false,
"notNull": true,
"autoincrement": false
}
},
"indexes": {},
"foreignKeys": {},
"compositePrimaryKeys": {},
"uniqueConstraints": {},
"checkConstraints": {}
},
"sessions": {
"name": "sessions",
"columns": {
"id": {
"name": "id",
"type": "text",
"primaryKey": true,
"notNull": true,
"autoincrement": false
},
"agent_type": {
"name": "agent_type",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"agent_id": {
"name": "agent_id",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"name": {
"name": "name",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"description": {
"name": "description",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"accessible_paths": {
"name": "accessible_paths",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"instructions": {
"name": "instructions",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"model": {
"name": "model",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"plan_model": {
"name": "plan_model",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"small_model": {
"name": "small_model",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"mcps": {
"name": "mcps",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"allowed_tools": {
"name": "allowed_tools",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"configuration": {
"name": "configuration",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"created_at": {
"name": "created_at",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"updated_at": {
"name": "updated_at",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
}
},
"indexes": {},
"foreignKeys": {},
"compositePrimaryKeys": {},
"uniqueConstraints": {},
"checkConstraints": {}
}
},
"views": {},
"enums": {},
"_meta": {
"schemas": {},
"tables": {},
"columns": {}
},
"internal": {
"indexes": {}
}
}

View File

@@ -1,339 +0,0 @@
{
"version": "6",
"dialect": "sqlite",
"id": "dabab6db-a2cd-4e96-b06e-6cb87d445a87",
"prevId": "35efb412-0230-4767-9c76-7b7c4d40369f",
"tables": {
"agents": {
"name": "agents",
"columns": {
"id": {
"name": "id",
"type": "text",
"primaryKey": true,
"notNull": true,
"autoincrement": false
},
"type": {
"name": "type",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"name": {
"name": "name",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"description": {
"name": "description",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"accessible_paths": {
"name": "accessible_paths",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"instructions": {
"name": "instructions",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"model": {
"name": "model",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"plan_model": {
"name": "plan_model",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"small_model": {
"name": "small_model",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"mcps": {
"name": "mcps",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"allowed_tools": {
"name": "allowed_tools",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"configuration": {
"name": "configuration",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"created_at": {
"name": "created_at",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"updated_at": {
"name": "updated_at",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
}
},
"indexes": {},
"foreignKeys": {},
"compositePrimaryKeys": {},
"uniqueConstraints": {},
"checkConstraints": {}
},
"session_messages": {
"name": "session_messages",
"columns": {
"id": {
"name": "id",
"type": "integer",
"primaryKey": true,
"notNull": true,
"autoincrement": true
},
"session_id": {
"name": "session_id",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"role": {
"name": "role",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"content": {
"name": "content",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"agent_session_id": {
"name": "agent_session_id",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false,
"default": "''"
},
"metadata": {
"name": "metadata",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"created_at": {
"name": "created_at",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"updated_at": {
"name": "updated_at",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
}
},
"indexes": {},
"foreignKeys": {},
"compositePrimaryKeys": {},
"uniqueConstraints": {},
"checkConstraints": {}
},
"migrations": {
"name": "migrations",
"columns": {
"version": {
"name": "version",
"type": "integer",
"primaryKey": true,
"notNull": true,
"autoincrement": false
},
"tag": {
"name": "tag",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"executed_at": {
"name": "executed_at",
"type": "integer",
"primaryKey": false,
"notNull": true,
"autoincrement": false
}
},
"indexes": {},
"foreignKeys": {},
"compositePrimaryKeys": {},
"uniqueConstraints": {},
"checkConstraints": {}
},
"sessions": {
"name": "sessions",
"columns": {
"id": {
"name": "id",
"type": "text",
"primaryKey": true,
"notNull": true,
"autoincrement": false
},
"agent_type": {
"name": "agent_type",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"agent_id": {
"name": "agent_id",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"name": {
"name": "name",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"description": {
"name": "description",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"accessible_paths": {
"name": "accessible_paths",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"instructions": {
"name": "instructions",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"model": {
"name": "model",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"plan_model": {
"name": "plan_model",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"small_model": {
"name": "small_model",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"mcps": {
"name": "mcps",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"allowed_tools": {
"name": "allowed_tools",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"configuration": {
"name": "configuration",
"type": "text",
"primaryKey": false,
"notNull": false,
"autoincrement": false
},
"created_at": {
"name": "created_at",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
},
"updated_at": {
"name": "updated_at",
"type": "text",
"primaryKey": false,
"notNull": true,
"autoincrement": false
}
},
"indexes": {},
"foreignKeys": {},
"compositePrimaryKeys": {},
"uniqueConstraints": {},
"checkConstraints": {}
}
},
"views": {},
"enums": {},
"_meta": {
"schemas": {},
"tables": {},
"columns": {}
},
"internal": {
"indexes": {}
}
}

View File

@@ -1,20 +0,0 @@
{
"version": "7",
"dialect": "sqlite",
"entries": [
{
"idx": 0,
"version": "6",
"when": 1758091173882,
"tag": "0000_confused_wendigo",
"breakpoints": true
},
{
"idx": 1,
"version": "6",
"when": 1758187378775,
"tag": "0001_woozy_captain_flint",
"breakpoints": true
}
]
}

View File

@@ -9,9 +9,8 @@ import * as path from 'path'
const localesDir = path.join(__dirname, '../src/renderer/src/i18n/locales')
const translateDir = path.join(__dirname, '../src/renderer/src/i18n/translate')
const baseLocale = process.env.BASE_LOCALE ?? 'zh-cn'
const baseLocale = 'zh-cn'
const baseFileName = `${baseLocale}.json`
const baseLocalePath = path.join(__dirname, '../src/renderer/src/i18n/locales', baseFileName)
type I18NValue = string | { [key: string]: I18NValue }
type I18N = { [key: string]: I18NValue }
@@ -106,9 +105,6 @@ const translateRecursively = async (originObj: I18N, systemPrompt: string): Prom
}
const main = async () => {
if (!fs.existsSync(baseLocalePath)) {
throw new Error(`${baseLocalePath} not found.`)
}
const localeFiles = fs
.readdirSync(localesDir)
.filter((file) => file.endsWith('.json') && file !== baseFileName)

View File

@@ -35,9 +35,6 @@ const allX64 = {
'@napi-rs/system-ocr-win32-x64-msvc': '1.0.2'
}
const claudeCodeVenderPath = '@anthropic-ai/claude-agent-sdk/vendor'
const claudeCodeVenders = ['arm64-darwin', 'arm64-linux', 'x64-darwin', 'x64-linux', 'x64-win32']
const platformToArch = {
mac: 'darwin',
windows: 'win32',
@@ -49,6 +46,9 @@ exports.default = async function (context) {
const archType = arch === Arch.arm64 ? 'arm64' : 'x64'
const platform = context.packager.platform.name
const arm64Filters = Object.keys(allArm64).map((f) => '!node_modules/' + f + '/**')
const x64Filters = Object.keys(allX64).map((f) => '!node_modules/' + f + '/*')
const downloadPackages = async (packages) => {
console.log('downloading packages ......')
const downloadPromises = []
@@ -67,39 +67,25 @@ exports.default = async function (context) {
await Promise.all(downloadPromises)
}
const changeFilters = async (filtersToExclude, filtersToInclude) => {
const changeFilters = async (packages, filtersToExclude, filtersToInclude) => {
await downloadPackages(packages)
// remove filters for the target architecture (allow inclusion)
let filters = context.packager.config.files[0].filter
filters = filters.filter((filter) => !filtersToInclude.includes(filter))
// add filters for other architectures (exclude them)
filters.push(...filtersToExclude)
context.packager.config.files[0].filter = filters
}
await downloadPackages(arch === Arch.arm64 ? allArm64 : allX64)
const arm64Filters = Object.keys(allArm64).map((f) => '!node_modules/' + f + '/**')
const x64Filters = Object.keys(allX64).map((f) => '!node_modules/' + f + '/*')
const excludeClaudeCodeRipgrepFilters = claudeCodeVenders
.filter((f) => f !== `${archType}-${platformToArch[platform]}`)
.map((f) => '!node_modules/' + claudeCodeVenderPath + '/ripgrep/' + f + '/**')
const excludeClaudeCodeJBPlutins = ['!node_modules/' + claudeCodeVenderPath + '/' + 'claude-code-jetbrains-plugin']
const includeClaudeCodeFilters = [
'!node_modules/' + claudeCodeVenderPath + '/ripgrep/' + `${archType}-${platformToArch[platform]}/**`
]
if (arch === Arch.arm64) {
await changeFilters(
[...x64Filters, ...excludeClaudeCodeRipgrepFilters, ...excludeClaudeCodeJBPlutins],
[...arm64Filters, ...includeClaudeCodeFilters]
)
} else {
await changeFilters(
[...arm64Filters, ...excludeClaudeCodeRipgrepFilters, ...excludeClaudeCodeJBPlutins],
[...x64Filters, ...includeClaudeCodeFilters]
)
await changeFilters(allArm64, x64Filters, arm64Filters)
return
}
if (arch === Arch.x64) {
await changeFilters(allX64, arm64Filters, x64Filters)
return
}
}

View File

@@ -4,7 +4,7 @@ import * as path from 'path'
import { sortedObjectByKeys } from './sort'
const translationsDir = path.join(__dirname, '../src/renderer/src/i18n/locales')
const baseLocale = process.env.BASE_LOCALE ?? 'zh-cn'
const baseLocale = 'zh-cn'
const baseFileName = `${baseLocale}.json`
const baseFilePath = path.join(translationsDir, baseFileName)

scripts/feishu-notify.js (new file, 228 additions)
View File

@@ -0,0 +1,228 @@
/**
* Feishu (Lark) Webhook Notification Script
* Sends GitHub issue summaries to Feishu with signature verification
*/
const crypto = require('crypto')
const https = require('https')
/**
* Generate Feishu webhook signature
* @param {string} secret - Feishu webhook secret
* @param {number} timestamp - Unix timestamp in seconds
* @returns {string} Base64 encoded signature
*/
function generateSignature(secret, timestamp) {
const stringToSign = `${timestamp}\n${secret}`
const hmac = crypto.createHmac('sha256', stringToSign)
return hmac.digest('base64')
}
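// Note: this matches Feishu's documented custom-bot signing scheme: the HMAC-SHA256
// key is `${timestamp}\n${secret}`, the signed message body is empty, and the digest
// is base64-encoded. A quick sanity check with made-up values:
//   const ts = 1700000000
//   const sig = crypto.createHmac('sha256', `${ts}\nmy-secret`).digest('base64')
//   // sig equals generateSignature('my-secret', ts)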
/**
* Send message to Feishu webhook
* @param {string} webhookUrl - Feishu webhook URL
* @param {string} secret - Feishu webhook secret
* @param {object} content - Message content
* @returns {Promise<void>}
*/
function sendToFeishu(webhookUrl, secret, content) {
return new Promise((resolve, reject) => {
const timestamp = Math.floor(Date.now() / 1000)
const sign = generateSignature(secret, timestamp)
const payload = JSON.stringify({
timestamp: timestamp.toString(),
sign: sign,
msg_type: 'interactive',
card: content
})
const url = new URL(webhookUrl)
const options = {
hostname: url.hostname,
path: url.pathname + url.search,
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Content-Length': Buffer.byteLength(payload)
}
}
const req = https.request(options, (res) => {
let data = ''
res.on('data', (chunk) => {
data += chunk
})
res.on('end', () => {
if (res.statusCode >= 200 && res.statusCode < 300) {
console.log('✅ Successfully sent to Feishu:', data)
resolve()
} else {
reject(new Error(`Feishu API error: ${res.statusCode} - ${data}`))
}
})
})
req.on('error', (error) => {
reject(error)
})
req.write(payload)
req.end()
})
}
/**
* Create Feishu card message from issue data
* @param {object} issueData - GitHub issue data
* @returns {object} Feishu card content
*/
function createIssueCard(issueData) {
const { issueUrl, issueNumber, issueTitle, issueSummary, issueAuthor, labels } = issueData
// Build labels section if labels exist
const labelElements =
labels && labels.length > 0
? labels.map((label) => ({
tag: 'markdown',
content: `\`${label}\``
}))
: []
return {
elements: [
{
tag: 'div',
text: {
tag: 'lark_md',
content: `**🐛 New GitHub Issue #${issueNumber}**`
}
},
{
tag: 'hr'
},
{
tag: 'div',
text: {
tag: 'lark_md',
content: `**📝 Title:** ${issueTitle}`
}
},
{
tag: 'div',
text: {
tag: 'lark_md',
content: `**👤 Author:** ${issueAuthor}`
}
},
...(labelElements.length > 0
? [
{
tag: 'div',
text: {
tag: 'lark_md',
content: `**🏷️ Labels:** ${labels.join(', ')}`
}
}
]
: []),
{
tag: 'hr'
},
{
tag: 'div',
text: {
tag: 'lark_md',
content: `**📋 Summary:**\n${issueSummary}`
}
},
{
tag: 'hr'
},
{
tag: 'action',
actions: [
{
tag: 'button',
text: {
tag: 'plain_text',
content: '🔗 View Issue'
},
type: 'primary',
url: issueUrl
}
]
}
],
header: {
template: 'blue',
title: {
tag: 'plain_text',
content: '🆕 Cherry Studio - New Issue'
}
}
}
}
/**
* Main function
*/
async function main() {
try {
// Get environment variables
const webhookUrl = process.env.FEISHU_WEBHOOK_URL
const secret = process.env.FEISHU_WEBHOOK_SECRET
const issueUrl = process.env.ISSUE_URL
const issueNumber = process.env.ISSUE_NUMBER
const issueTitle = process.env.ISSUE_TITLE
const issueSummary = process.env.ISSUE_SUMMARY
const issueAuthor = process.env.ISSUE_AUTHOR
const labelsStr = process.env.ISSUE_LABELS || ''
// Validate required environment variables
if (!webhookUrl) {
throw new Error('FEISHU_WEBHOOK_URL environment variable is required')
}
if (!secret) {
throw new Error('FEISHU_WEBHOOK_SECRET environment variable is required')
}
if (!issueUrl || !issueNumber || !issueTitle || !issueSummary) {
throw new Error('Issue data environment variables are required')
}
// Parse labels
const labels = labelsStr
? labelsStr
.split(',')
.map((l) => l.trim())
.filter(Boolean)
: []
// Create issue data object
const issueData = {
issueUrl,
issueNumber,
issueTitle,
issueSummary,
issueAuthor: issueAuthor || 'Unknown',
labels
}
// Create card content
const card = createIssueCard(issueData)
console.log('📤 Sending notification to Feishu...')
console.log(`Issue #${issueNumber}: ${issueTitle}`)
// Send to Feishu
await sendToFeishu(webhookUrl, secret, card)
console.log('✅ Notification sent successfully!')
} catch (error) {
console.error('❌ Error:', error.message)
process.exit(1)
}
}
// Run main function
main()

View File

@@ -5,7 +5,7 @@ import { sortedObjectByKeys } from './sort'
const localesDir = path.join(__dirname, '../src/renderer/src/i18n/locales')
const translateDir = path.join(__dirname, '../src/renderer/src/i18n/translate')
const baseLocale = process.env.BASE_LOCALE ?? 'zh-cn'
const baseLocale = 'zh-cn'
const baseFileName = `${baseLocale}.json`
const baseFilePath = path.join(localesDir, baseFileName)
@@ -13,45 +13,45 @@ type I18NValue = string | { [key: string]: I18NValue }
type I18N = { [key: string]: I18NValue }
/**
* 递归同步 target 对象,使其与 template 对象保持一致
* 1. 如果 template 中存在 target 中缺少的 key则添加'[to be translated]'
* 2. 如果 target 中存在 template 中不存在的 key则删除
* 3. 对于子对象,递归同步
* Recursively sync target object to match template object structure
* 1. Add keys that exist in template but missing in target (with '[to be translated]')
* 2. Remove keys that exist in target but not in template
* 3. Recursively sync nested objects
*
* @param target 目标对象(需要更新的语言对象)
* @param template 主模板对象(中文)
* @returns 返回是否对 target 进行了更新
* @param target Target object (language object to be updated)
* @param template Base locale object (Chinese)
* @returns Returns whether target was updated
*/
function syncRecursively(target: I18N, template: I18N): void {
// 添加 template 中存在但 target 中缺少的 key
// Add keys that exist in template but missing in target
for (const key in template) {
if (!(key in target)) {
target[key] =
typeof template[key] === 'object' && template[key] !== null ? {} : `[to be translated]:${template[key]}`
console.log(`添加新属性:${key}`)
console.log(`Added new property: ${key}`)
}
if (typeof template[key] === 'object' && template[key] !== null) {
if (typeof target[key] !== 'object' || target[key] === null) {
target[key] = {}
}
// 递归同步子对象
// Recursively sync nested objects
syncRecursively(target[key], template[key])
}
}
// 删除 target 中存在但 template 中没有的 key
// Remove keys that exist in target but not in template
for (const targetKey in target) {
if (!(targetKey in template)) {
console.log(`移除多余属性:${targetKey}`)
console.log(`Removed excess property: ${targetKey}`)
delete target[targetKey]
}
}
}
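// A small worked example (hypothetical data) of what syncRecursively does:
//   const template = { menu: { open: '打开', close: '关闭' }, title: '设置' }
//   const target: I18N = { menu: { open: 'Open', legacy: 'Old key' } }
//   syncRecursively(target, template)
//   // target becomes { menu: { open: 'Open', close: '[to be translated]:关闭' },
//   //                  title: '[to be translated]:设置' }; 'legacy' is removed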
/**
* 检查 JSON 对象中是否存在重复键,并收集所有重复键
* @param obj 要检查的对象
* @returns 返回重复键的数组(若无重复则返回空数组)
* Check JSON object for duplicate keys and collect all duplicates
* @param obj Object to check
* @returns Returns array of duplicate keys (empty array if no duplicates)
*/
function checkDuplicateKeys(obj: I18N): string[] {
const keys = new Set<string>()
@@ -62,7 +62,7 @@ function checkDuplicateKeys(obj: I18N): string[] {
const fullPath = path ? `${path}.${key}` : key
if (keys.has(fullPath)) {
// 发现重复键时,添加到数组中(避免重复添加)
// When duplicate key found, add to array (avoid duplicate additions)
if (!duplicateKeys.includes(fullPath)) {
duplicateKeys.push(fullPath)
}
@@ -70,7 +70,7 @@ function checkDuplicateKeys(obj: I18N): string[] {
keys.add(fullPath)
}
// 递归检查子对象
// Recursively check nested objects
if (typeof obj[key] === 'object' && obj[key] !== null) {
checkObject(obj[key], fullPath)
}
@@ -83,7 +83,7 @@ function checkDuplicateKeys(obj: I18N): string[] {
function syncTranslations() {
if (!fs.existsSync(baseFilePath)) {
console.error(`主模板文件 ${baseFileName} 不存在,请检查路径或文件名`)
console.error(`Base locale file ${baseFileName} does not exist, please check path or filename`)
return
}
@@ -92,24 +92,24 @@ function syncTranslations() {
try {
baseJson = JSON.parse(baseContent)
} catch (error) {
console.error(`解析 ${baseFileName} 出错。${error}`)
console.error(`Error parsing ${baseFileName}. ${error}`)
return
}
// 检查主模板是否存在重复键
// Check if base locale has duplicate keys
const duplicateKeys = checkDuplicateKeys(baseJson)
if (duplicateKeys.length > 0) {
throw new Error(`主模板文件 ${baseFileName} 存在以下重复键:\n${duplicateKeys.join('\n')}`)
throw new Error(`Base locale file ${baseFileName} has the following duplicate keys:\n${duplicateKeys.join('\n')}`)
}
// 为主模板排序
// Sort base locale
const sortedJson = sortedObjectByKeys(baseJson)
if (JSON.stringify(baseJson) !== JSON.stringify(sortedJson)) {
try {
fs.writeFileSync(baseFilePath, JSON.stringify(sortedJson, null, 2) + '\n', 'utf-8')
console.log(`主模板已排序`)
console.log(`Base locale has been sorted`)
} catch (error) {
console.error(`写入 ${baseFilePath} 出错。`, error)
console.error(`Error writing ${baseFilePath}.`, error)
return
}
}
@@ -124,7 +124,7 @@ function syncTranslations() {
.map((filename) => path.join(translateDir, filename))
const files = [...localeFiles, ...translateFiles]
// 同步键
// Sync keys
for (const filePath of files) {
const filename = path.basename(filePath)
let targetJson: I18N = {}
@@ -132,7 +132,7 @@ function syncTranslations() {
const fileContent = fs.readFileSync(filePath, 'utf-8')
targetJson = JSON.parse(fileContent)
} catch (error) {
console.error(`解析 ${filename} 出错,跳过此文件。`, error)
console.error(`Error parsing ${filename}, skipping this file.`, error)
continue
}
@@ -142,9 +142,9 @@ function syncTranslations() {
try {
fs.writeFileSync(filePath, JSON.stringify(sortedJson, null, 2) + '\n', 'utf-8')
console.log(`文件 ${filename} 已排序并同步更新为主模板的内容`)
console.log(`File ${filename} has been sorted and synced to match base locale content`)
} catch (error) {
console.error(`写入 ${filename} 出错。${error}`)
console.error(`Error writing ${filename}. ${error}`)
}
}
}

View File

@@ -3,42 +3,23 @@ import cors from 'cors'
import express from 'express'
import { v4 as uuidv4 } from 'uuid'
import { LONG_POLL_TIMEOUT_MS } from './config/timeouts'
import { authMiddleware } from './middleware/auth'
import { errorHandler } from './middleware/error'
import { setupOpenAPIDocumentation } from './middleware/openapi'
import { agentsRoutes } from './routes/agents'
import { chatRoutes } from './routes/chat'
import { mcpRoutes } from './routes/mcp'
import { messagesProviderRoutes, messagesRoutes } from './routes/messages'
import { modelsRoutes } from './routes/models'
const logger = loggerService.withContext('ApiServer')
const extendMessagesTimeout: express.RequestHandler = (req, res, next) => {
req.setTimeout(LONG_POLL_TIMEOUT_MS)
res.setTimeout(LONG_POLL_TIMEOUT_MS)
next()
}
const app = express()
app.use(
express.json({
limit: '50mb'
})
)
// Global middleware
app.use((req, res, next) => {
const start = Date.now()
res.on('finish', () => {
const duration = Date.now() - start
logger.info('API request completed', {
method: req.method,
path: req.path,
statusCode: res.statusCode,
durationMs: duration
})
logger.info(`${req.method} ${req.path} - ${res.statusCode} - ${duration}ms`)
})
next()
})
@@ -120,28 +101,27 @@ app.get('/', (_req, res) => {
name: 'Cherry Studio API',
version: '1.0.0',
endpoints: {
health: 'GET /health'
health: 'GET /health',
models: 'GET /v1/models',
chat: 'POST /v1/chat/completions',
mcp: 'GET /v1/mcps'
}
})
})
// Setup OpenAPI documentation before protected routes so docs remain public
setupOpenAPIDocumentation(app)
// Provider-specific messages route requires authentication
app.use('/:provider/v1/messages', authMiddleware, extendMessagesTimeout, messagesProviderRoutes)
// API v1 routes with auth
const apiRouter = express.Router()
apiRouter.use(authMiddleware)
apiRouter.use(express.json())
// Mount routes
apiRouter.use('/chat', chatRoutes)
apiRouter.use('/mcps', mcpRoutes)
apiRouter.use('/messages', extendMessagesTimeout, messagesRoutes)
apiRouter.use('/models', modelsRoutes)
apiRouter.use('/agents', agentsRoutes)
app.use('/v1', apiRouter)
// Setup OpenAPI documentation
setupOpenAPIDocumentation(app)
// Error handling (must be last)
app.use(errorHandler)
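
With the routes mounted as above, the split between public and protected surfaces is easy to exercise from a client. A minimal sketch (host, port, and key are placeholders; run inside an async context):

const base = 'http://localhost:23333' // placeholder; the real port comes from ConfigManager
const apiKey = 'your-api-key' // placeholder

// Public: root metadata and /api-docs are reachable without credentials
const info = await fetch(`${base}/`).then((r) => r.json())

// Protected: everything under /v1 passes through authMiddleware first
const models = await fetch(`${base}/v1/models`, {
  headers: { 'x-api-key': apiKey }
}).then((r) => r.json())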

View File

@@ -36,7 +36,7 @@ class ConfigManager {
}
return this._config
} catch (error: any) {
logger.warn('Failed to load config from Redux, using defaults', { error })
logger.warn('Failed to load config from Redux, using defaults:', error)
this._config = {
enabled: false,
port: defaultPort,

View File

@@ -1,3 +0,0 @@
export const LONG_POLL_TIMEOUT_MS = 120 * 60_000 // 120 minutes
export const MESSAGE_STREAM_TIMEOUT_MS = LONG_POLL_TIMEOUT_MS

View File

@@ -1,368 +0,0 @@
import type { NextFunction, Request, Response } from 'express'
import { beforeEach, describe, expect, it, vi } from 'vitest'
import { config } from '../../config'
import { authMiddleware } from '../auth'
// Mock the config module
vi.mock('../../config', () => ({
config: {
get: vi.fn()
}
}))
// Mock the logger
vi.mock('@logger', () => ({
loggerService: {
withContext: vi.fn(() => ({
debug: vi.fn()
}))
}
}))
const mockConfig = config as any
describe('authMiddleware', () => {
let req: Partial<Request>
let res: Partial<Response>
let next: NextFunction
let jsonMock: ReturnType<typeof vi.fn>
let statusMock: ReturnType<typeof vi.fn>
beforeEach(() => {
jsonMock = vi.fn()
statusMock = vi.fn(() => ({ json: jsonMock }))
req = {
header: vi.fn()
}
res = {
status: statusMock
}
next = vi.fn()
vi.clearAllMocks()
})
describe('Missing credentials', () => {
it('should return 401 when both auth headers are missing', async () => {
;(req.header as any).mockReturnValue('')
await authMiddleware(req as Request, res as Response, next)
expect(statusMock).toHaveBeenCalledWith(401)
expect(jsonMock).toHaveBeenCalledWith({ error: 'Unauthorized: missing credentials' })
expect(next).not.toHaveBeenCalled()
})
it('should return 401 when both auth headers are empty strings', async () => {
;(req.header as any).mockImplementation((header: string) => {
if (header === 'authorization') return ''
if (header === 'x-api-key') return ''
return ''
})
await authMiddleware(req as Request, res as Response, next)
expect(statusMock).toHaveBeenCalledWith(401)
expect(jsonMock).toHaveBeenCalledWith({ error: 'Unauthorized: missing credentials' })
expect(next).not.toHaveBeenCalled()
})
})
describe('Server configuration', () => {
it('should return 403 when API key is not configured', async () => {
;(req.header as any).mockImplementation((header: string) => {
if (header === 'x-api-key') return 'some-key'
return ''
})
mockConfig.get.mockResolvedValue({ apiKey: '' })
await authMiddleware(req as Request, res as Response, next)
expect(statusMock).toHaveBeenCalledWith(403)
expect(jsonMock).toHaveBeenCalledWith({ error: 'Forbidden' })
expect(next).not.toHaveBeenCalled()
})
it('should return 403 when API key is null', async () => {
;(req.header as any).mockImplementation((header: string) => {
if (header === 'x-api-key') return 'some-key'
return ''
})
mockConfig.get.mockResolvedValue({ apiKey: null })
await authMiddleware(req as Request, res as Response, next)
expect(statusMock).toHaveBeenCalledWith(403)
expect(jsonMock).toHaveBeenCalledWith({ error: 'Forbidden' })
expect(next).not.toHaveBeenCalled()
})
})
describe('API Key authentication (priority)', () => {
const validApiKey = 'valid-api-key-123'
beforeEach(() => {
mockConfig.get.mockResolvedValue({ apiKey: validApiKey })
})
it('should authenticate successfully with valid API key', async () => {
;(req.header as any).mockImplementation((header: string) => {
if (header === 'x-api-key') return validApiKey
return ''
})
await authMiddleware(req as Request, res as Response, next)
expect(next).toHaveBeenCalled()
expect(statusMock).not.toHaveBeenCalled()
})
it('should return 403 with invalid API key', async () => {
;(req.header as any).mockImplementation((header: string) => {
if (header === 'x-api-key') return 'invalid-key'
return ''
})
await authMiddleware(req as Request, res as Response, next)
expect(statusMock).toHaveBeenCalledWith(403)
expect(jsonMock).toHaveBeenCalledWith({ error: 'Forbidden' })
expect(next).not.toHaveBeenCalled()
})
it('should return 401 with empty API key', async () => {
;(req.header as any).mockImplementation((header: string) => {
if (header === 'x-api-key') return ' '
return ''
})
await authMiddleware(req as Request, res as Response, next)
expect(statusMock).toHaveBeenCalledWith(401)
expect(jsonMock).toHaveBeenCalledWith({ error: 'Unauthorized: empty x-api-key' })
expect(next).not.toHaveBeenCalled()
})
it('should handle API key with whitespace', async () => {
;(req.header as any).mockImplementation((header: string) => {
if (header === 'x-api-key') return ` ${validApiKey} `
return ''
})
await authMiddleware(req as Request, res as Response, next)
expect(next).toHaveBeenCalled()
expect(statusMock).not.toHaveBeenCalled()
})
it('should prioritize API key over Bearer token when both are present', async () => {
;(req.header as any).mockImplementation((header: string) => {
if (header === 'x-api-key') return validApiKey
if (header === 'authorization') return 'Bearer invalid-token'
return ''
})
await authMiddleware(req as Request, res as Response, next)
expect(next).toHaveBeenCalled()
expect(statusMock).not.toHaveBeenCalled()
})
it('should return 403 when API key is invalid even if Bearer token is valid', async () => {
;(req.header as any).mockImplementation((header: string) => {
if (header === 'x-api-key') return 'invalid-key'
if (header === 'authorization') return `Bearer ${validApiKey}`
return ''
})
await authMiddleware(req as Request, res as Response, next)
expect(statusMock).toHaveBeenCalledWith(403)
expect(jsonMock).toHaveBeenCalledWith({ error: 'Forbidden' })
expect(next).not.toHaveBeenCalled()
})
})
describe('Bearer token authentication (fallback)', () => {
const validApiKey = 'valid-api-key-123'
beforeEach(() => {
mockConfig.get.mockResolvedValue({ apiKey: validApiKey })
})
it('should authenticate successfully with valid Bearer token when no API key', async () => {
;(req.header as any).mockImplementation((header: string) => {
if (header === 'authorization') return `Bearer ${validApiKey}`
return ''
})
await authMiddleware(req as Request, res as Response, next)
expect(next).toHaveBeenCalled()
expect(statusMock).not.toHaveBeenCalled()
})
it('should return 403 with invalid Bearer token', async () => {
;(req.header as any).mockImplementation((header: string) => {
if (header === 'authorization') return 'Bearer invalid-token'
return ''
})
await authMiddleware(req as Request, res as Response, next)
expect(statusMock).toHaveBeenCalledWith(403)
expect(jsonMock).toHaveBeenCalledWith({ error: 'Forbidden' })
expect(next).not.toHaveBeenCalled()
})
it('should return 401 with malformed authorization header', async () => {
;(req.header as any).mockImplementation((header: string) => {
if (header === 'authorization') return 'Basic sometoken'
return ''
})
await authMiddleware(req as Request, res as Response, next)
expect(statusMock).toHaveBeenCalledWith(401)
expect(jsonMock).toHaveBeenCalledWith({ error: 'Unauthorized: invalid authorization format' })
expect(next).not.toHaveBeenCalled()
})
it('should return 401 with Bearer without space', async () => {
;(req.header as any).mockImplementation((header: string) => {
if (header === 'authorization') return 'Bearer'
return ''
})
await authMiddleware(req as Request, res as Response, next)
expect(statusMock).toHaveBeenCalledWith(401)
expect(jsonMock).toHaveBeenCalledWith({ error: 'Unauthorized: invalid authorization format' })
expect(next).not.toHaveBeenCalled()
})
it('should handle Bearer token with only trailing spaces (edge case)', async () => {
;(req.header as any).mockImplementation((header: string) => {
if (header === 'authorization') return 'Bearer ' // This will be trimmed to "Bearer" and fail format check
return ''
})
await authMiddleware(req as Request, res as Response, next)
expect(statusMock).toHaveBeenCalledWith(401)
expect(jsonMock).toHaveBeenCalledWith({ error: 'Unauthorized: invalid authorization format' })
expect(next).not.toHaveBeenCalled()
})
it('should handle Bearer token with case insensitive prefix', async () => {
;(req.header as any).mockImplementation((header: string) => {
if (header === 'authorization') return `bearer ${validApiKey}`
return ''
})
await authMiddleware(req as Request, res as Response, next)
expect(next).toHaveBeenCalled()
expect(statusMock).not.toHaveBeenCalled()
})
it('should handle Bearer token with whitespace', async () => {
;(req.header as any).mockImplementation((header: string) => {
if (header === 'authorization') return ` Bearer ${validApiKey} `
return ''
})
await authMiddleware(req as Request, res as Response, next)
expect(next).toHaveBeenCalled()
expect(statusMock).not.toHaveBeenCalled()
})
})
describe('Edge cases', () => {
const validApiKey = 'valid-api-key-123'
beforeEach(() => {
mockConfig.get.mockResolvedValue({ apiKey: validApiKey })
})
it('should handle config.get() rejection', async () => {
;(req.header as any).mockImplementation((header: string) => {
if (header === 'x-api-key') return validApiKey
return ''
})
mockConfig.get.mockRejectedValue(new Error('Config error'))
await expect(authMiddleware(req as Request, res as Response, next)).rejects.toThrow('Config error')
})
it('should use timing-safe comparison for different length tokens', async () => {
;(req.header as any).mockImplementation((header: string) => {
if (header === 'x-api-key') return 'short'
return ''
})
await authMiddleware(req as Request, res as Response, next)
expect(statusMock).toHaveBeenCalledWith(403)
expect(jsonMock).toHaveBeenCalledWith({ error: 'Forbidden' })
expect(next).not.toHaveBeenCalled()
})
it('should return 401 when neither credential format is valid', async () => {
;(req.header as any).mockImplementation((header: string) => {
if (header === 'authorization') return 'Invalid format'
return ''
})
await authMiddleware(req as Request, res as Response, next)
expect(statusMock).toHaveBeenCalledWith(401)
expect(jsonMock).toHaveBeenCalledWith({ error: 'Unauthorized: invalid authorization format' })
expect(next).not.toHaveBeenCalled()
})
})
describe('Timing attack protection', () => {
const validApiKey = 'valid-api-key-123'
beforeEach(() => {
mockConfig.get.mockResolvedValue({ apiKey: validApiKey })
})
it('should handle similar length but different API keys securely', async () => {
const similarKey = 'valid-api-key-124' // Same length, different last char
;(req.header as any).mockImplementation((header: string) => {
if (header === 'x-api-key') return similarKey
return ''
})
await authMiddleware(req as Request, res as Response, next)
expect(statusMock).toHaveBeenCalledWith(403)
expect(jsonMock).toHaveBeenCalledWith({ error: 'Forbidden' })
expect(next).not.toHaveBeenCalled()
})
it('should handle similar length but different Bearer tokens securely', async () => {
const similarKey = 'valid-api-key-124' // Same length, different last char
;(req.header as any).mockImplementation((header: string) => {
if (header === 'authorization') return `Bearer ${similarKey}`
return ''
})
await authMiddleware(req as Request, res as Response, next)
expect(statusMock).toHaveBeenCalledWith(403)
expect(jsonMock).toHaveBeenCalledWith({ error: 'Forbidden' })
expect(next).not.toHaveBeenCalled()
})
})
})

View File

@@ -3,17 +3,8 @@ import { NextFunction, Request, Response } from 'express'
import { config } from '../config'
const isValidToken = (token: string, apiKey: string): boolean => {
if (token.length !== apiKey.length) {
return false
}
const tokenBuf = Buffer.from(token)
const keyBuf = Buffer.from(apiKey)
return crypto.timingSafeEqual(tokenBuf, keyBuf)
}
export const authMiddleware = async (req: Request, res: Response, next: NextFunction) => {
const auth = req.header('authorization') || ''
const auth = req.header('Authorization') || ''
const xApiKey = req.header('x-api-key') || ''
// Fast rejection if neither credential header provided
@@ -21,46 +12,51 @@ export const authMiddleware = async (req: Request, res: Response, next: NextFunc
return res.status(401).json({ error: 'Unauthorized: missing credentials' })
}
const { apiKey } = await config.get()
let token: string | undefined
if (!apiKey) {
return res.status(403).json({ error: 'Forbidden' })
}
// Check API key first (priority)
if (xApiKey) {
const trimmedApiKey = xApiKey.trim()
if (!trimmedApiKey) {
return res.status(401).json({ error: 'Unauthorized: empty x-api-key' })
}
if (isValidToken(trimmedApiKey, apiKey)) {
return next()
} else {
return res.status(403).json({ error: 'Forbidden' })
}
}
// Fallback to Bearer token
// Prefer Bearer if well-formed
if (auth) {
const trimmed = auth.trim()
const bearerPrefix = /^Bearer\s+/i
if (!bearerPrefix.test(trimmed)) {
return res.status(401).json({ error: 'Unauthorized: invalid authorization format' })
}
const token = trimmed.replace(bearerPrefix, '').trim()
if (!token) {
return res.status(401).json({ error: 'Unauthorized: empty bearer token' })
}
if (isValidToken(token, apiKey)) {
return next()
} else {
return res.status(403).json({ error: 'Forbidden' })
if (bearerPrefix.test(trimmed)) {
const candidate = trimmed.replace(bearerPrefix, '').trim()
if (!candidate) {
return res.status(401).json({ error: 'Unauthorized: empty bearer token' })
}
token = candidate
}
}
return res.status(401).json({ error: 'Unauthorized: invalid credentials format' })
// Fallback to x-api-key if token still not resolved
if (!token && xApiKey) {
if (!xApiKey.trim()) {
return res.status(401).json({ error: 'Unauthorized: empty x-api-key' })
}
token = xApiKey.trim()
}
if (!token) {
// At this point we had at least one header, but none yielded a usable token
return res.status(401).json({ error: 'Unauthorized: invalid credentials format' })
}
const { apiKey } = await config.get()
if (!apiKey) {
// If server not configured, treat as forbidden (or could be 500). Choose 403 to avoid leaking config state.
return res.status(403).json({ error: 'Forbidden' })
}
// Timing-safe compare when lengths match, else immediate forbidden
if (token.length !== apiKey.length) {
return res.status(403).json({ error: 'Forbidden' })
}
const tokenBuf = Buffer.from(token)
const keyBuf = Buffer.from(apiKey)
if (!crypto.timingSafeEqual(tokenBuf, keyBuf)) {
return res.status(403).json({ error: 'Forbidden' })
}
return next()
}
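
One detail worth noting in the comparison above: crypto.timingSafeEqual throws a RangeError when the two buffers differ in length, so the explicit length check is required for correctness, not just an early exit. The same pattern isolated as a helper (a sketch, not code from this diff):

import crypto from 'node:crypto'

// Constant-time comparison that returns false on a length mismatch instead of throwing.
function safeCompare(candidate: string, secret: string): boolean {
  const a = Buffer.from(candidate)
  const b = Buffer.from(secret)
  if (a.length !== b.length) return false
  return crypto.timingSafeEqual(a, b)
}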

View File

@@ -6,7 +6,7 @@ const logger = loggerService.withContext('ApiServerErrorHandler')
// oxlint-disable-next-line @typescript-eslint/no-unused-vars
export const errorHandler = (err: Error, _req: Request, res: Response, _next: NextFunction) => {
logger.error('API server error', { error: err })
logger.error('API Server Error:', err)
// Don't expose internal errors in production
const isDev = process.env.NODE_ENV === 'development'

View File

@@ -197,11 +197,10 @@ export function setupOpenAPIDocumentation(app: Express) {
})
)
logger.info('OpenAPI documentation ready', {
docsPath: '/api-docs',
specPath: '/api-docs.json'
})
logger.info('OpenAPI documentation setup complete')
logger.info('Documentation available at /api-docs')
logger.info('OpenAPI spec available at /api-docs.json')
} catch (error) {
logger.error('Failed to setup OpenAPI documentation', { error })
logger.error('Failed to setup OpenAPI documentation:', error as Error)
}
}

View File

@@ -1,567 +0,0 @@
import { loggerService } from '@logger'
import { AgentModelValidationError, agentService, sessionService } from '@main/services/agents'
import { ListAgentsResponse, type ReplaceAgentRequest, type UpdateAgentRequest } from '@types'
import { Request, Response } from 'express'
import type { ValidationRequest } from '../validators/zodValidator'
const logger = loggerService.withContext('ApiServerAgentsHandlers')
const modelValidationErrorBody = (error: AgentModelValidationError) => ({
error: {
message: `Invalid ${error.context.field}: ${error.detail.message}`,
type: 'invalid_request_error',
code: error.detail.code
}
})
/**
* @swagger
* /v1/agents:
* post:
* summary: Create a new agent
* description: Creates a new autonomous agent with the specified configuration and automatically
* provisions an initial session that mirrors the agent's settings.
* tags: [Agents]
* requestBody:
* required: true
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/CreateAgentRequest'
* responses:
* 201:
* description: Agent created successfully
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/AgentEntity'
* 400:
* description: Validation error
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/Error'
* 500:
* description: Internal server error
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/Error'
*/
export const createAgent = async (req: Request, res: Response): Promise<Response> => {
try {
logger.debug('Creating agent')
logger.debug('Agent payload', { body: req.body })
const agent = await agentService.createAgent(req.body)
try {
logger.info('Agent created', { agentId: agent.id })
logger.debug('Creating default session for agent', { agentId: agent.id })
await sessionService.createSession(agent.id, {})
logger.info('Default session created for agent', { agentId: agent.id })
return res.status(201).json(agent)
} catch (sessionError: any) {
logger.error('Failed to create default session for new agent, rolling back agent creation', {
agentId: agent.id,
error: sessionError
})
try {
await agentService.deleteAgent(agent.id)
} catch (rollbackError: any) {
logger.error('Failed to roll back agent after session creation failure', {
agentId: agent.id,
error: rollbackError
})
}
return res.status(500).json({
error: {
message: `Failed to create default session for agent: ${sessionError.message}`,
type: 'internal_error',
code: 'agent_session_creation_failed'
}
})
}
} catch (error: any) {
if (error instanceof AgentModelValidationError) {
logger.warn('Agent model validation error during create', {
agentType: error.context.agentType,
field: error.context.field,
model: error.context.model,
detail: error.detail
})
return res.status(400).json(modelValidationErrorBody(error))
}
logger.error('Error creating agent', { error })
return res.status(500).json({
error: {
message: `Failed to create agent: ${error.message}`,
type: 'internal_error',
code: 'agent_creation_failed'
}
})
}
}
/**
* @swagger
* /v1/agents:
* get:
* summary: List all agents
* description: Retrieves a paginated list of all agents
* tags: [Agents]
* parameters:
* - in: query
* name: limit
* schema:
* type: integer
* minimum: 1
* maximum: 100
* default: 20
* description: Number of agents to return
* - in: query
* name: offset
* schema:
* type: integer
* minimum: 0
* default: 0
* description: Number of agents to skip
* responses:
* 200:
* description: List of agents
* content:
* application/json:
* schema:
* type: object
* properties:
* data:
* type: array
* items:
* $ref: '#/components/schemas/AgentEntity'
* total:
* type: integer
* description: Total number of agents
* limit:
* type: integer
* description: Number of agents returned
* offset:
* type: integer
* description: Number of agents skipped
* 400:
* description: Validation error
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/Error'
* 500:
* description: Internal server error
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/Error'
*/
export const listAgents = async (req: Request, res: Response): Promise<Response> => {
try {
const limit = req.query.limit ? parseInt(req.query.limit as string) : 20
const offset = req.query.offset ? parseInt(req.query.offset as string) : 0
logger.debug('Listing agents', { limit, offset })
const result = await agentService.listAgents({ limit, offset })
logger.info('Agents listed', {
returned: result.agents.length,
total: result.total,
limit,
offset
})
return res.json({
data: result.agents,
total: result.total,
limit,
offset
} satisfies ListAgentsResponse)
} catch (error: any) {
logger.error('Error listing agents', { error })
return res.status(500).json({
error: {
message: 'Failed to list agents',
type: 'internal_error',
code: 'agent_list_failed'
}
})
}
}
/**
* @swagger
* /v1/agents/{agentId}:
* get:
* summary: Get agent by ID
* description: Retrieves a specific agent by its ID
* tags: [Agents]
* parameters:
* - in: path
* name: agentId
* required: true
* schema:
* type: string
* description: Agent ID
* responses:
* 200:
* description: Agent details
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/AgentEntity'
* 404:
* description: Agent not found
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/Error'
* 500:
* description: Internal server error
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/Error'
*/
export const getAgent = async (req: Request, res: Response): Promise<Response> => {
try {
const { agentId } = req.params
logger.debug('Getting agent', { agentId })
const agent = await agentService.getAgent(agentId)
if (!agent) {
logger.warn('Agent not found', { agentId })
return res.status(404).json({
error: {
message: 'Agent not found',
type: 'not_found',
code: 'agent_not_found'
}
})
}
logger.info('Agent retrieved', { agentId })
return res.json(agent)
} catch (error: any) {
logger.error('Error getting agent', { error, agentId: req.params.agentId })
return res.status(500).json({
error: {
message: 'Failed to get agent',
type: 'internal_error',
code: 'agent_get_failed'
}
})
}
}
/**
* @swagger
* /v1/agents/{agentId}:
* put:
* summary: Update agent
* description: Updates an existing agent with the provided data
* tags: [Agents]
* parameters:
* - in: path
* name: agentId
* required: true
* schema:
* type: string
* description: Agent ID
* requestBody:
* required: true
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/CreateAgentRequest'
* responses:
* 200:
* description: Agent updated successfully
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/AgentEntity'
* 400:
* description: Validation error
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/Error'
* 404:
* description: Agent not found
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/Error'
* 500:
* description: Internal server error
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/Error'
*/
export const updateAgent = async (req: Request, res: Response): Promise<Response> => {
const { agentId } = req.params
try {
logger.debug('Updating agent', { agentId })
logger.debug('Replace payload', { body: req.body })
const { validatedBody } = req as ValidationRequest
const replacePayload = (validatedBody ?? {}) as ReplaceAgentRequest
const agent = await agentService.updateAgent(agentId, replacePayload, { replace: true })
if (!agent) {
logger.warn('Agent not found for update', { agentId })
return res.status(404).json({
error: {
message: 'Agent not found',
type: 'not_found',
code: 'agent_not_found'
}
})
}
logger.info('Agent updated', { agentId })
return res.json(agent)
} catch (error: any) {
if (error instanceof AgentModelValidationError) {
logger.warn('Agent model validation error during update', {
agentId,
agentType: error.context.agentType,
field: error.context.field,
model: error.context.model,
detail: error.detail
})
return res.status(400).json(modelValidationErrorBody(error))
}
logger.error('Error updating agent', { error, agentId })
return res.status(500).json({
error: {
message: 'Failed to update agent: ' + error.message,
type: 'internal_error',
code: 'agent_update_failed'
}
})
}
}
/**
* @swagger
* /v1/agents/{agentId}:
* patch:
* summary: Partially update agent
* description: Partially updates an existing agent with only the provided fields
* tags: [Agents]
* parameters:
* - in: path
* name: agentId
* required: true
* schema:
* type: string
* description: Agent ID
* requestBody:
* required: true
* content:
* application/json:
* schema:
* type: object
* properties:
* name:
* type: string
* description: Agent name
* description:
* type: string
* description: Agent description
* avatar:
* type: string
* description: Agent avatar URL
* instructions:
* type: string
* description: System prompt/instructions
* model:
* type: string
* description: Main model ID
* plan_model:
* type: string
* description: Optional planning model ID
* small_model:
* type: string
* description: Optional small/fast model ID
* tools:
* type: array
* items:
* type: string
* description: Tools
* mcps:
* type: array
* items:
* type: string
* description: MCP tool IDs
* knowledges:
* type: array
* items:
* type: string
* description: Knowledge base IDs
* configuration:
* type: object
* description: Extensible settings
* accessible_paths:
* type: array
* items:
* type: string
* description: Accessible directory paths
* permission_mode:
* type: string
* enum: [readOnly, acceptEdits, bypassPermissions]
* description: Permission mode
* max_steps:
* type: integer
* description: Maximum steps the agent can take
* description: Only include the fields you want to update
* responses:
* 200:
* description: Agent updated successfully
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/AgentEntity'
* 400:
* description: Validation error
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/Error'
* 404:
* description: Agent not found
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/Error'
* 500:
* description: Internal server error
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/Error'
*/
export const patchAgent = async (req: Request, res: Response): Promise<Response> => {
const { agentId } = req.params
try {
logger.debug('Partially updating agent', { agentId })
logger.debug('Patch payload', { body: req.body })
const { validatedBody } = req as ValidationRequest
const updatePayload = (validatedBody ?? {}) as UpdateAgentRequest
const agent = await agentService.updateAgent(agentId, updatePayload)
if (!agent) {
logger.warn('Agent not found for partial update', { agentId })
return res.status(404).json({
error: {
message: 'Agent not found',
type: 'not_found',
code: 'agent_not_found'
}
})
}
logger.info('Agent patched', { agentId })
return res.json(agent)
} catch (error: any) {
if (error instanceof AgentModelValidationError) {
logger.warn('Agent model validation error during partial update', {
agentId,
agentType: error.context.agentType,
field: error.context.field,
model: error.context.model,
detail: error.detail
})
return res.status(400).json(modelValidationErrorBody(error))
}
logger.error('Error partially updating agent', { error, agentId })
return res.status(500).json({
error: {
message: `Failed to partially update agent: ${error.message}`,
type: 'internal_error',
code: 'agent_patch_failed'
}
})
}
}
/**
* @swagger
* /v1/agents/{agentId}:
* delete:
* summary: Delete agent
* description: Deletes an agent and all associated sessions and logs
* tags: [Agents]
* parameters:
* - in: path
* name: agentId
* required: true
* schema:
* type: string
* description: Agent ID
* responses:
* 204:
* description: Agent deleted successfully
* 404:
* description: Agent not found
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/Error'
* 500:
* description: Internal server error
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/Error'
*/
export const deleteAgent = async (req: Request, res: Response): Promise<Response> => {
try {
const { agentId } = req.params
logger.debug('Deleting agent', { agentId })
const deleted = await agentService.deleteAgent(agentId)
if (!deleted) {
logger.warn('Agent not found for deletion', { agentId })
return res.status(404).json({
error: {
message: 'Agent not found',
type: 'not_found',
code: 'agent_not_found'
}
})
}
logger.info('Agent deleted', { agentId })
return res.status(204).send()
} catch (error: any) {
logger.error('Error deleting agent', { error, agentId: req.params.agentId })
return res.status(500).json({
error: {
message: 'Failed to delete agent',
type: 'internal_error',
code: 'agent_delete_failed'
}
})
}
}
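
End to end, the create handler above pairs with the CreateAgentRequest schema defined later in this diff (type, name, and model are required). A minimal client call might look like this sketch; the base URL, key, and model id are placeholders:

const res = await fetch('http://localhost:23333/v1/agents', {
  method: 'POST',
  headers: { 'content-type': 'application/json', 'x-api-key': 'your-api-key' },
  body: JSON.stringify({ type: 'claude-code', name: 'demo-agent', model: 'some-model-id' })
})
if (res.status === 201) {
  const agent = await res.json() // AgentEntity; a default session was provisioned alongside it
  console.log(agent.id)
}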

View File

@@ -1,3 +0,0 @@
export * as agentHandlers from './agents'
export * as messageHandlers from './messages'
export * as sessionHandlers from './sessions'

View File

@@ -1,317 +0,0 @@
import { loggerService } from '@logger'
import { MESSAGE_STREAM_TIMEOUT_MS } from '@main/apiServer/config/timeouts'
import { createStreamAbortController, STREAM_TIMEOUT_REASON } from '@main/apiServer/utils/createStreamAbortController'
import { agentService, sessionMessageService, sessionService } from '@main/services/agents'
import { Request, Response } from 'express'
const logger = loggerService.withContext('ApiServerMessagesHandlers')
// Helper function to verify agent and session exist and belong together
const verifyAgentAndSession = async (agentId: string, sessionId: string) => {
const agentExists = await agentService.agentExists(agentId)
if (!agentExists) {
throw { status: 404, code: 'agent_not_found', message: 'Agent not found' }
}
const session = await sessionService.getSession(agentId, sessionId)
if (!session) {
throw { status: 404, code: 'session_not_found', message: 'Session not found' }
}
if (session.agent_id !== agentId) {
throw { status: 404, code: 'session_not_found', message: 'Session not found for this agent' }
}
return session
}
export const createMessage = async (req: Request, res: Response): Promise<void> => {
let clearAbortTimeout: (() => void) | undefined
try {
const { agentId, sessionId } = req.params
const session = await verifyAgentAndSession(agentId, sessionId)
const messageData = req.body
logger.info('Creating streaming message', { agentId, sessionId })
logger.debug('Streaming message payload', { messageData })
// Set SSE headers
res.setHeader('Content-Type', 'text/event-stream')
res.setHeader('Cache-Control', 'no-cache')
res.setHeader('Connection', 'keep-alive')
res.setHeader('Access-Control-Allow-Origin', '*')
res.setHeader('Access-Control-Allow-Headers', 'Cache-Control')
const {
abortController,
registerAbortHandler,
clearAbortTimeout: helperClearAbortTimeout
} = createStreamAbortController({
timeoutMs: MESSAGE_STREAM_TIMEOUT_MS
})
clearAbortTimeout = helperClearAbortTimeout
const { stream, completion } = await sessionMessageService.createSessionMessage(
session,
messageData,
abortController
)
const reader = stream.getReader()
// Track stream lifecycle so we keep the SSE connection open until persistence finishes
let responseEnded = false
let streamFinished = false
const cleanupAbortTimeout = () => {
clearAbortTimeout?.()
}
const finalizeResponse = () => {
if (responseEnded) {
return
}
if (!streamFinished) {
return
}
responseEnded = true
cleanupAbortTimeout()
try {
// res.write('data: {"type":"finish"}\n\n')
res.write('data: [DONE]\n\n')
} catch (writeError) {
logger.error('Error writing final sentinel to SSE stream', { error: writeError as Error })
}
res.end()
}
/**
* Client Disconnect Detection for Server-Sent Events (SSE)
*
* We monitor multiple HTTP events to reliably detect when a client disconnects
* from the streaming response. This is crucial for:
* - Aborting long-running Claude Code processes
* - Cleaning up resources and preventing memory leaks
* - Avoiding orphaned processes
*
* Event Priority & Behavior:
* 1. res.on('close') - Most common for SSE client disconnects (browser tab close, curl Ctrl+C)
* 2. req.on('aborted') - Explicit request abortion
* 3. req.on('close') - Request object closure (less common with SSE)
*
* When any disconnect event fires, we:
* - Abort the Claude Code SDK process via abortController
* - Clean up event listeners to prevent memory leaks
* - Mark the response as ended to prevent further writes
*/
registerAbortHandler((abortReason) => {
cleanupAbortTimeout()
if (responseEnded) return
responseEnded = true
if (abortReason === STREAM_TIMEOUT_REASON) {
logger.error('Streaming message timeout', { agentId, sessionId })
try {
res.write(
`data: ${JSON.stringify({
type: 'error',
error: {
message: 'Stream timeout',
type: 'timeout_error',
code: 'stream_timeout'
}
})}\n\n`
)
} catch (writeError) {
logger.error('Error writing timeout to SSE stream', { error: writeError })
}
} else if (abortReason === 'Client disconnected') {
logger.info('Streaming client disconnected', { agentId, sessionId })
} else {
logger.warn('Streaming aborted', { agentId, sessionId, reason: abortReason })
}
reader.cancel(abortReason ?? 'stream aborted').catch(() => {})
if (!res.headersSent) {
res.setHeader('Content-Type', 'text/event-stream')
res.setHeader('Cache-Control', 'no-cache')
res.setHeader('Connection', 'keep-alive')
}
if (!res.writableEnded) {
res.end()
}
})
const handleDisconnect = () => {
if (abortController.signal.aborted) return
abortController.abort('Client disconnected')
}
req.on('close', handleDisconnect)
req.on('aborted', handleDisconnect)
res.on('close', handleDisconnect)
const pumpStream = async () => {
try {
while (!responseEnded) {
const { done, value } = await reader.read()
if (done) {
break
}
res.write(`data: ${JSON.stringify(value)}\n\n`)
}
streamFinished = true
finalizeResponse()
} catch (error) {
if (responseEnded) return
logger.error('Error reading agent stream', { error })
try {
res.write(
`data: ${JSON.stringify({
type: 'error',
error: {
message: (error as Error).message || 'Stream processing error',
type: 'stream_error',
code: 'stream_processing_failed'
}
})}\n\n`
)
} catch (writeError) {
logger.error('Error writing stream error to SSE', { error: writeError })
}
responseEnded = true
cleanupAbortTimeout()
res.end()
}
}
pumpStream().catch((error) => {
logger.error('Pump stream failure', { error })
})
completion
.then(() => {
streamFinished = true
finalizeResponse()
})
.catch((error) => {
if (responseEnded) return
logger.error('Streaming message error', { agentId, sessionId, error })
try {
res.write(
`data: ${JSON.stringify({
type: 'error',
error: {
message: (error as { message?: string })?.message || 'Stream processing error',
type: 'stream_error',
code: 'stream_processing_failed'
}
})}\n\n`
)
} catch (writeError) {
logger.error('Error writing completion error to SSE stream', { error: writeError })
}
responseEnded = true
cleanupAbortTimeout()
res.end()
})
// Clear timeout when response ends
res.on('close', cleanupAbortTimeout)
res.on('finish', cleanupAbortTimeout)
} catch (error: any) {
clearAbortTimeout?.()
logger.error('Error in streaming message handler', {
error,
agentId: req.params.agentId,
sessionId: req.params.sessionId
})
// Send error as SSE if possible
if (!res.headersSent) {
res.setHeader('Content-Type', 'text/event-stream')
res.setHeader('Cache-Control', 'no-cache')
res.setHeader('Connection', 'keep-alive')
}
try {
const errorResponse = {
type: 'error',
error: {
message: error.status ? error.message : 'Failed to create streaming message',
type: error.status ? 'not_found' : 'internal_error',
code: error.status ? error.code : 'stream_creation_failed'
}
}
res.write(`data: ${JSON.stringify(errorResponse)}\n\n`)
} catch (writeError) {
logger.error('Error writing initial error to SSE stream', { error: writeError })
}
res.end()
}
}
export const deleteMessage = async (req: Request, res: Response): Promise<Response> => {
try {
const { agentId, sessionId, messageId: messageIdParam } = req.params
const messageId = Number(messageIdParam)
await verifyAgentAndSession(agentId, sessionId)
const deleted = await sessionMessageService.deleteSessionMessage(sessionId, messageId)
if (!deleted) {
logger.warn('Session message not found', { agentId, sessionId, messageId })
return res.status(404).json({
error: {
message: 'Message not found for this session',
type: 'not_found',
code: 'session_message_not_found'
}
})
}
logger.info('Session message deleted', { agentId, sessionId, messageId })
return res.status(204).send()
} catch (error: any) {
if (error?.status === 404) {
logger.warn('Delete message failed - missing resource', {
agentId: req.params.agentId,
sessionId: req.params.sessionId,
messageId: req.params.messageId,
error
})
return res.status(404).json({
error: {
message: error.message,
type: 'not_found',
code: error.code ?? 'session_message_not_found'
}
})
}
logger.error('Error deleting session message', {
error,
agentId: req.params.agentId,
sessionId: req.params.sessionId,
messageId: Number(req.params.messageId)
})
return res.status(500).json({
error: {
message: 'Failed to delete session message',
type: 'internal_error',
code: 'session_message_delete_failed'
}
})
}
}
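
On the consuming side, the SSE framing produced above (one JSON payload per data: line, frames separated by a blank line, terminated by a data: [DONE] sentinel) can be read with a plain fetch. A sketch, with the route path assumed from the agentId/sessionId params rather than taken from this diff:

async function streamMessage(base: string, apiKey: string, agentId: string, sessionId: string) {
  const res = await fetch(`${base}/v1/agents/${agentId}/sessions/${sessionId}/messages`, {
    method: 'POST',
    headers: { 'content-type': 'application/json', 'x-api-key': apiKey },
    body: JSON.stringify({ content: 'hello' })
  })
  const reader = res.body!.getReader()
  const decoder = new TextDecoder()
  let buffer = ''
  let idx: number
  for (;;) {
    const { done, value } = await reader.read()
    if (done) break
    buffer += decoder.decode(value, { stream: true })
    // SSE frames are delimited by a blank line
    while ((idx = buffer.indexOf('\n\n')) !== -1) {
      const frame = buffer.slice(0, idx)
      buffer = buffer.slice(idx + 2)
      if (!frame.startsWith('data: ')) continue
      const payload = frame.slice('data: '.length)
      if (payload === '[DONE]') return // final sentinel written by the handler
      console.log(JSON.parse(payload)) // one streamed event
    }
  }
}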

View File

@@ -1,366 +0,0 @@
import { loggerService } from '@logger'
import { AgentModelValidationError, sessionMessageService, sessionService } from '@main/services/agents'
import { ListAgentSessionsResponse, type ReplaceSessionRequest, UpdateSessionResponse } from '@types'
import { Request, Response } from 'express'
import type { ValidationRequest } from '../validators/zodValidator'
const logger = loggerService.withContext('ApiServerSessionsHandlers')
const modelValidationErrorBody = (error: AgentModelValidationError) => ({
error: {
message: `Invalid ${error.context.field}: ${error.detail.message}`,
type: 'invalid_request_error',
code: error.detail.code
}
})
export const createSession = async (req: Request, res: Response): Promise<Response> => {
const { agentId } = req.params
try {
const sessionData = req.body
logger.debug('Creating new session', { agentId })
logger.debug('Session payload', { sessionData })
const session = await sessionService.createSession(agentId, sessionData)
logger.info('Session created', { agentId, sessionId: session?.id })
return res.status(201).json(session)
} catch (error: any) {
if (error instanceof AgentModelValidationError) {
logger.warn('Session model validation error during create', {
agentId,
agentType: error.context.agentType,
field: error.context.field,
model: error.context.model,
detail: error.detail
})
return res.status(400).json(modelValidationErrorBody(error))
}
logger.error('Error creating session', { error, agentId })
return res.status(500).json({
error: {
message: `Failed to create session: ${error.message}`,
type: 'internal_error',
code: 'session_creation_failed'
}
})
}
}
export const listSessions = async (req: Request, res: Response): Promise<Response> => {
const { agentId } = req.params
try {
const limit = req.query.limit ? parseInt(req.query.limit as string) : 20
const offset = req.query.offset ? parseInt(req.query.offset as string) : 0
const status = req.query.status as any
logger.debug('Listing agent sessions', { agentId, limit, offset, status })
const result = await sessionService.listSessions(agentId, { limit, offset })
logger.info('Agent sessions listed', {
agentId,
returned: result.sessions.length,
total: result.total,
limit,
offset
})
return res.json({
data: result.sessions,
total: result.total,
limit,
offset
})
} catch (error: any) {
logger.error('Error listing sessions', { error, agentId })
return res.status(500).json({
error: {
message: 'Failed to list sessions',
type: 'internal_error',
code: 'session_list_failed'
}
})
}
}
export const getSession = async (req: Request, res: Response): Promise<Response> => {
try {
const { agentId, sessionId } = req.params
logger.debug('Getting session', { agentId, sessionId })
const session = await sessionService.getSession(agentId, sessionId)
if (!session) {
logger.warn('Session not found', { agentId, sessionId })
return res.status(404).json({
error: {
message: 'Session not found',
type: 'not_found',
code: 'session_not_found'
}
})
}
// // Verify session belongs to the agent
// logger.warn(`Session ${sessionId} does not belong to agent ${agentId}`)
// return res.status(404).json({
// error: {
// message: 'Session not found for this agent',
// type: 'not_found',
// code: 'session_not_found'
// }
// })
// }
// Fetch session messages
logger.debug('Fetching session messages', { sessionId })
const { messages } = await sessionMessageService.listSessionMessages(sessionId)
// Add messages to session
const sessionWithMessages = {
...session,
messages: messages
}
logger.info('Session retrieved', { agentId, sessionId, messageCount: messages.length })
return res.json(sessionWithMessages)
} catch (error: any) {
logger.error('Error getting session', { error, agentId: req.params.agentId, sessionId: req.params.sessionId })
return res.status(500).json({
error: {
message: 'Failed to get session',
type: 'internal_error',
code: 'session_get_failed'
}
})
}
}
export const updateSession = async (req: Request, res: Response): Promise<Response> => {
const { agentId, sessionId } = req.params
try {
logger.debug('Updating session', { agentId, sessionId })
logger.debug('Replace payload', { body: req.body })
// First check if session exists and belongs to agent
const existingSession = await sessionService.getSession(agentId, sessionId)
if (!existingSession || existingSession.agent_id !== agentId) {
logger.warn('Session not found for update', { agentId, sessionId })
return res.status(404).json({
error: {
message: 'Session not found for this agent',
type: 'not_found',
code: 'session_not_found'
}
})
}
const { validatedBody } = req as ValidationRequest
const replacePayload = (validatedBody ?? {}) as ReplaceSessionRequest
const session = await sessionService.updateSession(agentId, sessionId, replacePayload)
if (!session) {
logger.warn('Session missing during update', { agentId, sessionId })
return res.status(404).json({
error: {
message: 'Session not found',
type: 'not_found',
code: 'session_not_found'
}
})
}
logger.info('Session updated', { agentId, sessionId })
return res.json(session satisfies UpdateSessionResponse)
} catch (error: any) {
if (error instanceof AgentModelValidationError) {
logger.warn('Session model validation error during update', {
agentId,
sessionId,
agentType: error.context.agentType,
field: error.context.field,
model: error.context.model,
detail: error.detail
})
return res.status(400).json(modelValidationErrorBody(error))
}
logger.error('Error updating session', { error, agentId, sessionId })
return res.status(500).json({
error: {
message: `Failed to update session: ${error.message}`,
type: 'internal_error',
code: 'session_update_failed'
}
})
}
}
export const patchSession = async (req: Request, res: Response): Promise<Response> => {
const { agentId, sessionId } = req.params
try {
logger.debug('Patching session', { agentId, sessionId })
logger.debug('Patch payload', { body: req.body })
// First check if session exists and belongs to agent
const existingSession = await sessionService.getSession(agentId, sessionId)
if (!existingSession || existingSession.agent_id !== agentId) {
logger.warn('Session not found for patch', { agentId, sessionId })
return res.status(404).json({
error: {
message: 'Session not found for this agent',
type: 'not_found',
code: 'session_not_found'
}
})
}
const updateSession = { ...existingSession, ...req.body }
const session = await sessionService.updateSession(agentId, sessionId, updateSession)
if (!session) {
logger.warn('Session missing while patching', { agentId, sessionId })
return res.status(404).json({
error: {
message: 'Session not found',
type: 'not_found',
code: 'session_not_found'
}
})
}
logger.info('Session patched', { agentId, sessionId })
return res.json(session)
} catch (error: any) {
if (error instanceof AgentModelValidationError) {
logger.warn('Session model validation error during patch', {
agentId,
sessionId,
agentType: error.context.agentType,
field: error.context.field,
model: error.context.model,
detail: error.detail
})
return res.status(400).json(modelValidationErrorBody(error))
}
logger.error('Error patching session', { error, agentId, sessionId })
return res.status(500).json({
error: {
message: `Failed to patch session: ${error.message}`,
type: 'internal_error',
code: 'session_patch_failed'
}
})
}
}
export const deleteSession = async (req: Request, res: Response): Promise<Response> => {
try {
const { agentId, sessionId } = req.params
logger.debug('Deleting session', { agentId, sessionId })
// First check if session exists and belongs to agent
const existingSession = await sessionService.getSession(agentId, sessionId)
if (!existingSession || existingSession.agent_id !== agentId) {
logger.warn('Session not found for deletion', { agentId, sessionId })
return res.status(404).json({
error: {
message: 'Session not found for this agent',
type: 'not_found',
code: 'session_not_found'
}
})
}
const deleted = await sessionService.deleteSession(agentId, sessionId)
if (!deleted) {
logger.warn('Session missing during delete', { agentId, sessionId })
return res.status(404).json({
error: {
message: 'Session not found',
type: 'not_found',
code: 'session_not_found'
}
})
}
logger.info('Session deleted', { agentId, sessionId })
const { total } = await sessionService.listSessions(agentId, { limit: 1 })
if (total === 0) {
logger.info('No remaining sessions, creating default', { agentId })
try {
const fallbackSession = await sessionService.createSession(agentId, {})
logger.info('Default session created after delete', {
agentId,
sessionId: fallbackSession?.id
})
} catch (recoveryError: any) {
logger.error('Failed to recreate session after deleting last session', {
agentId,
error: recoveryError
})
return res.status(500).json({
error: {
message: `Failed to recreate session after deletion: ${recoveryError.message}`,
type: 'internal_error',
code: 'session_recovery_failed'
}
})
}
}
return res.status(204).send()
} catch (error: any) {
logger.error('Error deleting session', { error, agentId: req.params.agentId, sessionId: req.params.sessionId })
return res.status(500).json({
error: {
message: 'Failed to delete session',
type: 'internal_error',
code: 'session_delete_failed'
}
})
}
}
// Convenience endpoints for sessions without agent context
export const listAllSessions = async (req: Request, res: Response): Promise<Response> => {
try {
const limit = req.query.limit ? parseInt(req.query.limit as string) : 20
const offset = req.query.offset ? parseInt(req.query.offset as string) : 0
const status = req.query.status as any
logger.debug('Listing all sessions', { limit, offset, status })
const result = await sessionService.listSessions(undefined, { limit, offset })
logger.info('Sessions listed', {
returned: result.sessions.length,
total: result.total,
limit,
offset
})
return res.json({
data: result.sessions,
total: result.total,
limit,
offset
} satisfies ListAgentSessionsResponse)
} catch (error: any) {
logger.error('Error listing all sessions', { error })
return res.status(500).json({
error: {
message: 'Failed to list sessions',
type: 'internal_error',
code: 'session_list_failed'
}
})
}
}
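
All of the list endpoints in this file share the same limit/offset contract (default 20/0, mirroring the PaginationQuery schema later in the diff), so paging through every session is a simple loop. A sketch; the /v1/sessions mount point for the agent-less convenience endpoint is an assumption:

async function listEverySession(base: string, apiKey: string) {
  const pageSize = 20
  for (let offset = 0; ; offset += pageSize) {
    const res = await fetch(`${base}/v1/sessions?limit=${pageSize}&offset=${offset}`, {
      headers: { 'x-api-key': apiKey }
    })
    const { data, total } = await res.json()
    for (const session of data) console.log(session.id)
    if (offset + pageSize >= total) break
  }
}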

View File

@@ -1,965 +0,0 @@
import express from 'express'
import { agentHandlers, messageHandlers, sessionHandlers } from './handlers'
import { checkAgentExists, handleValidationErrors } from './middleware'
import {
validateAgent,
validateAgentId,
validateAgentReplace,
validateAgentUpdate,
validatePagination,
validateSession,
validateSessionId,
validateSessionMessage,
validateSessionMessageId,
validateSessionReplace,
validateSessionUpdate
} from './validators'
// Create main agents router
const agentsRouter = express.Router()
/**
* @swagger
* components:
* schemas:
* PermissionMode:
* type: string
* enum: [default, acceptEdits, bypassPermissions, plan]
* description: Permission mode for agent operations
*
* AgentType:
* type: string
* enum: [claude-code]
* description: Type of agent
*
* AgentConfiguration:
* type: object
* properties:
* permission_mode:
* $ref: '#/components/schemas/PermissionMode'
* default: default
* max_turns:
* type: integer
* default: 10
* description: Maximum number of interaction turns
* additionalProperties: true
*
* AgentBase:
* type: object
* properties:
* name:
* type: string
* description: Agent name
* description:
* type: string
* description: Agent description
* accessible_paths:
* type: array
* items:
* type: string
* description: Array of directory paths the agent can access
* instructions:
* type: string
* description: System prompt/instructions
* model:
* type: string
* description: Main model ID
* plan_model:
* type: string
* description: Optional planning model ID
* small_model:
* type: string
* description: Optional small/fast model ID
* mcps:
* type: array
* items:
* type: string
* description: Array of MCP tool IDs
* allowed_tools:
* type: array
* items:
* type: string
* description: Array of allowed tool IDs (whitelist)
* configuration:
* $ref: '#/components/schemas/AgentConfiguration'
* required:
* - model
* - accessible_paths
*
* AgentEntity:
* allOf:
* - $ref: '#/components/schemas/AgentBase'
* - type: object
* properties:
* id:
* type: string
* description: Unique agent identifier
* type:
* $ref: '#/components/schemas/AgentType'
* created_at:
* type: string
* format: date-time
* description: ISO timestamp of creation
* updated_at:
* type: string
* format: date-time
* description: ISO timestamp of last update
* required:
* - id
* - type
* - created_at
* - updated_at
* CreateAgentRequest:
* allOf:
* - $ref: '#/components/schemas/AgentBase'
* - type: object
* properties:
* type:
* $ref: '#/components/schemas/AgentType'
* name:
* type: string
* minLength: 1
* description: Agent name (required)
* model:
* type: string
* minLength: 1
* description: Main model ID (required)
* required:
* - type
* - name
* - model
*
* UpdateAgentRequest:
* type: object
* properties:
* name:
* type: string
* description: Agent name
* description:
* type: string
* description: Agent description
* accessible_paths:
* type: array
* items:
* type: string
* description: Array of directory paths the agent can access
* instructions:
* type: string
* description: System prompt/instructions
* model:
* type: string
* description: Main model ID
* plan_model:
* type: string
* description: Optional planning model ID
* small_model:
* type: string
* description: Optional small/fast model ID
* mcps:
* type: array
* items:
* type: string
* description: Array of MCP tool IDs
* allowed_tools:
* type: array
* items:
* type: string
* description: Array of allowed tool IDs (whitelist)
* configuration:
* $ref: '#/components/schemas/AgentConfiguration'
* description: Partial update - all fields are optional
*
* ReplaceAgentRequest:
* $ref: '#/components/schemas/AgentBase'
*
* SessionEntity:
* allOf:
* - $ref: '#/components/schemas/AgentBase'
* - type: object
* properties:
* id:
* type: string
* description: Unique session identifier
* agent_id:
* type: string
* description: Primary agent ID for the session
* agent_type:
* $ref: '#/components/schemas/AgentType'
* created_at:
* type: string
* format: date-time
* description: ISO timestamp of creation
* updated_at:
* type: string
* format: date-time
* description: ISO timestamp of last update
* required:
* - id
* - agent_id
* - agent_type
* - created_at
* - updated_at
*
* CreateSessionRequest:
* allOf:
* - $ref: '#/components/schemas/AgentBase'
* - type: object
* properties:
* model:
* type: string
* minLength: 1
* description: Main model ID (required)
* required:
* - model
*
* UpdateSessionRequest:
* type: object
* properties:
* name:
* type: string
* description: Session name
* description:
* type: string
* description: Session description
* accessible_paths:
* type: array
* items:
* type: string
* description: Array of directory paths the agent can access
* instructions:
* type: string
* description: System prompt/instructions
* model:
* type: string
* description: Main model ID
* plan_model:
* type: string
* description: Optional planning model ID
* small_model:
* type: string
* description: Optional small/fast model ID
* mcps:
* type: array
* items:
* type: string
* description: Array of MCP tool IDs
* allowed_tools:
* type: array
* items:
* type: string
* description: Array of allowed tool IDs (whitelist)
* configuration:
* $ref: '#/components/schemas/AgentConfiguration'
* description: Partial update - all fields are optional
*
* ReplaceSessionRequest:
* allOf:
* - $ref: '#/components/schemas/AgentBase'
* - type: object
* properties:
* model:
* type: string
* minLength: 1
* description: Main model ID (required)
* required:
* - model
*
* CreateSessionMessageRequest:
* type: object
* properties:
* content:
* type: string
* minLength: 1
* description: Message content
* required:
* - content
*
* PaginationQuery:
* type: object
* properties:
* limit:
* type: integer
* minimum: 1
* maximum: 100
* default: 20
* description: Number of items to return
* offset:
* type: integer
* minimum: 0
* default: 0
* description: Number of items to skip
* status:
* type: string
* enum: [idle, running, completed, failed, stopped]
* description: Filter by session status
*
* ListAgentsResponse:
* type: object
* properties:
* agents:
* type: array
* items:
* $ref: '#/components/schemas/AgentEntity'
* total:
* type: integer
* description: Total number of agents
* limit:
* type: integer
* description: Number of items returned
* offset:
* type: integer
* description: Number of items skipped
* required:
* - agents
* - total
* - limit
* - offset
*
* ListSessionsResponse:
* type: object
* properties:
* sessions:
* type: array
* items:
* $ref: '#/components/schemas/SessionEntity'
* total:
* type: integer
* description: Total number of sessions
* limit:
* type: integer
* description: Number of items returned
* offset:
* type: integer
* description: Number of items skipped
* required:
* - sessions
* - total
* - limit
* - offset
*
* ErrorResponse:
* type: object
* properties:
* error:
* type: object
* properties:
* message:
* type: string
* description: Error message
* type:
* type: string
* description: Error type
* code:
* type: string
* description: Error code
* required:
* - message
* - type
* - code
* required:
* - error
*/
/**
* @swagger
* /agents:
* post:
* summary: Create a new agent
* tags: [Agents]
* requestBody:
* required: true
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/CreateAgentRequest'
* responses:
* 201:
* description: Agent created successfully
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/AgentEntity'
* 400:
* description: Invalid request body
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/ErrorResponse'
*/
// Agent CRUD routes
agentsRouter.post('/', validateAgent, handleValidationErrors, agentHandlers.createAgent)
/**
* @swagger
* /agents:
* get:
* summary: List all agents with pagination
* tags: [Agents]
* parameters:
* - in: query
* name: limit
* schema:
* type: integer
* minimum: 1
* maximum: 100
* default: 20
* description: Number of agents to return
* - in: query
* name: offset
* schema:
* type: integer
* minimum: 0
* default: 0
* description: Number of agents to skip
* - in: query
* name: status
* schema:
* type: string
* enum: [idle, running, completed, failed, stopped]
* description: Filter by agent status
* responses:
* 200:
* description: List of agents
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/ListAgentsResponse'
*/
agentsRouter.get('/', validatePagination, handleValidationErrors, agentHandlers.listAgents)
/**
* @swagger
* /agents/{agentId}:
* get:
* summary: Get agent by ID
* tags: [Agents]
* parameters:
* - in: path
* name: agentId
* required: true
* schema:
* type: string
* description: Agent ID
* responses:
* 200:
* description: Agent details
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/AgentEntity'
* 404:
* description: Agent not found
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/ErrorResponse'
*/
agentsRouter.get('/:agentId', validateAgentId, handleValidationErrors, agentHandlers.getAgent)
/**
* @swagger
* /agents/{agentId}:
* put:
* summary: Replace agent (full update)
* tags: [Agents]
* parameters:
* - in: path
* name: agentId
* required: true
* schema:
* type: string
* description: Agent ID
* requestBody:
* required: true
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/ReplaceAgentRequest'
* responses:
* 200:
* description: Agent updated successfully
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/AgentEntity'
* 400:
* description: Invalid request body
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/ErrorResponse'
* 404:
* description: Agent not found
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/ErrorResponse'
*/
agentsRouter.put('/:agentId', validateAgentId, validateAgentReplace, handleValidationErrors, agentHandlers.updateAgent)
/**
* @swagger
* /agents/{agentId}:
* patch:
* summary: Update agent (partial update)
* tags: [Agents]
* parameters:
* - in: path
* name: agentId
* required: true
* schema:
* type: string
* description: Agent ID
* requestBody:
* required: true
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/UpdateAgentRequest'
* responses:
* 200:
* description: Agent updated successfully
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/AgentEntity'
* 400:
* description: Invalid request body
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/ErrorResponse'
* 404:
* description: Agent not found
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/ErrorResponse'
*/
agentsRouter.patch('/:agentId', validateAgentId, validateAgentUpdate, handleValidationErrors, agentHandlers.patchAgent)
/**
* @swagger
* /agents/{agentId}:
* delete:
* summary: Delete agent
* tags: [Agents]
* parameters:
* - in: path
* name: agentId
* required: true
* schema:
* type: string
* description: Agent ID
* responses:
* 204:
* description: Agent deleted successfully
* 404:
* description: Agent not found
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/ErrorResponse'
*/
agentsRouter.delete('/:agentId', validateAgentId, handleValidationErrors, agentHandlers.deleteAgent)
// Create sessions router with agent context
const createSessionsRouter = (): express.Router => {
const sessionsRouter = express.Router({ mergeParams: true })
// Session CRUD routes (nested under agent)
/**
* @swagger
* /agents/{agentId}/sessions:
* post:
* summary: Create a new session for an agent
* tags: [Sessions]
* parameters:
* - in: path
* name: agentId
* required: true
* schema:
* type: string
* description: Agent ID
* requestBody:
* required: true
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/CreateSessionRequest'
* responses:
* 201:
* description: Session created successfully
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/SessionEntity'
* 400:
* description: Invalid request body
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/ErrorResponse'
* 404:
* description: Agent not found
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/ErrorResponse'
*/
sessionsRouter.post('/', validateSession, handleValidationErrors, sessionHandlers.createSession)
/**
* @swagger
* /agents/{agentId}/sessions:
* get:
* summary: List sessions for an agent
* tags: [Sessions]
* parameters:
* - in: path
* name: agentId
* required: true
* schema:
* type: string
* description: Agent ID
* - in: query
* name: limit
* schema:
* type: integer
* minimum: 1
* maximum: 100
* default: 20
* description: Number of sessions to return
* - in: query
* name: offset
* schema:
* type: integer
* minimum: 0
* default: 0
* description: Number of sessions to skip
* - in: query
* name: status
* schema:
* type: string
* enum: [idle, running, completed, failed, stopped]
* description: Filter by session status
* responses:
* 200:
* description: List of sessions
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/ListSessionsResponse'
* 404:
* description: Agent not found
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/ErrorResponse'
*/
sessionsRouter.get('/', validatePagination, handleValidationErrors, sessionHandlers.listSessions)
/**
* @swagger
* /agents/{agentId}/sessions/{sessionId}:
* get:
* summary: Get session by ID
* tags: [Sessions]
* parameters:
* - in: path
* name: agentId
* required: true
* schema:
* type: string
* description: Agent ID
* - in: path
* name: sessionId
* required: true
* schema:
* type: string
* description: Session ID
* responses:
* 200:
* description: Session details
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/SessionEntity'
* 404:
* description: Agent or session not found
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/ErrorResponse'
*/
sessionsRouter.get('/:sessionId', validateSessionId, handleValidationErrors, sessionHandlers.getSession)
/**
* @swagger
* /agents/{agentId}/sessions/{sessionId}:
* put:
* summary: Replace session (full update)
* tags: [Sessions]
* parameters:
* - in: path
* name: agentId
* required: true
* schema:
* type: string
* description: Agent ID
* - in: path
* name: sessionId
* required: true
* schema:
* type: string
* description: Session ID
* requestBody:
* required: true
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/ReplaceSessionRequest'
* responses:
* 200:
* description: Session updated successfully
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/SessionEntity'
* 400:
* description: Invalid request body
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/ErrorResponse'
* 404:
* description: Agent or session not found
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/ErrorResponse'
*/
sessionsRouter.put(
'/:sessionId',
validateSessionId,
validateSessionReplace,
handleValidationErrors,
sessionHandlers.updateSession
)
/**
* @swagger
* /agents/{agentId}/sessions/{sessionId}:
* patch:
* summary: Update session (partial update)
* tags: [Sessions]
* parameters:
* - in: path
* name: agentId
* required: true
* schema:
* type: string
* description: Agent ID
* - in: path
* name: sessionId
* required: true
* schema:
* type: string
* description: Session ID
* requestBody:
* required: true
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/UpdateSessionRequest'
* responses:
* 200:
* description: Session updated successfully
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/SessionEntity'
* 400:
* description: Invalid request body
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/ErrorResponse'
* 404:
* description: Agent or session not found
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/ErrorResponse'
*/
sessionsRouter.patch(
'/:sessionId',
validateSessionId,
validateSessionUpdate,
handleValidationErrors,
sessionHandlers.patchSession
)
/**
* @swagger
* /agents/{agentId}/sessions/{sessionId}:
* delete:
* summary: Delete session
* tags: [Sessions]
* parameters:
* - in: path
* name: agentId
* required: true
* schema:
* type: string
* description: Agent ID
* - in: path
* name: sessionId
* required: true
* schema:
* type: string
* description: Session ID
* responses:
* 204:
* description: Session deleted successfully
* 404:
* description: Agent or session not found
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/ErrorResponse'
*/
sessionsRouter.delete('/:sessionId', validateSessionId, handleValidationErrors, sessionHandlers.deleteSession)
return sessionsRouter
}
// Create messages router with agent and session context
const createMessagesRouter = (): express.Router => {
const messagesRouter = express.Router({ mergeParams: true })
// Message CRUD routes (nested under agent/session)
/**
* @swagger
* /agents/{agentId}/sessions/{sessionId}/messages:
* post:
* summary: Create a new message in a session
* tags: [Messages]
* parameters:
* - in: path
* name: agentId
* required: true
* schema:
* type: string
* description: Agent ID
* - in: path
* name: sessionId
* required: true
* schema:
* type: string
* description: Session ID
* requestBody:
* required: true
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/CreateSessionMessageRequest'
* responses:
* 201:
* description: Message created successfully
* content:
* application/json:
* schema:
* type: object
* properties:
* id:
* type: number
* description: Message ID
* session_id:
* type: string
* description: Session ID
* role:
* type: string
* enum: [assistant, user, system, tool]
* description: Message role
* content:
* type: object
* description: Message content (AI SDK format)
* agent_session_id:
* type: string
* description: Agent session ID for resuming
* metadata:
* type: object
* description: Additional metadata
* created_at:
* type: string
* format: date-time
* updated_at:
* type: string
* format: date-time
* 400:
* description: Invalid request body
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/ErrorResponse'
* 404:
* description: Agent or session not found
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/ErrorResponse'
*/
messagesRouter.post('/', validateSessionMessage, handleValidationErrors, messageHandlers.createMessage)
/**
* @swagger
* /agents/{agentId}/sessions/{sessionId}/messages/{messageId}:
* delete:
* summary: Delete a message from a session
* tags: [Messages]
* parameters:
* - in: path
* name: agentId
* required: true
* schema:
* type: string
* description: Agent ID
* - in: path
* name: sessionId
* required: true
* schema:
* type: string
* description: Session ID
* - in: path
* name: messageId
* required: true
* schema:
* type: integer
* description: Message ID
* responses:
* 204:
* description: Message deleted successfully
* 404:
* description: Agent, session, or message not found
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/ErrorResponse'
*/
messagesRouter.delete('/:messageId', validateSessionMessageId, handleValidationErrors, messageHandlers.deleteMessage)
return messagesRouter
}
// Mount nested resources with clear hierarchy
const sessionsRouter = createSessionsRouter()
const messagesRouter = createMessagesRouter()
// Mount sessions under specific agent
agentsRouter.use('/:agentId/sessions', validateAgentId, checkAgentExists, handleValidationErrors, sessionsRouter)
// Mount messages under specific agent/session
agentsRouter.use(
'/:agentId/sessions/:sessionId/messages',
validateAgentId,
validateSessionId,
handleValidationErrors,
messagesRouter
)
// Export main router and convenience router
export const agentsRoutes = agentsRouter
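Note on the nested mounting above: the session and message handlers can only read the parent's path parameters (agentId, sessionId) because their routers are created with mergeParams: true. A minimal standalone sketch of that mechanism, with illustrative names not taken from this codebase:
import express from 'express'

const app = express()
const parent = express.Router()
// Without mergeParams, req.params.agentId would be undefined inside this router
const child = express.Router({ mergeParams: true })

child.get('/', (req, res) => {
  // agentId is captured by the parent mount path '/:agentId/sessions'
  res.json({ agentId: req.params.agentId })
})

parent.use('/:agentId/sessions', child)
app.use('/agents', parent)

// GET /agents/a1/sessions -> {"agentId":"a1"}
app.listen(3000)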

@@ -1,44 +0,0 @@
import { Request, Response } from 'express'
import { agentService } from '../../../../services/agents'
import { loggerService } from '../../../../services/LoggerService'
const logger = loggerService.withContext('ApiServerMiddleware')
// Since Zod validators handle their own errors, this is now a pass-through
export const handleValidationErrors = (_req: Request, _res: Response, next: any): void => {
next()
}
// Middleware to check if agent exists
export const checkAgentExists = async (req: Request, res: Response, next: any): Promise<void> => {
try {
const { agentId } = req.params
const exists = await agentService.agentExists(agentId)
if (!exists) {
res.status(404).json({
error: {
message: 'Agent not found',
type: 'not_found',
code: 'agent_not_found'
}
})
return
}
next()
} catch (error) {
logger.error('Error checking agent existence', {
error: error as Error,
agentId: req.params.agentId
})
res.status(500).json({
error: {
message: 'Failed to validate agent',
type: 'internal_error',
code: 'agent_validation_failed'
}
})
}
}

@@ -1 +0,0 @@
export * from './common'

@@ -1,24 +0,0 @@
import {
AgentIdParamSchema,
CreateAgentRequestSchema,
ReplaceAgentRequestSchema,
UpdateAgentRequestSchema
} from '@types'
import { createZodValidator } from './zodValidator'
export const validateAgent = createZodValidator({
body: CreateAgentRequestSchema
})
export const validateAgentReplace = createZodValidator({
body: ReplaceAgentRequestSchema
})
export const validateAgentUpdate = createZodValidator({
body: UpdateAgentRequestSchema
})
export const validateAgentId = createZodValidator({
params: AgentIdParamSchema
})

@@ -1,7 +0,0 @@
import { PaginationQuerySchema } from '@types'
import { createZodValidator } from './zodValidator'
export const validatePagination = createZodValidator({
query: PaginationQuerySchema
})

@@ -1,4 +0,0 @@
export * from './agents'
export * from './common'
export * from './messages'
export * from './sessions'

@@ -1,11 +0,0 @@
import { CreateSessionMessageRequestSchema, SessionMessageIdParamSchema } from '@types'
import { createZodValidator } from './zodValidator'
export const validateSessionMessage = createZodValidator({
body: CreateSessionMessageRequestSchema
})
export const validateSessionMessageId = createZodValidator({
params: SessionMessageIdParamSchema
})

@@ -1,24 +0,0 @@
import {
CreateSessionRequestSchema,
ReplaceSessionRequestSchema,
SessionIdParamSchema,
UpdateSessionRequestSchema
} from '@types'
import { createZodValidator } from './zodValidator'
export const validateSession = createZodValidator({
body: CreateSessionRequestSchema
})
export const validateSessionReplace = createZodValidator({
body: ReplaceSessionRequestSchema
})
export const validateSessionUpdate = createZodValidator({
body: UpdateSessionRequestSchema
})
export const validateSessionId = createZodValidator({
params: SessionIdParamSchema
})

@@ -1,68 +0,0 @@
import { NextFunction, Request, Response } from 'express'
import { ZodError, ZodType } from 'zod'
export interface ValidationRequest extends Request {
validatedBody?: any
validatedParams?: any
validatedQuery?: any
}
export interface ZodValidationConfig {
body?: ZodType
params?: ZodType
query?: ZodType
}
export const createZodValidator = (config: ZodValidationConfig) => {
return (req: ValidationRequest, res: Response, next: NextFunction): void => {
try {
if (config.body && req.body) {
req.validatedBody = config.body.parse(req.body)
}
if (config.params && req.params) {
req.validatedParams = config.params.parse(req.params)
}
if (config.query && req.query) {
req.validatedQuery = config.query.parse(req.query)
}
next()
} catch (error) {
if (error instanceof ZodError) {
const validationErrors = error.issues.map((err) => ({
type: 'field',
value: err.input,
msg: err.message,
path: err.path.map((p) => String(p)).join('.'),
location: getLocationFromPath(err.path, config)
}))
res.status(400).json({
error: {
message: 'Validation failed',
type: 'validation_error',
details: validationErrors
}
})
return
}
res.status(500).json({
error: {
message: 'Internal validation error',
type: 'internal_error',
code: 'validation_processing_failed'
}
})
}
}
}
function getLocationFromPath(path: (string | number | symbol)[], config: ZodValidationConfig): string {
if (config.body && path.length > 0) return 'body'
if (config.params && path.length > 0) return 'params'
if (config.query && path.length > 0) return 'query'
return 'unknown'
}
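A route handler sitting behind createZodValidator would read the parsed values from the augmented request rather than re-parsing req.body. A hedged sketch, using an illustrative schema (CreateThingSchema is not from this codebase) and assuming express.json() is mounted upstream:
import express from 'express'
import { z } from 'zod'
// createZodValidator and ValidationRequest as defined above

const CreateThingSchema = z.object({ name: z.string().min(1) })

const router = express.Router()
router.post('/things', createZodValidator({ body: CreateThingSchema }), (req: ValidationRequest, res) => {
  // validatedBody holds the Zod-parsed (and coerced) payload; req.body stays raw
  const { name } = req.validatedBody as z.infer<typeof CreateThingSchema>
  res.status(201).json({ name })
})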

@@ -1,105 +1,15 @@
import express, { Request, Response } from 'express'
import OpenAI from 'openai'
import { ChatCompletionCreateParams } from 'openai/resources'
import { loggerService } from '../../services/LoggerService'
import {
ChatCompletionModelError,
chatCompletionService,
ChatCompletionValidationError
} from '../services/chat-completion'
import { chatCompletionService } from '../services/chat-completion'
import { validateModelId } from '../utils'
const logger = loggerService.withContext('ApiServerChatRoutes')
const router = express.Router()
interface ErrorResponseBody {
error: {
message: string
type: string
code: string
}
}
const mapChatCompletionError = (error: unknown): { status: number; body: ErrorResponseBody } => {
if (error instanceof ChatCompletionValidationError) {
logger.warn('Chat completion validation error', {
errors: error.errors
})
return {
status: 400,
body: {
error: {
message: error.errors.join('; '),
type: 'invalid_request_error',
code: 'validation_failed'
}
}
}
}
if (error instanceof ChatCompletionModelError) {
logger.warn('Chat completion model error', error.error)
return {
status: 400,
body: {
error: {
message: error.error.message,
type: 'invalid_request_error',
code: error.error.code
}
}
}
}
if (error instanceof Error) {
let statusCode = 500
let errorType = 'server_error'
let errorCode = 'internal_error'
if (error.message.includes('API key') || error.message.includes('authentication')) {
statusCode = 401
errorType = 'authentication_error'
errorCode = 'invalid_api_key'
} else if (error.message.includes('rate limit') || error.message.includes('quota')) {
statusCode = 429
errorType = 'rate_limit_error'
errorCode = 'rate_limit_exceeded'
} else if (error.message.includes('timeout') || error.message.includes('connection')) {
statusCode = 502
errorType = 'server_error'
errorCode = 'upstream_error'
}
logger.error('Chat completion error', { error })
return {
status: statusCode,
body: {
error: {
message: error.message || 'Internal server error',
type: errorType,
code: errorCode
}
}
}
}
logger.error('Chat completion unknown error', { error })
return {
status: 500,
body: {
error: {
message: 'Internal server error',
type: 'server_error',
code: 'internal_error'
}
}
}
}
/**
* @swagger
* /v1/chat/completions:
@@ -150,7 +60,7 @@ const mapChatCompletionError = (error: unknown): { status: number; body: ErrorRe
* type: integer
* total_tokens:
* type: integer
* text/event-stream:
* text/plain:
* schema:
* type: string
* description: Server-sent events stream (when stream=true)
@@ -193,31 +103,72 @@ router.post('/completions', async (req: Request, res: Response) => {
})
}
logger.debug('Chat completion request', {
logger.info('Chat completion request:', {
model: request.model,
messageCount: request.messages?.length || 0,
stream: request.stream,
temperature: request.temperature
})
const isStreaming = !!request.stream
// Validate request
const validation = chatCompletionService.validateRequest(request)
if (!validation.isValid) {
return res.status(400).json({
error: {
message: validation.errors.join('; '),
type: 'invalid_request_error',
code: 'validation_failed'
}
})
}
if (isStreaming) {
const { stream } = await chatCompletionService.processStreamingCompletion(request)
// Validate model ID and get provider
const modelValidation = await validateModelId(request.model)
if (!modelValidation.valid) {
const error = modelValidation.error!
logger.warn(`Model validation failed for '${request.model}':`, error)
return res.status(400).json({
error: {
message: error.message,
type: 'invalid_request_error',
code: error.code
}
})
}
res.setHeader('Content-Type', 'text/event-stream; charset=utf-8')
res.setHeader('Cache-Control', 'no-cache, no-transform')
const provider = modelValidation.provider!
const modelId = modelValidation.modelId!
logger.info('Model validation successful:', {
provider: provider.id,
providerType: provider.type,
modelId: modelId,
fullModelId: request.model
})
// Create OpenAI client
const client = new OpenAI({
baseURL: provider.apiHost,
apiKey: provider.apiKey
})
request.model = modelId
// Handle streaming
if (request.stream) {
const streamResponse = await client.chat.completions.create(request)
res.setHeader('Content-Type', 'text/plain; charset=utf-8')
res.setHeader('Cache-Control', 'no-cache')
res.setHeader('Connection', 'keep-alive')
res.setHeader('X-Accel-Buffering', 'no')
res.flushHeaders()
try {
for await (const chunk of stream) {
for await (const chunk of streamResponse as any) {
res.write(`data: ${JSON.stringify(chunk)}\n\n`)
}
res.write('data: [DONE]\n\n')
res.end()
} catch (streamError: any) {
logger.error('Stream error', { error: streamError })
logger.error('Stream error:', streamError)
res.write(
`data: ${JSON.stringify({
error: {
@@ -227,17 +178,47 @@ router.post('/completions', async (req: Request, res: Response) => {
}
})}\n\n`
)
} finally {
res.end()
}
return
}
const { response } = await chatCompletionService.processCompletion(request)
// Handle non-streaming
const response = await client.chat.completions.create(request)
return res.json(response)
} catch (error: unknown) {
const { status, body } = mapChatCompletionError(error)
return res.status(status).json(body)
} catch (error: any) {
logger.error('Chat completion error:', error)
let statusCode = 500
let errorType = 'server_error'
let errorCode = 'internal_error'
let errorMessage = 'Internal server error'
if (error instanceof Error) {
errorMessage = error.message
if (error.message.includes('API key') || error.message.includes('authentication')) {
statusCode = 401
errorType = 'authentication_error'
errorCode = 'invalid_api_key'
} else if (error.message.includes('rate limit') || error.message.includes('quota')) {
statusCode = 429
errorType = 'rate_limit_error'
errorCode = 'rate_limit_exceeded'
} else if (error.message.includes('timeout') || error.message.includes('connection')) {
statusCode = 502
errorType = 'server_error'
errorCode = 'upstream_error'
}
}
return res.status(statusCode).json({
error: {
message: errorMessage,
type: errorType,
code: errorCode
}
})
}
})
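For reference, the streaming branch above frames every chunk as a "data: <json>" line followed by a blank line and closes with "data: [DONE]", so a client has to split the byte stream on blank lines rather than trusting raw reads. A minimal consumer sketch; the host, port, key, and model id are placeholders, not values from this project:
async function consumeCompletionStream(): Promise<void> {
  const res = await fetch('http://localhost:23333/v1/chat/completions', { // hypothetical address
    method: 'POST',
    headers: { 'Content-Type': 'application/json', Authorization: 'Bearer <api-key>' },
    body: JSON.stringify({
      model: 'my-openai:gpt-4o-mini', // "provider:model_id" format expected by validateModelId
      messages: [{ role: 'user', content: 'Hello' }],
      stream: true
    })
  })
  const reader = res.body!.getReader()
  const decoder = new TextDecoder()
  let buffer = ''
  for (;;) {
    const { done, value } = await reader.read()
    if (done) break
    buffer += decoder.decode(value, { stream: true })
    // Events are separated by a blank line; each carries a single "data: " payload
    const events = buffer.split('\n\n')
    buffer = events.pop() ?? ''
    for (const event of events) {
      const payload = event.replace(/^data: /, '')
      if (payload === '[DONE]') return
      const chunk = JSON.parse(payload)
      process.stdout.write(chunk.choices?.[0]?.delta?.content ?? '')
    }
  }
}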

@@ -43,14 +43,14 @@ const router = express.Router()
*/
router.get('/', async (req: Request, res: Response) => {
try {
logger.debug('Listing MCP servers')
logger.info('Get all MCP servers request received')
const servers = await mcpApiService.getAllServers(req)
return res.json({
success: true,
data: servers
})
} catch (error: any) {
logger.error('Error fetching MCP servers', { error })
logger.error('Error fetching MCP servers:', error)
return res.status(503).json({
success: false,
error: {
@@ -103,12 +103,10 @@ router.get('/', async (req: Request, res: Response) => {
*/
router.get('/:server_id', async (req: Request, res: Response) => {
try {
logger.debug('Get MCP server info request received', {
serverId: req.params.server_id
})
logger.info('Get MCP server info request received')
const server = await mcpApiService.getServerInfo(req.params.server_id)
if (!server) {
logger.warn('MCP server not found', { serverId: req.params.server_id })
logger.warn('MCP server not found')
return res.status(404).json({
success: false,
error: {
@@ -123,7 +121,7 @@ router.get('/:server_id', async (req: Request, res: Response) => {
data: server
})
} catch (error: any) {
logger.error('Error fetching MCP server info', { error, serverId: req.params.server_id })
logger.error('Error fetching MCP server info:', error)
return res.status(503).json({
success: false,
error: {
@@ -139,7 +137,7 @@ router.get('/:server_id', async (req: Request, res: Response) => {
router.all('/:server_id/mcp', async (req: Request, res: Response) => {
const server = await mcpApiService.getServerById(req.params.server_id)
if (!server) {
logger.warn('MCP server not found', { serverId: req.params.server_id })
logger.warn('MCP server not found')
return res.status(404).json({
success: false,
error: {

@@ -1,403 +0,0 @@
import { MessageCreateParams } from '@anthropic-ai/sdk/resources'
import { loggerService } from '@logger'
import { Provider } from '@types'
import express, { Request, Response } from 'express'
import { messagesService } from '../services/messages'
import { getProviderById, validateModelId } from '../utils'
const logger = loggerService.withContext('ApiServerMessagesRoutes')
const router = express.Router()
const providerRouter = express.Router({ mergeParams: true })
// Helper function for basic request validation
async function validateRequestBody(req: Request): Promise<{ valid: boolean; error?: any }> {
const request: MessageCreateParams = req.body
if (!request) {
return {
valid: false,
error: {
type: 'error',
error: {
type: 'invalid_request_error',
message: 'Request body is required'
}
}
}
}
return { valid: true }
}
interface HandleMessageProcessingOptions {
req: Request
res: Response
provider: Provider
request: MessageCreateParams
modelId?: string
}
async function handleMessageProcessing({
req,
res,
provider,
request,
modelId
}: HandleMessageProcessingOptions): Promise<void> {
try {
const validation = messagesService.validateRequest(request)
if (!validation.isValid) {
res.status(400).json({
type: 'error',
error: {
type: 'invalid_request_error',
message: validation.errors.join('; ')
}
})
return
}
const extraHeaders = messagesService.prepareHeaders(req.headers)
const { client, anthropicRequest } = await messagesService.processMessage({
provider,
request,
extraHeaders,
modelId
})
if (request.stream) {
await messagesService.handleStreaming(client, anthropicRequest, { response: res }, provider)
return
}
const response = await client.messages.create(anthropicRequest)
res.json(response)
} catch (error: any) {
logger.error('Message processing error', { error })
const { statusCode, errorResponse } = messagesService.transformError(error)
res.status(statusCode).json(errorResponse)
}
}
/**
* @swagger
* /v1/messages:
* post:
* summary: Create message
* description: Create a message response using Anthropic's API format
* tags: [Messages]
* requestBody:
* required: true
* content:
* application/json:
* schema:
* type: object
* required:
* - model
* - max_tokens
* - messages
* properties:
* model:
* type: string
* description: Model ID in format "provider:model_id"
* example: "my-anthropic:claude-3-5-sonnet-20241022"
* max_tokens:
* type: integer
* minimum: 1
* description: Maximum number of tokens to generate
* example: 1024
* messages:
* type: array
* items:
* type: object
* properties:
* role:
* type: string
* enum: [user, assistant]
* content:
* oneOf:
* - type: string
* - type: array
* system:
* type: string
* description: System message
* temperature:
* type: number
* minimum: 0
* maximum: 1
* description: Sampling temperature
* top_p:
* type: number
* minimum: 0
* maximum: 1
* description: Nucleus sampling
* top_k:
* type: integer
* minimum: 0
* description: Top-k sampling
* stream:
* type: boolean
* description: Whether to stream the response
* tools:
* type: array
* description: Available tools for the model
* responses:
* 200:
* description: Message response
* content:
* application/json:
* schema:
* type: object
* properties:
* id:
* type: string
* type:
* type: string
* example: message
* role:
* type: string
* example: assistant
* content:
* type: array
* items:
* type: object
* model:
* type: string
* stop_reason:
* type: string
* stop_sequence:
* type: string
* usage:
* type: object
* properties:
* input_tokens:
* type: integer
* output_tokens:
* type: integer
* text/event-stream:
* schema:
* type: string
* description: Server-sent events stream (when stream=true)
* 400:
* description: Bad request
* content:
* application/json:
* schema:
* type: object
* properties:
* type:
* type: string
* example: error
* error:
* type: object
* properties:
* type:
* type: string
* message:
* type: string
* 401:
* description: Unauthorized
* 429:
* description: Rate limit exceeded
* 500:
* description: Internal server error
*/
router.post('/', async (req: Request, res: Response) => {
// Validate request body
const bodyValidation = await validateRequestBody(req)
if (!bodyValidation.valid) {
return res.status(400).json(bodyValidation.error)
}
try {
const request: MessageCreateParams = req.body
// Validate model ID and get provider
const modelValidation = await validateModelId(request.model)
if (!modelValidation.valid) {
const error = modelValidation.error!
logger.warn('Model validation failed', {
model: request.model,
error
})
return res.status(400).json({
type: 'error',
error: {
type: 'invalid_request_error',
message: error.message
}
})
}
const provider = modelValidation.provider!
const modelId = modelValidation.modelId!
return handleMessageProcessing({ req, res, provider, request, modelId })
} catch (error: any) {
logger.error('Message processing error', { error })
const { statusCode, errorResponse } = messagesService.transformError(error)
return res.status(statusCode).json(errorResponse)
}
})
/**
* @swagger
* /{provider_id}/v1/messages:
* post:
* summary: Create message with provider in path
* description: Create a message response using provider ID from URL path
* tags: [Messages]
* parameters:
* - in: path
* name: provider_id
* required: true
* schema:
* type: string
* description: Provider ID (e.g., "my-anthropic")
* example: "my-anthropic"
* requestBody:
* required: true
* content:
* application/json:
* schema:
* type: object
* required:
* - model
* - max_tokens
* - messages
* properties:
* model:
* type: string
* description: Model ID without provider prefix
* example: "claude-3-5-sonnet-20241022"
* max_tokens:
* type: integer
* minimum: 1
* description: Maximum number of tokens to generate
* example: 1024
* messages:
* type: array
* items:
* type: object
* properties:
* role:
* type: string
* enum: [user, assistant]
* content:
* oneOf:
* - type: string
* - type: array
* system:
* type: string
* description: System message
* temperature:
* type: number
* minimum: 0
* maximum: 1
* description: Sampling temperature
* top_p:
* type: number
* minimum: 0
* maximum: 1
* description: Nucleus sampling
* top_k:
* type: integer
* minimum: 0
* description: Top-k sampling
* stream:
* type: boolean
* description: Whether to stream the response
* tools:
* type: array
* description: Available tools for the model
* responses:
* 200:
* description: Message response
* content:
* application/json:
* schema:
* type: object
* properties:
* id:
* type: string
* type:
* type: string
* example: message
* role:
* type: string
* example: assistant
* content:
* type: array
* items:
* type: object
* model:
* type: string
* stop_reason:
* type: string
* stop_sequence:
* type: string
* usage:
* type: object
* properties:
* input_tokens:
* type: integer
* output_tokens:
* type: integer
* text/event-stream:
* schema:
* type: string
* description: Server-sent events stream (when stream=true)
* 400:
* description: Bad request
* 401:
* description: Unauthorized
* 429:
* description: Rate limit exceeded
* 500:
* description: Internal server error
*/
providerRouter.post('/', async (req: Request, res: Response) => {
// Validate request body
const bodyValidation = await validateRequestBody(req)
if (!bodyValidation.valid) {
return res.status(400).json(bodyValidation.error)
}
try {
const providerId = req.params.provider
if (!providerId) {
return res.status(400).json({
type: 'error',
error: {
type: 'invalid_request_error',
message: 'Provider ID is required in URL path'
}
})
}
// Get provider directly by ID from URL path
const provider = await getProviderById(providerId)
if (!provider) {
return res.status(400).json({
type: 'error',
error: {
type: 'invalid_request_error',
message: `Provider '${providerId}' not found or not enabled`
}
})
}
const request: MessageCreateParams = req.body
return handleMessageProcessing({ req, res, provider, request })
} catch (error: any) {
logger.error('Message processing error', { error })
const { statusCode, errorResponse } = messagesService.transformError(error)
return res.status(statusCode).json(errorResponse)
}
})
export { providerRouter as messagesProviderRoutes, router as messagesRoutes }
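The two routers exported above accept the same Anthropic-shaped body but differ in where the provider is named: /v1/messages expects model as "provider:model_id", while /{provider_id}/v1/messages takes the bare model id. A hedged sketch of both calls; the host, port, and provider name are placeholders:
const base = 'http://localhost:23333' // hypothetical server address

async function demo() {
  const body = { max_tokens: 1024, messages: [{ role: 'user', content: 'Hello' }] }

  // Variant 1: provider encoded in the model field, handled by messagesRoutes
  await fetch(`${base}/v1/messages`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ ...body, model: 'my-anthropic:claude-3-5-sonnet-20241022' })
  })

  // Variant 2: provider taken from the URL path, bare model id, handled by messagesProviderRoutes
  await fetch(`${base}/my-anthropic/v1/messages`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ ...body, model: 'claude-3-5-sonnet-20241022' })
  })
}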

@@ -1,124 +1,73 @@
import { ApiModelsFilterSchema, ApiModelsResponse } from '@types'
import express, { Request, Response } from 'express'
import { loggerService } from '../../services/LoggerService'
import { modelsService } from '../services/models'
import { chatCompletionService } from '../services/chat-completion'
const logger = loggerService.withContext('ApiServerModelsRoutes')
const router = express
.Router()
const router = express.Router()
/**
* @swagger
* /v1/models:
* get:
* summary: List available models
* description: Returns a list of available AI models from all configured providers with optional filtering
* tags: [Models]
* parameters:
* - in: query
* name: providerType
* schema:
* type: string
* enum: [openai, openai-response, anthropic, gemini]
* description: Filter models by provider type
* - in: query
* name: offset
* schema:
* type: integer
* minimum: 0
* default: 0
* description: Pagination offset
* - in: query
* name: limit
* schema:
* type: integer
* minimum: 1
* description: Maximum number of models to return
* responses:
* 200:
* description: List of available models
* content:
* application/json:
* schema:
* type: object
* properties:
* object:
* type: string
* example: list
* data:
* type: array
* items:
* $ref: '#/components/schemas/Model'
* total:
* type: integer
* description: Total number of models (when using pagination)
* offset:
* type: integer
* description: Current offset (when using pagination)
* limit:
* type: integer
* description: Current limit (when using pagination)
* 400:
* description: Invalid query parameters
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/Error'
* 503:
* description: Service unavailable
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/Error'
*/
.get('/', async (req: Request, res: Response) => {
try {
logger.debug('Models list request received', { query: req.query })
/**
* @swagger
* /v1/models:
* get:
* summary: List available models
* description: Returns a list of available AI models from all configured providers
* tags: [Models]
* responses:
* 200:
* description: List of available models
* content:
* application/json:
* schema:
* type: object
* properties:
* object:
* type: string
* example: list
* data:
* type: array
* items:
* $ref: '#/components/schemas/Model'
* 503:
* description: Service unavailable
* content:
* application/json:
* schema:
* $ref: '#/components/schemas/Error'
*/
router.get('/', async (_req: Request, res: Response) => {
try {
logger.info('Models list request received')
// Validate query parameters using Zod schema
const filterResult = ApiModelsFilterSchema.safeParse(req.query)
const models = await chatCompletionService.getModels()
if (!filterResult.success) {
logger.warn('Invalid model query parameters', { issues: filterResult.error.issues })
return res.status(400).json({
error: {
message: 'Invalid query parameters',
type: 'invalid_request_error',
code: 'invalid_parameters',
details: filterResult.error.issues.map((issue) => ({
field: issue.path.join('.'),
message: issue.message
}))
}
})
}
const filter = filterResult.data
const response = await modelsService.getModels(filter)
if (response.data.length === 0) {
logger.warn('No models available from providers', { filter })
}
logger.info('Models response ready', {
filter,
total: response.total,
modelIds: response.data.map((m) => m.id)
})
return res.json(response satisfies ApiModelsResponse)
} catch (error: any) {
logger.error('Error fetching models', { error })
return res.status(503).json({
error: {
message: 'Failed to retrieve models from available providers',
type: 'service_unavailable',
code: 'models_unavailable'
}
})
if (models.length === 0) {
logger.warn(
'No models available from providers. This may be because no OpenAI providers are configured or enabled.'
)
}
})
logger.info(`Returning ${models.length} models (OpenAI providers only)`)
logger.debug(
'Model IDs:',
models.map((m) => m.id)
)
return res.json({
object: 'list',
data: models
})
} catch (error: any) {
logger.error('Error fetching models:', error)
return res.status(503).json({
error: {
message: 'Failed to retrieve models from available providers',
type: 'service_unavailable',
code: 'models_unavailable'
}
})
}
})
export { router as modelsRoutes }
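Either side of this diff returns an OpenAI-style list envelope, so a client can discover the "provider:model_id" identifiers it needs for the completion endpoints. A minimal sketch; the address is a placeholder and the model shape is assumed from the swagger annotations above:
interface ApiModel { id: string; object: string; owned_by?: string } // assumed shape

async function listModels(): Promise<string[]> {
  const res = await fetch('http://localhost:23333/v1/models') // hypothetical address
  if (!res.ok) throw new Error(`models request failed: ${res.status}`)
  const body = (await res.json()) as { object: 'list'; data: ApiModel[] }
  // Each id is already in "provider:model_id" form, ready to pass to /v1/chat/completions
  return body.data.map((m) => m.id)
}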

@@ -1,72 +1,44 @@
import { createServer } from 'node:http'
import { loggerService } from '@logger'
import { agentService } from '../services/agents'
import { loggerService } from '../services/LoggerService'
import { app } from './app'
import { config } from './config'
const logger = loggerService.withContext('ApiServer')
const GLOBAL_REQUEST_TIMEOUT_MS = 5 * 60_000
const GLOBAL_HEADERS_TIMEOUT_MS = GLOBAL_REQUEST_TIMEOUT_MS + 5_000
const GLOBAL_KEEPALIVE_TIMEOUT_MS = 60_000
export class ApiServer {
private server: ReturnType<typeof createServer> | null = null
async start(): Promise<void> {
if (this.server && this.server.listening) {
if (this.server) {
logger.warn('Server already running')
return
}
// Clean up any failed server instance
if (this.server && !this.server.listening) {
logger.warn('Cleaning up failed server instance')
this.server = null
}
// Load config
const { port, host } = await config.load()
// Initialize AgentService
logger.info('Initializing AgentService')
await agentService.initialize()
logger.info('AgentService initialized')
const { port, host, apiKey } = await config.load()
// Create server with Express app
this.server = createServer(app)
this.applyServerTimeouts(this.server)
// Start server
return new Promise((resolve, reject) => {
this.server!.listen(port, host, () => {
logger.info('API server started', { host, port })
logger.info(`API Server started at http://${host}:${port}`)
logger.info(`API Key: ${apiKey}`)
resolve()
})
this.server!.on('error', (error) => {
// Clean up the server instance if listen fails
this.server = null
reject(error)
})
this.server!.on('error', reject)
})
}
private applyServerTimeouts(server: ReturnType<typeof createServer>): void {
server.requestTimeout = GLOBAL_REQUEST_TIMEOUT_MS
server.headersTimeout = Math.max(GLOBAL_HEADERS_TIMEOUT_MS, server.requestTimeout + 1_000)
server.keepAliveTimeout = GLOBAL_KEEPALIVE_TIMEOUT_MS
server.setTimeout(0)
}
async stop(): Promise<void> {
if (!this.server) return
return new Promise((resolve) => {
this.server!.close(() => {
logger.info('API server stopped')
logger.info('API Server stopped')
this.server = null
resolve()
})
@@ -84,7 +56,7 @@ export class ApiServer {
const isListening = this.server?.listening || false
const result = hasServer && isListening
logger.debug('isRunning check', { hasServer, isListening, result })
logger.debug('isRunning check:', { hasServer, isListening, result })
return result
}
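The applyServerTimeouts wiring keeps headersTimeout above requestTimeout so the header deadline never preempts an in-flight request, and setTimeout(0) disables the legacy per-socket inactivity timer in favor of those limits. The same configuration in isolation, on a plain Node HTTP server:
import { createServer } from 'node:http'

const server = createServer((_req, res) => res.end('ok'))

const REQUEST_TIMEOUT_MS = 5 * 60_000
server.requestTimeout = REQUEST_TIMEOUT_MS
// Keep the headers deadline past the whole-request deadline, as applyServerTimeouts does
server.headersTimeout = REQUEST_TIMEOUT_MS + 5_000
server.keepAliveTimeout = 60_000
// Disable the per-socket inactivity timeout; the explicit limits above take over
server.setTimeout(0)

server.listen(8080)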

@@ -1,132 +1,83 @@
import { Provider } from '@types'
import OpenAI from 'openai'
import { ChatCompletionCreateParams, ChatCompletionCreateParamsStreaming } from 'openai/resources'
import { ChatCompletionCreateParams } from 'openai/resources'
import { loggerService } from '../../services/LoggerService'
import { ModelValidationError, validateModelId } from '../utils'
import {
getProviderByModel,
getRealProviderModel,
listAllAvailableModels,
OpenAICompatibleModel,
transformModelToOpenAI,
validateProvider
} from '../utils'
const logger = loggerService.withContext('ChatCompletionService')
export interface ModelData extends OpenAICompatibleModel {
provider_id: string
model_id: string
name: string
}
export interface ValidationResult {
isValid: boolean
errors: string[]
}
export class ChatCompletionValidationError extends Error {
constructor(public readonly errors: string[]) {
super(`Request validation failed: ${errors.join('; ')}`)
this.name = 'ChatCompletionValidationError'
}
}
export class ChatCompletionModelError extends Error {
constructor(public readonly error: ModelValidationError) {
super(`Model validation failed: ${error.message}`)
this.name = 'ChatCompletionModelError'
}
}
export type PrepareRequestResult =
| { status: 'validation_error'; errors: string[] }
| { status: 'model_error'; error: ModelValidationError }
| {
status: 'ok'
provider: Provider
modelId: string
client: OpenAI
providerRequest: ChatCompletionCreateParams
}
export class ChatCompletionService {
async resolveProviderContext(
model: string
): Promise<
{ ok: false; error: ModelValidationError } | { ok: true; provider: Provider; modelId: string; client: OpenAI }
> {
const modelValidation = await validateModelId(model)
if (!modelValidation.valid) {
return {
ok: false,
error: modelValidation.error!
}
}
async getModels(): Promise<ModelData[]> {
try {
logger.info('Getting available models from providers')
const provider = modelValidation.provider!
const models = await listAllAvailableModels()
if (provider.type !== 'openai') {
return {
ok: false,
error: {
type: 'unsupported_provider_type',
message: `Provider '${provider.id}' of type '${provider.type}' is not supported for OpenAI chat completions`,
code: 'unsupported_provider_type'
// Use Map to deduplicate models by their full ID (provider:model_id)
const uniqueModels = new Map<string, ModelData>()
for (const model of models) {
const openAIModel = transformModelToOpenAI(model)
const fullModelId = openAIModel.id // This is already in format "provider:model_id"
// Only add if not already present (first occurrence wins)
if (!uniqueModels.has(fullModelId)) {
uniqueModels.set(fullModelId, {
...openAIModel,
provider_id: model.provider,
model_id: model.id,
name: model.name
})
} else {
logger.debug(`Skipping duplicate model: ${fullModelId}`)
}
}
}
const modelId = modelValidation.modelId!
const modelData = Array.from(uniqueModels.values())
const client = new OpenAI({
baseURL: provider.apiHost,
apiKey: provider.apiKey
})
logger.info(`Successfully retrieved ${modelData.length} unique models from ${models.length} total models`)
return {
ok: true,
provider,
modelId,
client
}
}
async prepareRequest(request: ChatCompletionCreateParams, stream: boolean): Promise<PrepareRequestResult> {
const requestValidation = this.validateRequest(request)
if (!requestValidation.isValid) {
return {
status: 'validation_error',
errors: requestValidation.errors
if (models.length > modelData.length) {
logger.debug(`Filtered out ${models.length - modelData.length} duplicate models`)
}
}
const providerContext = await this.resolveProviderContext(request.model!)
if (!providerContext.ok) {
return {
status: 'model_error',
error: providerContext.error
}
}
const { provider, modelId, client } = providerContext
logger.debug('Model validation successful', {
provider: provider.id,
providerType: provider.type,
modelId,
fullModelId: request.model
})
return {
status: 'ok',
provider,
modelId,
client,
providerRequest: stream
? {
...request,
model: modelId,
stream: true as const
}
: {
...request,
model: modelId,
stream: false as const
}
return modelData
} catch (error: any) {
logger.error('Error getting models:', error)
return []
}
}
validateRequest(request: ChatCompletionCreateParams): ValidationResult {
const errors: string[] = []
// Validate model
if (!request.model) {
errors.push('Model is required')
} else if (typeof request.model !== 'string') {
errors.push('Model must be a string')
} else if (!request.model.includes(':')) {
errors.push('Model must be in format "provider:model_id"')
}
// Validate messages
if (!request.messages) {
errors.push('Messages array is required')
@@ -147,6 +98,17 @@ export class ChatCompletionService {
}
// Validate optional parameters
if (request.temperature !== undefined) {
if (typeof request.temperature !== 'number' || request.temperature < 0 || request.temperature > 2) {
errors.push('Temperature must be a number between 0 and 2')
}
}
if (request.max_tokens !== undefined) {
if (typeof request.max_tokens !== 'number' || request.max_tokens < 1) {
errors.push('max_tokens must be a positive number')
}
}
return {
isValid: errors.length === 0,
@@ -154,30 +116,48 @@ export class ChatCompletionService {
}
}
async processCompletion(request: ChatCompletionCreateParams): Promise<{
provider: Provider
modelId: string
response: OpenAI.Chat.Completions.ChatCompletion
}> {
async processCompletion(request: ChatCompletionCreateParams): Promise<OpenAI.Chat.Completions.ChatCompletion> {
try {
logger.debug('Processing chat completion request', {
logger.info('Processing chat completion request:', {
model: request.model,
messageCount: request.messages.length,
stream: request.stream
})
const preparation = await this.prepareRequest(request, false)
if (preparation.status === 'validation_error') {
throw new ChatCompletionValidationError(preparation.errors)
// Validate request
const validation = this.validateRequest(request)
if (!validation.isValid) {
throw new Error(`Request validation failed: ${validation.errors.join(', ')}`)
}
if (preparation.status === 'model_error') {
throw new ChatCompletionModelError(preparation.error)
// Get provider for the model
const provider = await getProviderByModel(request.model!)
if (!provider) {
throw new Error(`Provider not found for model: ${request.model}`)
}
const { provider, modelId, client, providerRequest } = preparation
// Validate provider
if (!validateProvider(provider)) {
throw new Error(`Provider validation failed for: ${provider.id}`)
}
logger.debug('Sending request to provider', {
// Extract model ID from the full model string
const modelId = getRealProviderModel(request.model)
// Create OpenAI client for the provider
const client = new OpenAI({
baseURL: provider.apiHost,
apiKey: provider.apiKey
})
// Prepare request with the actual model ID
const providerRequest = {
...request,
model: modelId,
stream: false
}
logger.debug('Sending request to provider:', {
provider: provider.id,
model: modelId,
apiHost: provider.apiHost
@@ -185,71 +165,71 @@ export class ChatCompletionService {
const response = (await client.chat.completions.create(providerRequest)) as OpenAI.Chat.Completions.ChatCompletion
logger.info('Chat completion processed', {
modelId,
provider: provider.id
})
return {
provider,
modelId,
response
}
logger.info('Successfully processed chat completion')
return response
} catch (error: any) {
logger.error('Error processing chat completion', {
error,
model: request.model
})
logger.error('Error processing chat completion:', error)
throw error
}
}
async processStreamingCompletion(request: ChatCompletionCreateParams): Promise<{
provider: Provider
modelId: string
stream: AsyncIterable<OpenAI.Chat.Completions.ChatCompletionChunk>
}> {
async *processStreamingCompletion(
request: ChatCompletionCreateParams
): AsyncIterable<OpenAI.Chat.Completions.ChatCompletionChunk> {
try {
logger.debug('Processing streaming chat completion request', {
logger.info('Processing streaming chat completion request:', {
model: request.model,
messageCount: request.messages.length
})
const preparation = await this.prepareRequest(request, true)
if (preparation.status === 'validation_error') {
throw new ChatCompletionValidationError(preparation.errors)
// Validate request
const validation = this.validateRequest(request)
if (!validation.isValid) {
throw new Error(`Request validation failed: ${validation.errors.join(', ')}`)
}
if (preparation.status === 'model_error') {
throw new ChatCompletionModelError(preparation.error)
// Get provider for the model
const provider = await getProviderByModel(request.model!)
if (!provider) {
throw new Error(`Provider not found for model: ${request.model}`)
}
const { provider, modelId, client, providerRequest } = preparation
// Validate provider
if (!validateProvider(provider)) {
throw new Error(`Provider validation failed for: ${provider.id}`)
}
logger.debug('Sending streaming request to provider', {
// Extract model ID from the full model string
const modelId = getRealProviderModel(request.model)
// Create OpenAI client for the provider
const client = new OpenAI({
baseURL: provider.apiHost,
apiKey: provider.apiKey
})
// Prepare streaming request
const streamingRequest = {
...request,
model: modelId,
stream: true as const
}
logger.debug('Sending streaming request to provider:', {
provider: provider.id,
model: modelId,
apiHost: provider.apiHost
})
const streamRequest = providerRequest as ChatCompletionCreateParamsStreaming
const stream = (await client.chat.completions.create(
streamRequest
)) as AsyncIterable<OpenAI.Chat.Completions.ChatCompletionChunk>
const stream = await client.chat.completions.create(streamingRequest)
logger.info('Streaming chat completion started', {
modelId,
provider: provider.id
})
return {
provider,
modelId,
stream
for await (const chunk of stream) {
yield chunk
}
logger.info('Successfully completed streaming chat completion')
} catch (error: any) {
logger.error('Error processing streaming chat completion', {
error,
model: request.model
})
logger.error('Error processing streaming chat completion:', error)
throw error
}
}
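In the longer variant on one side of this diff, prepareRequest returns a discriminated union so callers can branch on the status tag instead of catching typed errors. A hypothetical consumer, assuming chatCompletionService and PrepareRequestResult from that variant:
import { ChatCompletionCreateParams } from 'openai/resources'

async function prepareAndSend(request: ChatCompletionCreateParams) {
  const prepared = await chatCompletionService.prepareRequest(request, false)
  switch (prepared.status) {
    case 'validation_error':
      return { status: 400 as const, message: prepared.errors.join('; ') }
    case 'model_error':
      return { status: 400 as const, message: prepared.error.message }
    case 'ok':
      // providerRequest already carries the bare model id and stream: false
      return { status: 200 as const, response: await prepared.client.chat.completions.create(prepared.providerRequest) }
  }
}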

@@ -13,7 +13,8 @@ import { Request, Response } from 'express'
import { IncomingMessage, ServerResponse } from 'http'
import { loggerService } from '../../services/LoggerService'
import { getMcpServerById, getMCPServersFromRedux } from '../utils/mcp'
import { reduxService } from '../../services/ReduxService'
import { getMcpServerById } from '../utils/mcp'
const logger = loggerService.withContext('MCPApiService')
const transports: Record<string, StreamableHTTPServerTransport> = {}
@@ -49,18 +50,42 @@ class MCPApiService extends EventEmitter {
constructor() {
super()
this.initMcpServer()
logger.debug('MCPApiService initialized')
logger.silly('MCPApiService initialized')
}
private initMcpServer() {
this.transport.onmessage = this.onMessage
}
/**
* Get servers directly from Redux store
*/
private async getServersFromRedux(): Promise<MCPServer[]> {
try {
logger.silly('Getting servers from Redux store')
// Try to get from cache first (faster)
const cachedServers = reduxService.selectSync<MCPServer[]>('state.mcp.servers')
if (cachedServers && Array.isArray(cachedServers)) {
logger.silly(`Found ${cachedServers.length} servers in Redux cache`)
return cachedServers
}
// If cache is not available, get fresh data
const servers = await reduxService.select<MCPServer[]>('state.mcp.servers')
logger.silly(`Fetched ${servers?.length || 0} servers from Redux store`)
return servers || []
} catch (error: any) {
logger.error('Failed to get servers from Redux:', error)
return []
}
}
// get all activated servers
async getAllServers(req: Request): Promise<McpServersResp> {
try {
const servers = await getMCPServersFromRedux()
logger.debug('Returning servers from Redux', { count: servers.length })
const servers = await this.getServersFromRedux()
logger.silly(`Returning ${servers.length} servers`)
const resp: McpServersResp = {
servers: {}
}
@@ -77,7 +102,7 @@ class MCPApiService extends EventEmitter {
}
return resp
} catch (error: any) {
logger.error('Failed to get all servers', { error })
logger.error('Failed to get all servers:', error)
throw new Error('Failed to retrieve servers')
}
}
@@ -85,47 +110,87 @@ class MCPApiService extends EventEmitter {
// get server by id
async getServerById(id: string): Promise<MCPServer | null> {
try {
logger.debug('getServerById called', { id })
const servers = await getMCPServersFromRedux()
logger.silly(`getServerById called with id: ${id}`)
const servers = await this.getServersFromRedux()
const server = servers.find((s) => s.id === id)
if (!server) {
logger.warn('Server not found', { id })
logger.warn(`Server with id ${id} not found`)
return null
}
logger.debug('Returning server', { id })
logger.silly(`Returning server with id ${id}`)
return server
} catch (error: any) {
logger.error('Failed to get server', { id, error })
logger.error(`Failed to get server with id ${id}:`, error)
throw new Error('Failed to retrieve server')
}
}
async getServerInfo(id: string): Promise<any> {
try {
logger.silly(`getServerInfo called with id: ${id}`)
const server = await this.getServerById(id)
if (!server) {
logger.warn('Server not found while fetching info', { id })
logger.warn(`Server with id ${id} not found`)
return null
}
logger.silly(`Returning server info for id ${id}`)
const client = await mcpService.initClient(server)
const tools = await client.listTools()
logger.info(`Server with id ${id} info:`, { tools: JSON.stringify(tools) })
// const [version, tools, prompts, resources] = await Promise.all([
// () => {
// try {
// return client.getServerVersion()
// } catch (error) {
// logger.error(`Failed to get server version for id ${id}:`, { error: error })
// return '1.0.0'
// }
// },
// (() => {
// try {
// return client.listTools()
// } catch (error) {
// logger.error(`Failed to list tools for id ${id}:`, { error: error })
// return []
// }
// })(),
// (() => {
// try {
// return client.listPrompts()
// } catch (error) {
// logger.error(`Failed to list prompts for id ${id}:`, { error: error })
// return []
// }
// })(),
// (() => {
// try {
// return client.listResources()
// } catch (error) {
// logger.error(`Failed to list resources for id ${id}:`, { error: error })
// return []
// }
// })()
// ])
return {
id: server.id,
name: server.name,
type: server.type,
description: server.description,
tools: tools.tools
tools
}
} catch (error: any) {
logger.error('Failed to get server info', { id, error })
logger.error(`Failed to get server info with id ${id}:`, error)
throw new Error('Failed to retrieve server info')
}
}
async handleRequest(req: Request, res: Response, server: MCPServer) {
const sessionId = req.headers['mcp-session-id'] as string | undefined
logger.debug('Handling MCP request', { sessionId, serverId: server.id })
logger.silly(`Handling request for server with sessionId ${sessionId}`)
let transport: StreamableHTTPServerTransport
if (sessionId && transports[sessionId]) {
transport = transports[sessionId]
@@ -138,7 +203,7 @@ class MCPApiService extends EventEmitter {
})
transport.onclose = () => {
logger.info('Transport closed', { sessionId })
logger.info(`Transport for sessionId ${sessionId} closed`)
if (transport.sessionId) {
delete transports[transport.sessionId]
}
@@ -173,15 +238,12 @@ class MCPApiService extends EventEmitter {
}
}
logger.debug('Dispatching MCP request', {
sessionId: transport.sessionId ?? sessionId,
messageCount: messages.length
})
logger.info(`Request body`, { rawBody: req.body, messages: JSON.stringify(messages) })
await transport.handleRequest(req as IncomingMessage, res as ServerResponse, messages)
}
private onMessage(message: JSONRPCMessage, extra?: MessageExtraInfo) {
logger.debug('Received MCP message', { message, extra })
logger.info(`Received message: ${JSON.stringify(message)}`, extra)
// Handle message here
}
}
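Session reuse in handleRequest hinges on the client echoing back the mcp-session-id header so later calls land on the transport cached in the transports map. A rough sketch of that round trip; the mount path and the header-echo behavior are assumptions based on the Streamable HTTP transport, not verified against this server's route table:
async function callMcp(serverId: string, payload: unknown, sessionId?: string) {
  const res = await fetch(`http://localhost:23333/v1/mcps/${serverId}/mcp`, { // hypothetical mount path
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // Reusing the id keeps us on the transport cached in `transports`
      ...(sessionId ? { 'mcp-session-id': sessionId } : {})
    },
    body: JSON.stringify(payload)
  })
  // The server is assumed to return its session id for use on subsequent requests
  return { sessionId: res.headers.get('mcp-session-id') ?? sessionId, body: await res.json() }
}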

@@ -1,321 +0,0 @@
import Anthropic from '@anthropic-ai/sdk'
import { MessageCreateParams, MessageStreamEvent } from '@anthropic-ai/sdk/resources'
import { loggerService } from '@logger'
import anthropicService from '@main/services/AnthropicService'
import { buildClaudeCodeSystemMessage, getSdkClient } from '@shared/anthropic'
import { Provider } from '@types'
import { Response } from 'express'
const logger = loggerService.withContext('MessagesService')
const EXCLUDED_FORWARD_HEADERS: ReadonlySet<string> = new Set([
'host',
'x-api-key',
'authorization',
'sentry-trace',
'baggage',
'content-length',
'connection'
])
export interface ValidationResult {
isValid: boolean
errors: string[]
}
export interface ErrorResponse {
type: 'error'
error: {
type: string
message: string
requestId?: string
}
}
export interface StreamConfig {
response: Response
onChunk?: (chunk: MessageStreamEvent) => void
onError?: (error: any) => void
onComplete?: () => void
}
export interface ProcessMessageOptions {
provider: Provider
request: MessageCreateParams
extraHeaders?: Record<string, string | string[]>
modelId?: string
}
export interface ProcessMessageResult {
client: Anthropic
anthropicRequest: MessageCreateParams
}
export class MessagesService {
validateRequest(request: MessageCreateParams): ValidationResult {
// TODO: Implement comprehensive request validation
const errors: string[] = []
if (!request.model || typeof request.model !== 'string') {
errors.push('Model is required')
}
if (typeof request.max_tokens !== 'number' || !Number.isFinite(request.max_tokens) || request.max_tokens < 1) {
errors.push('max_tokens is required and must be a positive number')
}
if (!request.messages || !Array.isArray(request.messages) || request.messages.length === 0) {
errors.push('messages is required and must be a non-empty array')
} else {
request.messages.forEach((message, index) => {
if (!message || typeof message !== 'object') {
errors.push(`messages[${index}] must be an object`)
return
}
if (!('role' in message) || typeof message.role !== 'string' || message.role.trim().length === 0) {
errors.push(`messages[${index}].role is required`)
}
const content: unknown = message.content
if (content === undefined || content === null) {
errors.push(`messages[${index}].content is required`)
return
}
if (typeof content === 'string' && content.trim().length === 0) {
errors.push(`messages[${index}].content cannot be empty`)
} else if (Array.isArray(content) && content.length === 0) {
errors.push(`messages[${index}].content must include at least one item when using an array`)
}
})
}
return {
isValid: errors.length === 0,
errors
}
}
async getClient(provider: Provider, extraHeaders?: Record<string, string | string[]>): Promise<Anthropic> {
// Create Anthropic client for the provider
if (provider.authType === 'oauth') {
const oauthToken = await anthropicService.getValidAccessToken()
return getSdkClient(provider, oauthToken, extraHeaders)
}
return getSdkClient(provider, null, extraHeaders)
}
prepareHeaders(headers: Record<string, string | string[] | undefined>): Record<string, string | string[]> {
const extraHeaders: Record<string, string | string[]> = {}
for (const [key, value] of Object.entries(headers)) {
if (value === undefined) {
continue
}
const normalizedKey = key.toLowerCase()
if (EXCLUDED_FORWARD_HEADERS.has(normalizedKey)) {
continue
}
extraHeaders[normalizedKey] = value
}
return extraHeaders
}
createAnthropicRequest(request: MessageCreateParams, provider: Provider, modelId?: string): MessageCreateParams {
const anthropicRequest: MessageCreateParams = {
...request,
stream: !!request.stream
}
// Override model if provided
if (modelId) {
anthropicRequest.model = modelId
}
// Add Claude Code system message for OAuth providers
if (provider.type === 'anthropic' && provider.authType === 'oauth') {
anthropicRequest.system = buildClaudeCodeSystemMessage(request.system)
}
return anthropicRequest
}
async handleStreaming(
client: Anthropic,
request: MessageCreateParams,
config: StreamConfig,
provider: Provider
): Promise<void> {
const { response, onChunk, onError, onComplete } = config
// Set streaming headers
response.setHeader('Content-Type', 'text/event-stream; charset=utf-8')
response.setHeader('Cache-Control', 'no-cache, no-transform')
response.setHeader('Connection', 'keep-alive')
response.setHeader('X-Accel-Buffering', 'no')
response.flushHeaders()
const flushableResponse = response as Response & { flush?: () => void }
const flushStream = () => {
if (typeof flushableResponse.flush !== 'function') {
return
}
try {
flushableResponse.flush()
} catch (flushError: unknown) {
logger.warn('Failed to flush streaming response', { error: flushError })
}
}
const writeSse = (eventType: string | undefined, payload: unknown) => {
if (response.writableEnded || response.destroyed) {
return
}
if (eventType) {
response.write(`event: ${eventType}\n`)
}
const data = typeof payload === 'string' ? payload : JSON.stringify(payload)
response.write(`data: ${data}\n\n`)
flushStream()
}
try {
const stream = client.messages.stream(request)
for await (const chunk of stream) {
if (response.writableEnded || response.destroyed) {
logger.warn('Streaming response ended before stream completion', {
provider: provider.id,
model: request.model
})
break
}
writeSse(chunk.type, chunk)
if (onChunk) {
onChunk(chunk)
}
}
writeSse(undefined, '[DONE]')
if (onComplete) {
onComplete()
}
} catch (streamError: any) {
logger.error('Stream error', {
error: streamError,
provider: provider.id,
model: request.model,
apiHost: provider.apiHost,
anthropicApiHost: provider.anthropicApiHost
})
writeSse(undefined, {
type: 'error',
error: {
type: 'api_error',
message: 'Stream processing error'
}
})
if (onError) {
onError(streamError)
}
} finally {
if (!response.writableEnded) {
response.end()
}
}
}
transformError(error: any): { statusCode: number; errorResponse: ErrorResponse } {
let statusCode = 500
let errorType = 'api_error'
let errorMessage = 'Internal server error'
const anthropicStatus = typeof error?.status === 'number' ? error.status : undefined
const anthropicError = error?.error
if (anthropicStatus) {
statusCode = anthropicStatus
}
if (anthropicError?.type) {
errorType = anthropicError.type
}
if (anthropicError?.message) {
errorMessage = anthropicError.message
} else if (error instanceof Error && error.message) {
errorMessage = error.message
}
// Infer error type from message if not from Anthropic API
if (!anthropicStatus && error instanceof Error) {
const errorMessageText = error.message ?? ''
if (errorMessageText.includes('API key') || errorMessageText.includes('authentication')) {
statusCode = 401
errorType = 'authentication_error'
} else if (errorMessageText.includes('rate limit') || errorMessageText.includes('quota')) {
statusCode = 429
errorType = 'rate_limit_error'
} else if (errorMessageText.includes('timeout') || errorMessageText.includes('connection')) {
statusCode = 502
errorType = 'api_error'
} else if (errorMessageText.includes('validation') || errorMessageText.includes('invalid')) {
statusCode = 400
errorType = 'invalid_request_error'
}
}
const safeErrorMessage =
typeof errorMessage === 'string' && errorMessage.length > 0 ? errorMessage : 'Internal server error'
return {
statusCode,
errorResponse: {
type: 'error',
error: {
type: errorType,
message: safeErrorMessage,
requestId: error?.request_id
}
}
}
}
async processMessage(options: ProcessMessageOptions): Promise<ProcessMessageResult> {
const { provider, request, extraHeaders, modelId } = options
const client = await this.getClient(provider, extraHeaders)
const anthropicRequest = this.createAnthropicRequest(request, provider, modelId)
const messageCount = Array.isArray(request.messages) ? request.messages.length : 0
logger.info('Processing anthropic messages request', {
provider: provider.id,
apiHost: provider.apiHost,
anthropicApiHost: provider.anthropicApiHost,
model: anthropicRequest.model,
stream: !!anthropicRequest.stream,
// systemPrompt: JSON.stringify(!!request.system),
// messages: JSON.stringify(request.messages),
messageCount,
toolCount: Array.isArray(request.tools) ? request.tools.length : 0
})
// Return client and request for route layer to handle streaming/non-streaming
return {
client,
anthropicRequest
}
}
}
// Export singleton instance
export const messagesService = new MessagesService()
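Editor's note: a minimal sketch of a client consuming the SSE framing that handleStreaming emits above ("event:"/"data:" lines per chunk, terminated by a "data: [DONE]" sentinel). The endpoint URL is illustrative and not taken from this diff.
async function consumeMessagesStream(url: string, body: unknown): Promise<void> {
  const res = await fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body)
  })
  const reader = res.body!.getReader()
  const decoder = new TextDecoder()
  let buffer = ''
  for (;;) {
    const { done, value } = await reader.read()
    if (done) break
    buffer += decoder.decode(value!, { stream: true })
    // SSE events are separated by a blank line
    const events = buffer.split('\n\n')
    buffer = events.pop() ?? ''
    for (const event of events) {
      const dataLine = event.split('\n').find((line) => line.startsWith('data: '))
      if (!dataLine) continue
      const data = dataLine.slice('data: '.length)
      if (data === '[DONE]') return // terminal sentinel from writeSse(undefined, '[DONE]')
      console.log(JSON.parse(data)) // one MessageStreamEvent chunk
    }
  }
}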

View File

@@ -1,113 +0,0 @@
import { isEmpty } from 'lodash'
import { ApiModel, ApiModelsFilter, ApiModelsResponse } from '../../../renderer/src/types/apiModels'
import { loggerService } from '../../services/LoggerService'
import {
getAvailableProviders,
getProviderAnthropicModelChecker,
listAllAvailableModels,
transformModelToOpenAI
} from '../utils'
const logger = loggerService.withContext('ModelsService')
// Re-export for backward compatibility
export type ModelsFilter = ApiModelsFilter
export class ModelsService {
async getModels(filter: ModelsFilter): Promise<ApiModelsResponse> {
try {
logger.debug('Getting available models from providers', { filter })
let providers = await getAvailableProviders()
if (filter.providerType === 'anthropic') {
providers = providers.filter((p) => p.type === 'anthropic' || !isEmpty(p.anthropicApiHost?.trim()))
}
const models = await listAllAvailableModels(providers)
// Use Map to deduplicate models by their full ID (provider:model_id)
const uniqueModels = new Map<string, ApiModel>()
for (const model of models) {
const provider = providers.find((p) => p.id === model.provider)
logger.debug(`Processing model ${model.id}`)
if (!provider) {
logger.debug(`Skipping model ${model.id}. Reason: Provider not found.`)
continue
}
if (filter.providerType === 'anthropic') {
const checker = getProviderAnthropicModelChecker(provider.id)
if (!checker(model)) {
logger.debug(`Skipping model ${model.id} from ${model.provider}. Reason: Not an Anthropic model.`)
continue
}
}
const openAIModel = transformModelToOpenAI(model, provider)
const fullModelId = openAIModel.id // This is already in format "provider:model_id"
// Only add if not already present (first occurrence wins)
if (!uniqueModels.has(fullModelId)) {
uniqueModels.set(fullModelId, openAIModel)
} else {
logger.debug(`Skipping duplicate model: ${fullModelId}`)
}
}
let modelData = Array.from(uniqueModels.values())
const total = modelData.length
// Apply pagination
const offset = filter?.offset || 0
const limit = filter?.limit
if (limit !== undefined) {
modelData = modelData.slice(offset, offset + limit)
logger.debug(
`Applied pagination: offset=${offset}, limit=${limit}, showing ${modelData.length} of ${total} models`
)
} else if (offset > 0) {
modelData = modelData.slice(offset)
logger.debug(`Applied offset: offset=${offset}, showing ${modelData.length} of ${total} models`)
}
logger.info('Models retrieved', {
returned: modelData.length,
discovered: models.length,
filter
})
if (models.length > total) {
logger.debug(`Filtered out ${models.length - total} models after deduplication and filtering`)
}
const response: ApiModelsResponse = {
object: 'list',
data: modelData
}
// Add pagination metadata if applicable
if (filter?.limit !== undefined || filter?.offset !== undefined) {
response.total = total
response.offset = offset
if (filter?.limit !== undefined) {
response.limit = filter.limit
}
}
return response
} catch (error: any) {
logger.error('Error getting models', { error, filter })
return {
object: 'list',
data: []
}
}
}
}
// Export singleton instance
export const modelsService = new ModelsService()
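Editor's note: a usage sketch for the filtering and pagination above; the import path and page values are illustrative.
import { modelsService } from '@main/apiServer/services/models' // path assumed

async function listSecondPage(): Promise<void> {
  // Second page (20 per page) of Anthropic-capable models.
  const page = await modelsService.getModels({ providerType: 'anthropic', offset: 20, limit: 20 })
  // Each id is the deduplicated "provider:model_id" form noted above.
  page.data.forEach((m) => console.log(m.id))
  console.log(`showing ${page.data.length} of ${page.total ?? page.data.length}`)
}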

View File

@@ -1,64 +0,0 @@
export type StreamAbortHandler = (reason: unknown) => void
export interface StreamAbortController {
abortController: AbortController
registerAbortHandler: (handler: StreamAbortHandler) => void
clearAbortTimeout: () => void
}
export const STREAM_TIMEOUT_REASON = 'stream timeout'
interface CreateStreamAbortControllerOptions {
timeoutMs: number
}
export const createStreamAbortController = (options: CreateStreamAbortControllerOptions): StreamAbortController => {
const { timeoutMs } = options
const abortController = new AbortController()
const signal = abortController.signal
let timeoutId: NodeJS.Timeout | undefined
let abortHandler: StreamAbortHandler | undefined
const clearAbortTimeout = () => {
if (!timeoutId) {
return
}
clearTimeout(timeoutId)
timeoutId = undefined
}
const handleAbort = () => {
clearAbortTimeout()
if (!abortHandler) {
return
}
abortHandler(signal.reason)
}
signal.addEventListener('abort', handleAbort, { once: true })
const registerAbortHandler = (handler: StreamAbortHandler) => {
abortHandler = handler
if (signal.aborted) {
abortHandler(signal.reason)
}
}
if (timeoutMs > 0) {
timeoutId = setTimeout(() => {
if (!signal.aborted) {
abortController.abort(STREAM_TIMEOUT_REASON)
}
}, timeoutMs)
}
return {
abortController,
registerAbortHandler,
clearAbortTimeout
}
}
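Editor's note: a usage sketch for the helper above, wiring the abort signal into an upstream fetch; the URL is illustrative.
async function fetchWithIdleTimeout(url: string): Promise<Response> {
  const { abortController, registerAbortHandler, clearAbortTimeout } = createStreamAbortController({
    timeoutMs: 30_000
  })
  registerAbortHandler((reason) => {
    if (reason === STREAM_TIMEOUT_REASON) {
      console.warn('stream aborted by timeout')
    }
  })
  try {
    return await fetch(url, { signal: abortController.signal })
  } finally {
    // Cancel the pending timer once the request settles so it cannot fire later.
    clearAbortTimeout()
  }
}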

View File

@@ -1,60 +1,46 @@
import { CacheService } from '@main/services/CacheService'
import { loggerService } from '@main/services/LoggerService'
import { reduxService } from '@main/services/ReduxService'
import { ApiModel, Model, Provider } from '@types'
import { Model, Provider } from '@types'
const logger = loggerService.withContext('ApiServerUtils')
// Cache configuration
const PROVIDERS_CACHE_KEY = 'api-server:providers'
const PROVIDERS_CACHE_TTL = 10 * 1000 // 10 seconds
// OpenAI compatible model format
export interface OpenAICompatibleModel {
id: string
object: 'model'
created: number
owned_by: string
provider?: string
provider_model_id?: string
}
export async function getAvailableProviders(): Promise<Provider[]> {
try {
// Try to get from cache first (faster)
const cachedSupportedProviders = CacheService.get<Provider[]>(PROVIDERS_CACHE_KEY)
if (cachedSupportedProviders && cachedSupportedProviders.length > 0) {
logger.debug('Providers resolved from cache', {
count: cachedSupportedProviders.length
})
return cachedSupportedProviders
}
// If cache is not available, get fresh data from Redux
// Wait for store to be ready before accessing providers
const providers = await reduxService.select('state.llm.providers')
if (!providers || !Array.isArray(providers)) {
logger.warn('No providers found in Redux store')
logger.warn('No providers found in Redux store, returning empty array')
return []
}
// Support OpenAI and Anthropic type providers for API server
const supportedProviders = providers.filter(
(p: Provider) => p.enabled && (p.type === 'openai' || p.type === 'anthropic')
)
// Only support OpenAI type providers for API server
const openAIProviders = providers.filter((p: Provider) => p.enabled && p.type === 'openai')
// Cache the filtered results
CacheService.set(PROVIDERS_CACHE_KEY, supportedProviders, PROVIDERS_CACHE_TTL)
logger.info(`Filtered to ${openAIProviders.length} OpenAI providers from ${providers.length} total providers`)
logger.info('Providers filtered', {
supported: supportedProviders.length,
total: providers.length
})
return supportedProviders
return openAIProviders
} catch (error: any) {
logger.error('Failed to get providers from Redux store', { error })
logger.error('Failed to get providers from Redux store:', error)
return []
}
}
export async function listAllAvailableModels(providers?: Provider[]): Promise<Model[]> {
export async function listAllAvailableModels(): Promise<Model[]> {
try {
if (!providers) {
providers = await getAvailableProviders()
}
const providers = await getAvailableProviders()
return providers.map((p: Provider) => p.models || []).flat()
} catch (error: any) {
logger.error('Failed to list available models', { error })
logger.error('Failed to list available models:', error)
return []
}
}
@@ -62,13 +48,15 @@ export async function listAllAvailableModels(providers?: Provider[]): Promise<Mo
export async function getProviderByModel(model: string): Promise<Provider | undefined> {
try {
if (!model || typeof model !== 'string') {
logger.warn('Invalid model parameter', { model })
logger.warn(`Invalid model parameter: ${model}`)
return undefined
}
// Validate model format first
if (!model.includes(':')) {
logger.warn('Invalid model format missing separator', { model })
logger.warn(
`Invalid model format, must contain ':' separator. Expected format "provider:model_id", got: ${model}`
)
return undefined
}
@@ -76,7 +64,7 @@ export async function getProviderByModel(model: string): Promise<Provider | unde
const modelInfo = model.split(':')
if (modelInfo.length < 2 || modelInfo[0].length === 0 || modelInfo[1].length === 0) {
logger.warn('Invalid model format with empty parts', { model })
logger.warn(`Invalid model format, expected "provider:model_id" with non-empty parts, got: ${model}`)
return undefined
}
@@ -84,17 +72,16 @@ export async function getProviderByModel(model: string): Promise<Provider | unde
const provider = providers.find((p: Provider) => p.id === providerId)
if (!provider) {
logger.warn('Provider not found for model', {
providerId,
available: providers.map((p) => p.id)
})
logger.warn(
`Provider '${providerId}' not found or not enabled. Available providers: ${providers.map((p) => p.id).join(', ')}`
)
return undefined
}
logger.debug('Provider resolved for model', { providerId, model })
logger.debug(`Found provider '${providerId}' for model: ${model}`)
return provider
} catch (error: any) {
logger.error('Failed to get provider by model', { error, model })
logger.error('Failed to get provider by model:', error)
return undefined
}
}
@@ -109,12 +96,9 @@ export interface ModelValidationError {
code: string
}
export async function validateModelId(model: string): Promise<{
valid: boolean
error?: ModelValidationError
provider?: Provider
modelId?: string
}> {
export async function validateModelId(
model: string
): Promise<{ valid: boolean; error?: ModelValidationError; provider?: Provider; modelId?: string }> {
try {
if (!model || typeof model !== 'string') {
return {
@@ -185,7 +169,7 @@ export async function validateModelId(model: string): Promise<{
modelId
}
} catch (error: any) {
logger.error('Error validating model ID', { error, model })
logger.error('Error validating model ID:', error)
return {
valid: false,
error: {
@@ -197,47 +181,17 @@ export async function validateModelId(model: string): Promise<{
}
}
export function transformModelToOpenAI(model: Model, provider?: Provider): ApiModel {
const providerDisplayName = provider?.name
export function transformModelToOpenAI(model: Model): OpenAICompatibleModel {
return {
id: `${model.provider}:${model.id}`,
object: 'model',
name: model.name,
created: Math.floor(Date.now() / 1000),
owned_by: model.owned_by || providerDisplayName || model.provider,
owned_by: model.owned_by || model.provider,
provider: model.provider,
provider_name: providerDisplayName,
provider_type: provider?.type,
provider_model_id: model.id
}
}
export async function getProviderById(providerId: string): Promise<Provider | undefined> {
try {
if (!providerId || typeof providerId !== 'string') {
logger.warn('Invalid provider ID parameter', { providerId })
return undefined
}
const providers = await getAvailableProviders()
const provider = providers.find((p: Provider) => p.id === providerId)
if (!provider) {
logger.warn('Provider not found by ID', {
providerId,
available: providers.map((p) => p.id)
})
return undefined
}
logger.debug('Provider found by ID', { providerId })
return provider
} catch (error: any) {
logger.error('Failed to get provider by ID', { error, providerId })
return undefined
}
}
export function validateProvider(provider: Provider): boolean {
try {
if (!provider) {
@@ -246,7 +200,7 @@ export function validateProvider(provider: Provider): boolean {
// Check required fields
if (!provider.id || !provider.type || !provider.apiKey || !provider.apiHost) {
logger.warn('Provider missing required fields', {
logger.warn('Provider missing required fields:', {
id: !!provider.id,
type: !!provider.type,
apiKey: !!provider.apiKey,
@@ -257,38 +211,21 @@ export function validateProvider(provider: Provider): boolean {
// Check if provider is enabled
if (!provider.enabled) {
logger.debug('Provider is disabled', { providerId: provider.id })
logger.debug(`Provider is disabled: ${provider.id}`)
return false
}
// Support OpenAI and Anthropic type providers
if (provider.type !== 'openai' && provider.type !== 'anthropic') {
logger.debug('Provider type not supported', {
providerId: provider.id,
providerType: provider.type
})
// Only support OpenAI type providers
if (provider.type !== 'openai') {
logger.debug(
`Provider type '${provider.type}' not supported, only 'openai' type is currently supported: ${provider.id}`
)
return false
}
return true
} catch (error: any) {
logger.error('Error validating provider', {
error,
providerId: provider?.id
})
logger.error('Error validating provider:', error)
return false
}
}
export const getProviderAnthropicModelChecker = (providerId: string): ((m: Model) => boolean) => {
switch (providerId) {
case 'cherryin':
case 'new-api':
return (m: Model) => m.endpoint_type === 'anthropic'
case 'aihubmix':
return (m: Model) => m.id.includes('claude')
default:
// allow all models when checker not configured
return () => true
}
}
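Editor's note: a small sketch of how the checker above is applied, mirroring its consumer in ModelsService; the helper name is hypothetical.
function anthropicModelsFor(provider: Provider): Model[] {
  const checker = getProviderAnthropicModelChecker(provider.id)
  // For 'new-api'/'cherryin' this keeps models with endpoint_type === 'anthropic';
  // unconfigured providers pass everything through.
  return (provider.models || []).filter(checker)
}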

View File

@@ -1,4 +1,3 @@
import { CacheService } from '@main/services/CacheService'
import mcpService from '@main/services/MCPService'
import { Server } from '@modelcontextprotocol/sdk/server/index.js'
import { CallToolRequestSchema, ListToolsRequestSchema, ListToolsResult } from '@modelcontextprotocol/sdk/types.js'
@@ -9,10 +8,6 @@ import { reduxService } from '../../services/ReduxService'
const logger = loggerService.withContext('MCPApiService')
// Cache configuration
const MCP_SERVERS_CACHE_KEY = 'api-server:mcp-servers'
const MCP_SERVERS_CACHE_TTL = 5 * 60 * 1000 // 5 minutes
const cachedServers: Record<string, Server> = {}
async function handleListToolsRequest(request: any, extra: any): Promise<ListToolsResult> {
@@ -38,35 +33,20 @@ async function handleCallToolRequest(request: any, extra: any): Promise<any> {
}
async function getMcpServerConfigById(id: string): Promise<MCPServer | undefined> {
const servers = await getMCPServersFromRedux()
const servers = await getServersFromRedux()
return servers.find((s) => s.id === id || s.name === id)
}
/**
* Get servers directly from Redux store
*/
export async function getMCPServersFromRedux(): Promise<MCPServer[]> {
async function getServersFromRedux(): Promise<MCPServer[]> {
try {
logger.debug('Getting servers from Redux store')
// Try to get from cache first (faster)
const cachedServers = CacheService.get<MCPServer[]>(MCP_SERVERS_CACHE_KEY)
if (cachedServers) {
logger.debug('MCP servers resolved from cache', { count: cachedServers.length })
return cachedServers
}
// If cache is not available, get fresh data from Redux
const servers = await reduxService.select<MCPServer[]>('state.mcp.servers')
const serverList = servers || []
// Cache the results
CacheService.set(MCP_SERVERS_CACHE_KEY, serverList, MCP_SERVERS_CACHE_TTL)
logger.debug('Fetched servers from Redux store', { count: serverList.length })
return serverList
logger.silly(`Fetched ${servers?.length || 0} servers from Redux store`)
return servers || []
} catch (error: any) {
logger.error('Failed to get servers from Redux', { error })
logger.error('Failed to get servers from Redux:', error)
return []
}
}
@@ -74,7 +54,7 @@ export async function getMCPServersFromRedux(): Promise<MCPServer[]> {
export async function getMcpServerById(id: string): Promise<Server> {
const server = cachedServers[id]
if (!server) {
const servers = await getMCPServersFromRedux()
const servers = await getServersFromRedux()
const mcpServer = servers.find((s) => s.id === id || s.name === id)
if (!mcpServer) {
throw new Error(`Server not found: ${id}`)
@@ -91,6 +71,6 @@ export async function getMcpServerById(id: string): Promise<Server> {
cachedServers[id] = newServer
return newServer
}
logger.debug('Returning cached MCP server', { id, hasHandlers: Boolean(server) })
logger.silly('getMcpServer ', { server: server })
return server
}

View File

@@ -10,13 +10,9 @@ import { electronApp, optimizer } from '@electron-toolkit/utils'
import { replaceDevtoolsFont } from '@main/utils/windowUtil'
import { app } from 'electron'
import installExtension, { REACT_DEVELOPER_TOOLS, REDUX_DEVTOOLS } from 'electron-devtools-installer'
import { isDev, isLinux, isWin } from './constant'
import process from 'node:process'
import { registerIpc } from './ipc'
import { agentService } from './services/agents'
import { apiServerService } from './services/ApiServerService'
import { configManager } from './services/ConfigManager'
import mcpService from './services/MCPService'
import { nodeTraceService } from './services/NodeTraceService'
@@ -30,6 +26,8 @@ import selectionService, { initSelectionService } from './services/SelectionServ
import { registerShortcuts } from './services/ShortcutService'
import { TrayService } from './services/TrayService'
import { windowService } from './services/WindowService'
import process from 'node:process'
import { apiServerService } from './services/ApiServerService'
import { initWebviewHotkeys } from './services/WebviewService'
const logger = loggerService.withContext('MainEntry')
@@ -151,34 +149,11 @@ if (!app.requestSingleInstanceLock()) {
//start selection assistant service
initSelectionService()
// Initialize Agent Service
try {
await agentService.initialize()
logger.info('Agent service initialized successfully')
} catch (error: any) {
logger.error('Failed to initialize Agent service:', error)
}
// Start API server if enabled or if agents exist
// Start API server if enabled
try {
const config = await apiServerService.getCurrentConfig()
logger.info('API server config:', config)
// Check if there are any agents
let shouldStart = config.enabled
if (!shouldStart) {
try {
const { total } = await agentService.listAgents({ limit: 1 })
if (total > 0) {
shouldStart = true
logger.info(`Detected ${total} agent(s), auto-starting API server`)
}
} catch (error: any) {
logger.warn('Failed to check agent count:', error)
}
}
if (shouldStart) {
if (config.enabled) {
await apiServerService.start()
}
} catch (error: any) {

View File

@@ -11,21 +11,11 @@ import { handleZoomFactor } from '@main/utils/zoom'
import { SpanEntity, TokenUsage } from '@mcp-trace/trace-core'
import { MIN_WINDOW_HEIGHT, MIN_WINDOW_WIDTH, UpgradeChannel } from '@shared/config/constant'
import { IpcChannel } from '@shared/IpcChannel'
import {
AgentPersistedMessage,
FileMetadata,
Notification,
OcrProvider,
Provider,
Shortcut,
SupportedOcrFile,
ThemeMode
} from '@types'
import { FileMetadata, Notification, OcrProvider, Provider, Shortcut, SupportedOcrFile, ThemeMode } from '@types'
import checkDiskSpace from 'check-disk-space'
import { BrowserWindow, dialog, ipcMain, ProxyConfig, session, shell, systemPreferences, webContents } from 'electron'
import fontList from 'font-list'
import { agentMessageRepository } from './services/agents/database'
import { apiServerService } from './services/ApiServerService'
import appService from './services/AppService'
import AppUpdater from './services/AppUpdater'
@@ -68,6 +58,7 @@ import {
import storeSyncService from './services/StoreSyncService'
import { themeService } from './services/ThemeService'
import VertexAIService from './services/VertexAIService'
import WebSocketService from './services/WebSocketService'
import { setOpenLinkExternal } from './services/WebviewService'
import { windowService } from './services/WindowService'
import { calculateDirectorySize, getResourcePath } from './utils'
@@ -212,27 +203,6 @@ export function registerIpc(mainWindow: BrowserWindow, app: Electron.App) {
}
})
ipcMain.handle(IpcChannel.AgentMessage_PersistExchange, async (_event, payload) => {
try {
return await agentMessageRepository.persistExchange(payload)
} catch (error) {
logger.error('Failed to persist agent session messages', error as Error)
throw error
}
})
ipcMain.handle(
IpcChannel.AgentMessage_GetHistory,
async (_event, { sessionId }: { sessionId: string }): Promise<AgentPersistedMessage[]> => {
try {
return await agentMessageRepository.getSessionHistory(sessionId)
} catch (error) {
logger.error('Failed to get agent session history', error as Error)
throw error
}
}
)
//only for mac
if (isMac) {
ipcMain.handle(IpcChannel.App_MacIsProcessTrusted, (): boolean => {
@@ -529,7 +499,6 @@ export function registerIpc(mainWindow: BrowserWindow, app: Electron.App) {
ipcMain.handle(IpcChannel.File_ValidateNotesDirectory, fileManager.validateNotesDirectory.bind(fileManager))
ipcMain.handle(IpcChannel.File_StartWatcher, fileManager.startFileWatcher.bind(fileManager))
ipcMain.handle(IpcChannel.File_StopWatcher, fileManager.stopFileWatcher.bind(fileManager))
ipcMain.handle(IpcChannel.File_ShowInFolder, fileManager.showInFolder.bind(fileManager))
// file service
ipcMain.handle(IpcChannel.FileService_Upload, async (_, provider: Provider, file: FileMetadata) => {
@@ -890,4 +859,11 @@ export function registerIpc(mainWindow: BrowserWindow, app: Electron.App) {
// CherryAI
ipcMain.handle(IpcChannel.Cherryai_GetSignature, (_, params) => generateSignature(params))
// WebSocket
ipcMain.handle(IpcChannel.WebSocket_Start, WebSocketService.start)
ipcMain.handle(IpcChannel.WebSocket_Stop, WebSocketService.stop)
ipcMain.handle(IpcChannel.WebSocket_Status, WebSocketService.getStatus)
ipcMain.handle(IpcChannel.WebSocket_SendFile, WebSocketService.sendFile)
ipcMain.handle(IpcChannel.WebSocket_GetAllCandidates, WebSocketService.getAllCandidates)
}
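Editor's note: a renderer-side sketch of the WebSocket IPC surface registered above; how ipcRenderer is bridged through the preload is assumed, not shown in this diff.
import { IpcChannel } from '@shared/IpcChannel'
import { ipcRenderer } from 'electron'

async function exportToPhone(filePath: string): Promise<void> {
  const started = await ipcRenderer.invoke(IpcChannel.WebSocket_Start)
  if (!started.success) throw new Error(started.error ?? 'failed to start server')
  const status = await ipcRenderer.invoke(IpcChannel.WebSocket_Status)
  console.log(`listening on ${status.ip}:${status.port}`)
  await ipcRenderer.invoke(IpcChannel.WebSocket_SendFile, filePath)
  await ipcRenderer.invoke(IpcChannel.WebSocket_Stop)
}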

View File

@@ -80,6 +80,7 @@ export default abstract class BaseReranker {
message: error.message,
status: error.response?.status,
statusText: error.response?.statusText,
responseBody: error.response?.body, // Include the actual API error response
requestBody: requestBody
}
return JSON.stringify(errorDetails, null, 2)

View File

@@ -2,6 +2,15 @@ import { KnowledgeBaseParams, KnowledgeSearchResult } from '@types'
import { net } from 'electron'
import BaseReranker from './BaseReranker'
interface RerankError extends Error {
response?: {
status: number
statusText: string
body?: unknown
}
}
export default class GeneralReranker extends BaseReranker {
constructor(base: KnowledgeBaseParams) {
super(base)
@@ -17,7 +26,30 @@ export default class GeneralReranker extends BaseReranker {
})
if (!response.ok) {
throw new Error(`HTTP ${response.status}: ${response.statusText}`)
// Read the response body to get detailed error information
// Clone the response to avoid consuming the body multiple times
const clonedResponse = response.clone()
let errorBody: unknown
try {
errorBody = await clonedResponse.json()
} catch {
// If response body is not JSON, try to read as text
try {
errorBody = await response.text()
} catch {
errorBody = null
}
}
const error = new Error(`HTTP ${response.status}: ${response.statusText}`) as RerankError
// Attach response details to the error object for formatErrorMessage
error.response = {
status: response.status,
statusText: response.statusText,
body: errorBody
}
throw error
}
const data = await response.json()

View File

@@ -3,7 +3,7 @@ import { loggerService } from '@logger'
import { Server } from '@modelcontextprotocol/sdk/server/index.js'
import { CallToolRequestSchema, ListToolsRequestSchema } from '@modelcontextprotocol/sdk/types.js'
import { net } from 'electron'
import * as z from 'zod'
import { z } from 'zod'
const logger = loggerService.withContext('DifyKnowledgeServer')

View File

@@ -5,7 +5,7 @@ import { CallToolRequestSchema, ListToolsRequestSchema } from '@modelcontextprot
import { net } from 'electron'
import { JSDOM } from 'jsdom'
import TurndownService from 'turndown'
import * as z from 'zod'
import { z } from 'zod'
export const RequestPayloadSchema = z.object({
url: z.url(),

View File

@@ -8,7 +8,7 @@ import fs from 'fs/promises'
import { minimatch } from 'minimatch'
import os from 'os'
import path from 'path'
import * as z from 'zod'
import { z } from 'zod'
const logger = loggerService.withContext('MCP:FileSystemServer')

View File

@@ -1,11 +1,5 @@
import { IpcChannel } from '@shared/IpcChannel'
import {
ApiServerConfig,
GetApiServerStatusResult,
RestartApiServerStatusResult,
StartApiServerStatusResult,
StopApiServerStatusResult
} from '@types'
import { ApiServerConfig } from '@types'
import { ipcMain } from 'electron'
import { apiServer } from '../apiServer'
@@ -58,7 +52,7 @@ export class ApiServerService {
registerIpcHandlers(): void {
// API Server
ipcMain.handle(IpcChannel.ApiServer_Start, async (): Promise<StartApiServerStatusResult> => {
ipcMain.handle(IpcChannel.ApiServer_Start, async () => {
try {
await this.start()
return { success: true }
@@ -67,7 +61,7 @@ export class ApiServerService {
}
})
ipcMain.handle(IpcChannel.ApiServer_Stop, async (): Promise<StopApiServerStatusResult> => {
ipcMain.handle(IpcChannel.ApiServer_Stop, async () => {
try {
await this.stop()
return { success: true }
@@ -76,7 +70,7 @@ export class ApiServerService {
}
})
ipcMain.handle(IpcChannel.ApiServer_Restart, async (): Promise<RestartApiServerStatusResult> => {
ipcMain.handle(IpcChannel.ApiServer_Restart, async () => {
try {
await this.restart()
return { success: true }
@@ -85,7 +79,7 @@ export class ApiServerService {
}
})
ipcMain.handle(IpcChannel.ApiServer_GetStatus, async (): Promise<GetApiServerStatusResult> => {
ipcMain.handle(IpcChannel.ApiServer_GetStatus, async () => {
try {
const config = await this.getCurrentConfig()
return {

View File

@@ -725,10 +725,7 @@ class FileStorage {
}
public openPath = async (_: Electron.IpcMainInvokeEvent, path: string): Promise<void> => {
const resolved = await shell.openPath(path)
if (resolved !== '') {
throw new Error(resolved)
}
shell.openPath(path).catch((err) => logger.error('[IPC - Error] Failed to open file:', err))
}
/**
@@ -1232,19 +1229,6 @@ class FileStorage {
return false
}
}
public showInFolder = async (_: Electron.IpcMainInvokeEvent, path: string): Promise<void> => {
if (!fs.existsSync(path)) {
const msg = `File or folder does not exist: ${path}`
logger.error(msg)
throw new Error(msg)
}
try {
shell.showItemInFolder(path)
} catch (error) {
logger.error('Failed to show item in folder:', error as Error)
}
}
}
export const fileStorage = new FileStorage()

View File

@@ -7,7 +7,6 @@ import { createInMemoryMCPServer } from '@main/mcpServers/factory'
import { makeSureDirExists, removeEnvProxy } from '@main/utils'
import { buildFunctionCallToolName } from '@main/utils/mcp'
import { getBinaryName, getBinaryPath } from '@main/utils/process'
import getLoginShellEnvironment from '@main/utils/shell-env'
import { TraceMethod, withSpanFunc } from '@mcp-trace/trace-core'
import { Client } from '@modelcontextprotocol/sdk/client/index.js'
import { SSEClientTransport, SSEClientTransportOptions } from '@modelcontextprotocol/sdk/client/sse.js'
@@ -44,12 +43,14 @@ import {
} from '@types'
import { app, net } from 'electron'
import { EventEmitter } from 'events'
import { memoize } from 'lodash'
import { v4 as uuidv4 } from 'uuid'
import { CacheService } from './CacheService'
import DxtService from './DxtService'
import { CallBackServer } from './mcp/oauth/callback'
import { McpOAuthClientProvider } from './mcp/oauth/provider'
import getLoginShellEnvironment from './mcp/shell-env'
import { windowService } from './WindowService'
// Generic type for caching wrapped functions
@@ -334,7 +335,7 @@ class McpService {
getServerLogger(server).debug(`Starting server`, { command: cmd, args })
// Logger.info(`[MCP] Environment variables for server:`, server.env)
const loginShellEnv = await getLoginShellEnvironment()
const loginShellEnv = await this.getLoginShellEnv()
// Bun not support proxy https://github.com/oven-sh/bun/issues/16812
if (cmd.includes('bun')) {
@@ -877,6 +878,20 @@ class McpService {
return await cachedGetResource(server, uri)
}
private getLoginShellEnv = memoize(async (): Promise<Record<string, string>> => {
try {
const loginEnv = await getLoginShellEnvironment()
const pathSeparator = process.platform === 'win32' ? ';' : ':'
const cherryBinPath = path.join(os.homedir(), '.cherrystudio', 'bin')
loginEnv.PATH = `${loginEnv.PATH}${pathSeparator}${cherryBinPath}`
logger.debug('Successfully fetched login shell environment variables:')
return loginEnv
} catch (error) {
logger.error('Failed to fetch login shell environment variables:', error as Error)
return {}
}
})
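// Editor's note on the memoize above: lodash memoize on a zero-argument async function
// caches the returned promise, so concurrent server startups share one login-shell probe
// instead of each spawning a shell; e.g. both calls below resolve from the same promise:
//   await Promise.all([this.getLoginShellEnv(), this.getLoginShellEnv()])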
// abortTool implementation
public async abortTool(_: Electron.IpcMainInvokeEvent, callId: string) {
const activeToolCall = this.activeToolCalls.get(callId)

View File

@@ -4,6 +4,7 @@ import { app, ProxyConfig, session } from 'electron'
import { socksDispatcher } from 'fetch-socks'
import http from 'http'
import https from 'https'
import * as ipaddr from 'ipaddr.js'
import { getSystemProxy } from 'os-proxy-config'
import { ProxyAgent } from 'proxy-agent'
import { Dispatcher, EnvHttpProxyAgent, getGlobalDispatcher, setGlobalDispatcher } from 'undici'
@@ -11,41 +12,293 @@ import { Dispatcher, EnvHttpProxyAgent, getGlobalDispatcher, setGlobalDispatcher
const logger = loggerService.withContext('ProxyManager')
let byPassRules: string[] = []
const isByPass = (url: string) => {
if (byPassRules.length === 0) {
type HostnameMatchType = 'exact' | 'wildcardSubdomain' | 'generalWildcard'
const enum ProxyBypassRuleType {
Local = 'local',
Cidr = 'cidr',
Ip = 'ip',
Domain = 'domain'
}
interface ParsedProxyBypassRule {
type: ProxyBypassRuleType
matchType: HostnameMatchType
rule: string
scheme?: string
port?: string
domain?: string
regex?: RegExp
cidr?: [ipaddr.IPv4 | ipaddr.IPv6, number]
ip?: string
}
let parsedByPassRules: ParsedProxyBypassRule[] = []
const getDefaultPortForProtocol = (protocol: string): string | null => {
switch (protocol.toLowerCase()) {
case 'http:':
return '80'
case 'https:':
return '443'
default:
return null
}
}
const buildWildcardRegex = (pattern: string): RegExp => {
const escapedSegments = pattern.split('*').map((segment) => segment.replace(/[.*+?^${}()|[\]\\]/g, '\\$&'))
return new RegExp(`^${escapedSegments.join('.*')}$`, 'i')
}
const isWildcardIp = (value: string): boolean => {
if (!value.includes('*')) {
return false
}
const replaced = value.replace(/\*/g, '0')
return ipaddr.isValid(replaced)
}
const matchHostnameRule = (hostname: string, rule: ParsedProxyBypassRule): boolean => {
const normalizedHostname = hostname.toLowerCase()
switch (rule.matchType) {
case 'exact':
return normalizedHostname === rule.domain
case 'wildcardSubdomain': {
const domain = rule.domain
if (!domain) {
return false
}
return normalizedHostname === domain || normalizedHostname.endsWith(`.${domain}`)
}
case 'generalWildcard':
return rule.regex ? rule.regex.test(normalizedHostname) : false
default:
return false
}
}
const parseProxyBypassRule = (rule: string): ParsedProxyBypassRule | null => {
const trimmedRule = rule.trim()
if (!trimmedRule) {
return null
}
if (trimmedRule === '<local>') {
return {
type: ProxyBypassRuleType.Local,
matchType: 'exact',
rule: '<local>'
}
}
let workingRule = trimmedRule
let scheme: string | undefined
const schemeMatch = workingRule.match(/^([a-zA-Z][a-zA-Z\d+\-.]*):\/\//)
if (schemeMatch) {
scheme = schemeMatch[1].toLowerCase()
workingRule = workingRule.slice(schemeMatch[0].length)
}
// CIDR notation must be processed before port extraction
if (workingRule.includes('/')) {
const cleanedCidr = workingRule.replace(/^\[|\]$/g, '')
if (ipaddr.isValidCIDR(cleanedCidr)) {
return {
type: ProxyBypassRuleType.Cidr,
matchType: 'exact',
rule: workingRule,
scheme,
cidr: ipaddr.parseCIDR(cleanedCidr)
}
}
}
// Extract port: supports "host:port" and "[ipv6]:port" formats
let port: string | undefined
const portMatch = workingRule.match(/^(.+?):(\d+)$/)
if (portMatch) {
// For IPv6, ensure we're not splitting inside the brackets
const potentialHost = portMatch[1]
if (!potentialHost.startsWith('[') || potentialHost.includes(']')) {
workingRule = potentialHost
port = portMatch[2]
}
}
const cleanedHost = workingRule.replace(/^\[|\]$/g, '')
const normalizedHost = cleanedHost.toLowerCase()
if (!cleanedHost) {
return null
}
if (ipaddr.isValid(cleanedHost)) {
return {
type: ProxyBypassRuleType.Ip,
matchType: 'exact',
rule: cleanedHost,
scheme,
port,
ip: cleanedHost
}
}
if (isWildcardIp(cleanedHost)) {
const regexPattern = cleanedHost.replace(/\./g, '\\.').replace(/\*/g, '\\d+')
return {
type: ProxyBypassRuleType.Ip,
matchType: 'generalWildcard',
rule: cleanedHost,
scheme,
port,
regex: new RegExp(`^${regexPattern}$`)
}
}
if (workingRule.startsWith('*.')) {
const domain = normalizedHost.slice(2)
return {
type: ProxyBypassRuleType.Domain,
matchType: 'wildcardSubdomain',
rule: workingRule,
scheme,
port,
domain
}
}
if (workingRule.startsWith('.')) {
const domain = normalizedHost.slice(1)
return {
type: ProxyBypassRuleType.Domain,
matchType: 'wildcardSubdomain',
rule: workingRule,
scheme,
port,
domain
}
}
if (workingRule.includes('*')) {
return {
type: ProxyBypassRuleType.Domain,
matchType: 'generalWildcard',
rule: workingRule,
scheme,
port,
regex: buildWildcardRegex(normalizedHost)
}
}
return {
type: ProxyBypassRuleType.Domain,
matchType: 'exact',
rule: workingRule,
scheme,
port,
domain: normalizedHost
}
}
const isLocalHostname = (hostname: string): boolean => {
const normalized = hostname.toLowerCase()
if (normalized === 'localhost') {
return true
}
const cleaned = hostname.replace(/^\[|\]$/g, '')
if (ipaddr.isValid(cleaned)) {
const parsed = ipaddr.parse(cleaned)
return parsed.range() === 'loopback'
}
return false
}
export const updateByPassRules = (rules: string[]): void => {
byPassRules = rules
parsedByPassRules = []
for (const rule of rules) {
const parsedRule = parseProxyBypassRule(rule)
if (parsedRule) {
parsedByPassRules.push(parsedRule)
} else {
logger.warn(`Skipping invalid proxy bypass rule: ${rule}`)
}
}
}
export const isByPass = (url: string) => {
if (parsedByPassRules.length === 0) {
return false
}
try {
const subjectUrlTokens = new URL(url)
for (const rule of byPassRules) {
const ruleMatch = rule.replace(/^(?<leadingDot>\.)/, '*').match(/^(?<hostname>.+?)(?::(?<port>\d+))?$/)
const parsedUrl = new URL(url)
const hostname = parsedUrl.hostname
const cleanedHostname = hostname.replace(/^\[|\]$/g, '')
const protocol = parsedUrl.protocol
const protocolName = protocol.replace(':', '').toLowerCase()
const defaultPort = getDefaultPortForProtocol(protocol)
const port = parsedUrl.port || defaultPort || ''
const hostnameIsIp = ipaddr.isValid(cleanedHostname)
if (!ruleMatch || !ruleMatch.groups) {
logger.warn('Failed to parse bypass rule:', { rule })
for (const rule of parsedByPassRules) {
if (rule.scheme && rule.scheme !== protocolName) {
continue
}
if (!ruleMatch.groups.hostname) {
if (rule.port && rule.port !== port) {
continue
}
const hostnameIsMatch = subjectUrlTokens.hostname === ruleMatch.groups.hostname
switch (rule.type) {
case ProxyBypassRuleType.Local:
if (isLocalHostname(hostname)) {
return true
}
break
case ProxyBypassRuleType.Ip:
if (!hostnameIsIp) {
break
}
if (
hostnameIsMatch &&
(!ruleMatch.groups ||
!ruleMatch.groups.port ||
(subjectUrlTokens.port && subjectUrlTokens.port === ruleMatch.groups.port))
) {
return true
if (rule.ip && cleanedHostname === rule.ip) {
return true
}
if (rule.regex && rule.regex.test(cleanedHostname)) {
return true
}
break
case ProxyBypassRuleType.Cidr:
if (hostnameIsIp && rule.cidr) {
const parsedHost = ipaddr.parse(cleanedHostname)
const [cidrAddress, prefixLength] = rule.cidr
// Ensure IP version matches before comparing
if (parsedHost.kind() === cidrAddress.kind() && parsedHost.match([cidrAddress, prefixLength])) {
return true
}
}
break
case ProxyBypassRuleType.Domain:
if (!hostnameIsIp && matchHostnameRule(hostname, rule)) {
return true
}
break
default:
logger.error(`Unknown proxy bypass rule type: ${rule.type}`)
break
}
}
return false
} catch (error) {
logger.error('Failed to check bypass:', error as Error)
return false
}
return false
}
class SelectiveDispatcher extends Dispatcher {
private proxyDispatcher: Dispatcher
@@ -154,19 +407,31 @@ export class ProxyManager {
this.isSettingProxy = true
try {
this.config = config
this.clearSystemProxyMonitor()
if (config.mode === 'system') {
const currentProxy = await getSystemProxy()
if (currentProxy) {
logger.info(`current system proxy: ${currentProxy.proxyUrl}`)
this.config.proxyRules = currentProxy.proxyUrl.toLowerCase()
logger.info(`current system proxy: ${currentProxy.proxyUrl}, bypass rules: ${currentProxy.noProxy.join(',')}`)
config.proxyRules = currentProxy.proxyUrl.toLowerCase()
config.proxyBypassRules = currentProxy.noProxy.join(',')
}
this.monitorSystemProxy()
}
byPassRules = config.proxyBypassRules?.split(',') || []
this.setGlobalProxy(this.config)
// Support both semicolon and comma as separators
if (config.proxyBypassRules !== this.config.proxyBypassRules) {
const rawRules = config.proxyBypassRules
? config.proxyBypassRules
.split(/[;,]/)
.map((rule) => rule.trim())
.filter((rule) => rule.length > 0)
: []
updateByPassRules(rawRules)
}
this.setGlobalProxy(config)
this.config = config
} catch (error) {
logger.error('Failed to config proxy:', error as Error)
throw error

View File

@@ -0,0 +1,358 @@
import { loggerService } from '@logger'
import { WebSocketCandidatesResponse, WebSocketStatusResponse } from '@shared/config/types'
import * as fs from 'fs'
import { networkInterfaces } from 'os'
import * as path from 'path'
import { Server, Socket } from 'socket.io'
import { windowService } from './WindowService'
const logger = loggerService.withContext('WebSocketService')
class WebSocketService {
private io: Server | null = null
private isStarted = false
private port = 7017
private connectedClients = new Set<string>()
private getLocalIpAddress(): string | undefined {
const interfaces = networkInterfaces()
// Network interface name patterns, ordered by priority
const interfacePriority = [
// macOS: Ethernet/Wi-Fi first
/^en[0-9]+$/, // en0, en1 (Ethernet/Wi-Fi)
/^(en|eth)[0-9]+$/, // Ethernet interfaces
/^wlan[0-9]+$/, // wireless interfaces
// Windows: Ethernet/Wi-Fi first
/^(Ethernet|Wi-Fi|Local Area Connection)/,
/^(Wi-Fi|无线网络连接)/, // includes the localized Chinese name for "Wireless Network Connection"
// Linux: Ethernet/Wi-Fi first
/^(eth|enp|wlp|wlan)[0-9]+/,
// Virtualization interfaces (low priority)
/^bridge[0-9]+$/, // Docker bridge
/^veth[0-9]+$/, // Docker veth
/^docker[0-9]+/, // Docker interfaces
/^br-[0-9a-f]+/, // Docker bridge
/^vmnet[0-9]+$/, // VMware
/^vboxnet[0-9]+$/, // VirtualBox
// VPN tunnel interfaces (low priority)
/^utun[0-9]+$/, // macOS VPN
/^tun[0-9]+$/, // Linux/Unix VPN
/^tap[0-9]+$/, // TAP interfaces
/^tailscale[0-9]*$/, // Tailscale VPN
/^wg[0-9]+$/ // WireGuard VPN
]
const candidates: Array<{ interface: string; address: string; priority: number }> = []
for (const [name, ifaces] of Object.entries(interfaces)) {
for (const iface of ifaces || []) {
if (iface.family === 'IPv4' && !iface.internal) {
// Compute the interface's priority
let priority = 999 // lowest priority by default
for (let i = 0; i < interfacePriority.length; i++) {
if (interfacePriority[i].test(name)) {
priority = i
break
}
}
candidates.push({
interface: name,
address: iface.address,
priority
})
}
}
}
if (candidates.length === 0) {
logger.warn('Unable to determine a LAN IP, falling back to 127.0.0.1')
return '127.0.0.1'
}
// Sort by priority and pick the highest-priority candidate
candidates.sort((a, b) => a.priority - b.priority)
const best = candidates[0]
logger.info(`Resolved LAN IP: ${best.address} (interface: ${best.interface})`)
return best.address
}
public start = async (): Promise<{ success: boolean; port?: number; error?: string }> => {
if (this.isStarted && this.io) {
return { success: true, port: this.port }
}
try {
this.io = new Server(this.port, {
cors: {
origin: '*',
methods: ['GET', 'POST']
},
transports: ['websocket', 'polling'],
allowEIO3: true,
pingTimeout: 60000,
pingInterval: 25000
})
this.io.on('connection', (socket: Socket) => {
this.connectedClients.add(socket.id)
const mainWindow = windowService.getMainWindow()
if (!mainWindow) {
logger.error('Main window is null, cannot send connection event')
} else {
mainWindow.webContents.send('websocket-client-connected', {
connected: true,
clientId: socket.id
})
logger.info(`Connection event sent to renderer, total clients: ${this.connectedClients.size}`)
}
socket.on('message', (data) => {
logger.info('Received message from mobile:', data)
mainWindow?.webContents.send('websocket-message-received', data)
socket.emit('message_received', { success: true })
})
socket.on('disconnect', () => {
logger.info(`Client disconnected: ${socket.id}`)
this.connectedClients.delete(socket.id)
if (this.connectedClients.size === 0) {
mainWindow?.webContents.send('websocket-client-connected', {
connected: false,
clientId: socket.id
})
}
})
})
// Engine-level event listeners
this.io.engine.on('connection_error', (err) => {
logger.error('Engine connection error:', err)
})
this.io.engine.on('connection', (rawSocket) => {
const remoteAddr = rawSocket.request.connection.remoteAddress
logger.info(`[Engine] Raw connection from: ${remoteAddr}`)
logger.info(`[Engine] Transport: ${rawSocket.transport.name}`)
rawSocket.on('packet', (packet: { type: string; data?: any }) => {
logger.info(
`[Engine] ← Packet from ${remoteAddr}: type="${packet.type}"`,
packet.data ? { data: packet.data } : {}
)
})
rawSocket.on('packetCreate', (packet: { type: string; data?: any }) => {
logger.info(`[Engine] → Packet to ${remoteAddr}: type="${packet.type}"`)
})
rawSocket.on('close', (reason: string) => {
logger.warn(`[Engine] Connection closed from ${remoteAddr}, reason: ${reason}`)
})
rawSocket.on('error', (error: Error) => {
logger.error(`[Engine] Connection error from ${remoteAddr}:`, error)
})
})
// Listen for Socket.IO handshake failures
this.io.on('connection_error', (err) => {
logger.error('[Socket.IO] Connection error during handshake:', err)
})
this.isStarted = true
logger.info(`WebSocket server started on port ${this.port}`)
return { success: true, port: this.port }
} catch (error) {
logger.error('Failed to start WebSocket server:', error as Error)
return {
success: false,
error: error instanceof Error ? error.message : 'Unknown error'
}
}
}
public stop = async (): Promise<{ success: boolean }> => {
if (!this.isStarted || !this.io) {
return { success: true }
}
try {
await new Promise<void>((resolve) => {
this.io!.close(() => {
resolve()
})
})
this.io = null
this.isStarted = false
this.connectedClients.clear()
logger.info('WebSocket server stopped')
return { success: true }
} catch (error) {
logger.error('Failed to stop WebSocket server:', error as Error)
return { success: false }
}
}
public getStatus = async (): Promise<WebSocketStatusResponse> => {
return {
isRunning: this.isStarted,
port: this.isStarted ? this.port : undefined,
ip: this.isStarted ? this.getLocalIpAddress() : undefined,
clientConnected: this.connectedClients.size > 0
}
}
public getAllCandidates = async (): Promise<WebSocketCandidatesResponse[]> => {
const interfaces = networkInterfaces()
// Network interface name patterns, ordered by priority
const interfacePriority = [
// macOS: Ethernet/Wi-Fi first
/^en[0-9]+$/, // en0, en1 (Ethernet/Wi-Fi)
/^(en|eth)[0-9]+$/, // Ethernet interfaces
/^wlan[0-9]+$/, // wireless interfaces
// Windows: Ethernet/Wi-Fi first
/^(Ethernet|Wi-Fi|Local Area Connection)/,
/^(Wi-Fi|无线网络连接)/, // includes the localized Chinese name for "Wireless Network Connection"
// Linux: Ethernet/Wi-Fi first
/^(eth|enp|wlp|wlan)[0-9]+/,
// Virtualization interfaces (low priority)
/^bridge[0-9]+$/, // Docker bridge
/^veth[0-9]+$/, // Docker veth
/^docker[0-9]+/, // Docker interfaces
/^br-[0-9a-f]+/, // Docker bridge
/^vmnet[0-9]+$/, // VMware
/^vboxnet[0-9]+$/, // VirtualBox
// VPN tunnel interfaces (low priority)
/^utun[0-9]+$/, // macOS VPN
/^tun[0-9]+$/, // Linux/Unix VPN
/^tap[0-9]+$/, // TAP interfaces
/^tailscale[0-9]*$/, // Tailscale VPN
/^wg[0-9]+$/ // WireGuard VPN
]
const candidates: Array<{ host: string; interface: string; priority: number }> = []
for (const [name, ifaces] of Object.entries(interfaces)) {
for (const iface of ifaces || []) {
if (iface.family === 'IPv4' && !iface.internal) {
// Compute the interface's priority
let priority = 999 // lowest priority by default
for (let i = 0; i < interfacePriority.length; i++) {
if (interfacePriority[i].test(name)) {
priority = i
break
}
}
candidates.push({
host: iface.address,
interface: name,
priority
})
logger.debug(`Found interface: ${name} -> ${iface.address} (priority: ${priority})`)
}
}
}
// Return candidates sorted by priority
candidates.sort((a, b) => a.priority - b.priority)
logger.info(
`Found ${candidates.length} IP candidates: ${candidates.map((c) => `${c.host}(${c.interface})`).join(', ')}`
)
return candidates
}
public sendFile = async (
_: Electron.IpcMainInvokeEvent,
filePath: string
): Promise<{ success: boolean; error?: string }> => {
if (!this.isStarted || !this.io) {
const errorMsg = 'WebSocket server is not running.'
logger.error(errorMsg)
return { success: false, error: errorMsg }
}
if (this.connectedClients.size === 0) {
const errorMsg = 'No client connected.'
logger.error(errorMsg)
return { success: false, error: errorMsg }
}
const mainWindow = windowService.getMainWindow()
return new Promise((resolve, reject) => {
const stats = fs.statSync(filePath)
const totalSize = stats.size
const filename = path.basename(filePath)
const stream = fs.createReadStream(filePath)
let bytesSent = 0
const startTime = Date.now()
logger.info(`Starting file transfer: ${filename} (${this.formatFileSize(totalSize)})`)
// Signal the start of the file to the client, including filename and total size
this.io!.emit('zip-file-start', { filename, totalSize })
stream.on('data', (chunk) => {
bytesSent += chunk.length
const progress = (bytesSent / totalSize) * 100
// Send a file chunk to the client
this.io!.emit('zip-file-chunk', chunk)
// Send a progress update to the renderer process
mainWindow?.webContents.send('file-send-progress', { progress })
// Log progress roughly every 10%
if (Math.floor(progress) % 10 === 0) {
const elapsed = (Date.now() - startTime) / 1000
const speed = elapsed > 0 ? bytesSent / elapsed : 0
logger.info(`Transfer progress: ${Math.floor(progress)}% (${this.formatFileSize(speed)}/s)`)
}
})
stream.on('end', () => {
const totalTime = (Date.now() - startTime) / 1000
const avgSpeed = totalTime > 0 ? totalSize / totalTime : 0
logger.info(
`File transfer completed: ${filename} in ${totalTime.toFixed(1)}s (${this.formatFileSize(avgSpeed)}/s)`
)
// Make sure a final 100% progress update is sent
mainWindow?.webContents.send('file-send-progress', { progress: 100 })
// Signal the end of the file to the client
this.io!.emit('zip-file-end')
resolve({ success: true })
})
stream.on('error', (error) => {
logger.error(`File transfer failed: ${filename}`, error)
reject({
success: false,
error: error instanceof Error ? error.message : 'Unknown error'
})
})
})
}
private formatFileSize(bytes: number): string {
if (bytes === 0) return '0 B'
const k = 1024
const sizes = ['B', 'KB', 'MB', 'GB']
const i = Math.floor(Math.log(bytes) / Math.log(k))
return parseFloat((bytes / Math.pow(k, i)).toFixed(2)) + ' ' + sizes[i]
}
}
export default new WebSocketService()
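Editor's note: a sketch of the receiving side of the zip-file protocol above, assuming socket.io-client on the mobile app; the host address is illustrative and would come from the QR payload in practice.
import { io } from 'socket.io-client'

const socket = io('http://192.168.1.10:7017', { transports: ['websocket', 'polling'] })
const chunks: Uint8Array[] = []
let expectedBytes = 0

socket.on('zip-file-start', ({ filename, totalSize }: { filename: string; totalSize: number }) => {
  expectedBytes = totalSize
  console.log(`receiving ${filename} (${totalSize} bytes)`)
})
socket.on('zip-file-chunk', (chunk: ArrayBuffer) => {
  chunks.push(new Uint8Array(chunk))
})
socket.on('zip-file-end', () => {
  const received = chunks.reduce((sum, c) => sum + c.length, 0)
  console.log(`transfer complete: ${received}/${expectedBytes} bytes`)
})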

View File

@@ -0,0 +1,86 @@
import { beforeEach, describe, expect, it } from 'vitest'
import { isByPass, updateByPassRules } from '../ProxyManager'
describe('ProxyManager - bypass evaluation', () => {
beforeEach(() => {
updateByPassRules([])
})
it('matches simple hostname patterns', () => {
updateByPassRules(['foobar.com'])
expect(isByPass('http://foobar.com')).toBe(true)
expect(isByPass('http://www.foobar.com')).toBe(false)
updateByPassRules(['*.foobar.com'])
expect(isByPass('http://api.foobar.com')).toBe(true)
expect(isByPass('http://foobar.com')).toBe(true)
expect(isByPass('http://foobar.org')).toBe(false)
updateByPassRules(['*foobar.com'])
expect(isByPass('http://devfoobar.com')).toBe(true)
expect(isByPass('http://foobar.com')).toBe(true)
expect(isByPass('http://foobar.company')).toBe(false)
})
it('matches hostname patterns with scheme and port qualifiers', () => {
updateByPassRules(['https://secure.example.com'])
expect(isByPass('https://secure.example.com')).toBe(true)
expect(isByPass('https://secure.example.com:443/home')).toBe(true)
expect(isByPass('http://secure.example.com')).toBe(false)
updateByPassRules(['https://secure.example.com:8443'])
expect(isByPass('https://secure.example.com:8443')).toBe(true)
expect(isByPass('https://secure.example.com')).toBe(false)
expect(isByPass('https://secure.example.com:443')).toBe(false)
updateByPassRules(['https://x.*.y.com:99'])
expect(isByPass('https://x.api.y.com:99')).toBe(true)
expect(isByPass('https://x.api.y.com')).toBe(false)
expect(isByPass('http://x.api.y.com:99')).toBe(false)
})
it('matches domain suffix patterns with leading dot', () => {
updateByPassRules(['.example.com'])
expect(isByPass('https://example.com')).toBe(true)
expect(isByPass('https://api.example.com')).toBe(true)
expect(isByPass('https://deep.api.example.com')).toBe(true)
expect(isByPass('https://example.org')).toBe(false)
updateByPassRules(['.com'])
expect(isByPass('https://anything.com')).toBe(true)
expect(isByPass('https://example.org')).toBe(false)
updateByPassRules(['http://.google.com'])
expect(isByPass('http://maps.google.com')).toBe(true)
expect(isByPass('https://maps.google.com')).toBe(false)
})
it('matches IP literals, CIDR ranges, and wildcard IPs', () => {
updateByPassRules(['127.0.0.1', '[::1]', '192.168.1.0/24', 'fefe:13::abc/33', '192.168.*.*'])
expect(isByPass('http://127.0.0.1')).toBe(true)
expect(isByPass('http://[::1]')).toBe(true)
expect(isByPass('http://192.168.1.55')).toBe(true)
expect(isByPass('http://192.168.200.200')).toBe(true)
expect(isByPass('http://192.169.1.1')).toBe(false)
expect(isByPass('http://[fefe:13::abc]')).toBe(true)
})
it('matches CIDR ranges specified with IPv6 prefix lengths', () => {
updateByPassRules(['[2001:db8::1]', '2001:db8::/32'])
expect(isByPass('http://[2001:db8::1]')).toBe(true)
expect(isByPass('http://[2001:db8:0:0:0:0:0:ffff]')).toBe(true)
expect(isByPass('http://[2001:db9::1]')).toBe(false)
})
it('matches local addresses when <local> keyword is provided', () => {
updateByPassRules(['<local>'])
expect(isByPass('http://localhost')).toBe(true)
expect(isByPass('http://127.0.0.1')).toBe(true)
expect(isByPass('http://[::1]')).toBe(true)
expect(isByPass('http://dev.localdomain')).toBe(false)
})
})

View File

@@ -1,336 +0,0 @@
import { type Client, createClient } from '@libsql/client'
import { loggerService } from '@logger'
import { mcpApiService } from '@main/apiServer/services/mcp'
import { ModelValidationError, validateModelId } from '@main/apiServer/utils'
import { AgentType, MCPTool, objectKeys, SlashCommand, Tool } from '@types'
import { drizzle, type LibSQLDatabase } from 'drizzle-orm/libsql'
import fs from 'fs'
import path from 'path'
import { MigrationService } from './database/MigrationService'
import * as schema from './database/schema'
import { dbPath } from './drizzle.config'
import { AgentModelField, AgentModelValidationError } from './errors'
import { builtinSlashCommands } from './services/claudecode/commands'
import { builtinTools } from './services/claudecode/tools'
const logger = loggerService.withContext('BaseService')
/**
* Base service class providing shared database connection and utilities
* for all agent-related services.
*
* Features:
* - Programmatic schema management (no CLI dependencies)
* - Automatic table creation and migration
* - Schema version tracking and compatibility checks
* - Transaction-based operations for safety
* - Development vs production mode handling
* - Connection retry logic with exponential backoff
*/
export abstract class BaseService {
protected static client: Client | null = null
protected static db: LibSQLDatabase<typeof schema> | null = null
protected static isInitialized = false
protected static initializationPromise: Promise<void> | null = null
protected jsonFields: string[] = ['tools', 'mcps', 'configuration', 'accessible_paths', 'allowed_tools']
/**
* Initialize database with retry logic and proper error handling
*/
protected static async initialize(): Promise<void> {
// Return existing initialization if in progress
if (BaseService.initializationPromise) {
return BaseService.initializationPromise
}
if (BaseService.isInitialized) {
return
}
BaseService.initializationPromise = BaseService.performInitialization()
return BaseService.initializationPromise
}
public async listMcpTools(agentType: AgentType, ids?: string[]): Promise<Tool[]> {
const tools: Tool[] = []
if (agentType === 'claude-code') {
tools.push(...builtinTools)
}
if (ids && ids.length > 0) {
for (const id of ids) {
try {
const server = await mcpApiService.getServerInfo(id)
if (server) {
server.tools.forEach((tool: MCPTool) => {
tools.push({
id: `mcp_${id}_${tool.name}`,
name: tool.name,
type: 'mcp',
description: tool.description || '',
requirePermissions: true
})
})
}
} catch (error) {
logger.warn('Failed to list MCP tools', {
id,
error: error as Error
})
}
}
}
return tools
}
public async listSlashCommands(agentType: AgentType): Promise<SlashCommand[]> {
if (agentType === 'claude-code') {
return builtinSlashCommands
}
return []
}
private static async performInitialization(): Promise<void> {
const maxRetries = 3
let lastError: Error
for (let attempt = 1; attempt <= maxRetries; attempt++) {
try {
logger.info(`Initializing Agent database at: ${dbPath} (attempt ${attempt}/${maxRetries})`)
// Ensure the database directory exists
const dbDir = path.dirname(dbPath)
if (!fs.existsSync(dbDir)) {
logger.info(`Creating database directory: ${dbDir}`)
fs.mkdirSync(dbDir, { recursive: true })
}
BaseService.client = createClient({
url: `file:${dbPath}`
})
BaseService.db = drizzle(BaseService.client, { schema })
// Run database migrations
const migrationService = new MigrationService(BaseService.db, BaseService.client)
await migrationService.runMigrations()
BaseService.isInitialized = true
logger.info('Agent database initialized successfully')
return
} catch (error) {
lastError = error as Error
logger.warn(`Database initialization attempt ${attempt} failed:`, lastError)
// Clean up on failure
if (BaseService.client) {
try {
BaseService.client.close()
} catch (closeError) {
logger.warn('Failed to close client during cleanup:', closeError as Error)
}
}
BaseService.client = null
BaseService.db = null
// Wait before retrying (exponential backoff)
if (attempt < maxRetries) {
const delay = Math.pow(2, attempt) * 1000 // 2s after the first failure, 4s after the second
logger.info(`Retrying in ${delay}ms...`)
await new Promise((resolve) => setTimeout(resolve, delay))
}
}
}
// All retries failed
BaseService.initializationPromise = null
logger.error('Failed to initialize Agent database after all retries:', lastError!)
throw lastError!
}
protected ensureInitialized(): void {
if (!BaseService.isInitialized || !BaseService.db || !BaseService.client) {
throw new Error('Database not initialized. Call initialize() first.')
}
}
protected get database(): LibSQLDatabase<typeof schema> {
this.ensureInitialized()
return BaseService.db!
}
protected get rawClient(): Client {
this.ensureInitialized()
return BaseService.client!
}
protected serializeJsonFields(data: any): any {
const serialized = { ...data }
for (const field of this.jsonFields) {
if (serialized[field] !== undefined) {
serialized[field] =
Array.isArray(serialized[field]) || typeof serialized[field] === 'object'
? JSON.stringify(serialized[field])
: serialized[field]
}
}
return serialized
}
protected deserializeJsonFields(data: any): any {
if (!data) return data
const deserialized = { ...data }
for (const field of this.jsonFields) {
if (deserialized[field] && typeof deserialized[field] === 'string') {
try {
deserialized[field] = JSON.parse(deserialized[field])
} catch (error) {
logger.warn(`Failed to parse JSON field ${field}:`, error as Error)
}
}
}
// convert null from db to undefined to satisfy type definition
for (const key of objectKeys(data)) {
if (deserialized[key] === null) {
deserialized[key] = undefined
}
}
return deserialized
}
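// Round-trip example (illustrative values): serializeJsonFields({ name: 'a', tools: ['bash'] })
// stores tools as the string '["bash"]'; deserializeJsonFields turns it back into ['bash']
// and rewrites any null columns (e.g. description: null) to undefined.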
/**
* Validate, normalize, and ensure filesystem access for a set of absolute paths.
*
* - Requires every entry to be an absolute path and throws if not.
* - Normalizes each path and deduplicates while preserving order.
* - Creates missing directories (or parent directories for file-like paths).
*/
protected ensurePathsExist(paths?: string[]): string[] {
if (!paths?.length) {
return []
}
const sanitizedPaths: string[] = []
const seenPaths = new Set<string>()
for (const rawPath of paths) {
if (!rawPath) {
continue
}
if (!path.isAbsolute(rawPath)) {
throw new Error(`Accessible path must be absolute: ${rawPath}`)
}
// Normalize to provide consistent values to downstream consumers.
const resolvedPath = path.normalize(rawPath)
let stats: fs.Stats | null = null
try {
// Attempt to stat the path to understand whether it already exists and if it is a file.
if (fs.existsSync(resolvedPath)) {
stats = fs.statSync(resolvedPath)
}
} catch (error) {
logger.warn('Failed to inspect accessible path', {
path: rawPath,
error: error instanceof Error ? error.message : String(error)
})
}
const looksLikeFile =
(stats && stats.isFile()) || (!stats && path.extname(resolvedPath) !== '' && !resolvedPath.endsWith(path.sep))
// For file-like targets create the parent directory; otherwise ensure the directory itself.
const directoryToEnsure = looksLikeFile ? path.dirname(resolvedPath) : resolvedPath
if (!fs.existsSync(directoryToEnsure)) {
try {
fs.mkdirSync(directoryToEnsure, { recursive: true })
} catch (error) {
logger.error('Failed to create accessible path directory', {
path: directoryToEnsure,
error: error instanceof Error ? error.message : String(error)
})
throw error
}
}
// Preserve the first occurrence only to avoid duplicates while keeping caller order stable.
if (!seenPaths.has(resolvedPath)) {
seenPaths.add(resolvedPath)
sanitizedPaths.push(resolvedPath)
}
}
return sanitizedPaths
}
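// Example (illustrative paths): ensurePathsExist(['/tmp/agents/logs', '/tmp/agents/logs',
// '/tmp/agents/notes.txt']) returns ['/tmp/agents/logs', '/tmp/agents/notes.txt'], creating
// /tmp/agents/logs itself and /tmp/agents as the parent of the file-like notes.txt entry.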
/**
* Validate agent model ids (model, plan_model, small_model) against the configured
* providers before any database write.
*/
protected async validateAgentModels(
agentType: AgentType,
models: Partial<Record<AgentModelField, string | undefined>>
): Promise<void> {
const entries = Object.entries(models) as [AgentModelField, string | undefined][]
if (entries.length === 0) {
return
}
for (const [field, rawValue] of entries) {
if (rawValue === undefined || rawValue === null) {
continue
}
const modelValue = rawValue
const validation = await validateModelId(modelValue)
if (!validation.valid || !validation.provider) {
const detail: ModelValidationError = validation.error ?? {
type: 'invalid_format',
message: 'Unknown model validation error',
code: 'validation_error'
}
throw new AgentModelValidationError({ agentType, field, model: modelValue }, detail)
}
if (!validation.provider.apiKey) {
throw new AgentModelValidationError(
{ agentType, field, model: modelValue },
{
type: 'invalid_format',
message: `Provider '${validation.provider.id}' is missing an API key`,
code: 'provider_api_key_missing'
}
)
}
}
}
/**
* Force re-initialization (for development/testing)
*/
static async reinitialize(): Promise<void> {
BaseService.isInitialized = false
BaseService.initializationPromise = null
if (BaseService.client) {
try {
BaseService.client.close()
} catch (error) {
logger.warn('Failed to close client during reinitialize:', error as Error)
}
}
BaseService.client = null
BaseService.db = null
await BaseService.initialize()
}
}


@@ -1,81 +0,0 @@
# Agents Service
Simplified Drizzle ORM implementation for agent and session management in Cherry Studio.
## Features
- **Native Drizzle migrations** - Uses built-in migrate() function
- **Zero CLI dependencies** in production
- **Auto-initialization** with retry logic
- **Full TypeScript** type safety
- **Model validation** to ensure models exist and provider configuration matches the agent type
## Schema
- `agents.schema.ts` - Agent definitions
- `sessions.schema.ts` - Session and message tables
- `migrations.schema.ts` - Migration tracking
## Usage
```typescript
import { agentService } from './services'

// Create agent - fully typed
const agent = await agentService.createAgent({
  type: 'custom',
  name: 'My Agent',
  model: 'anthropic:claude-3-5-sonnet-20241022'
})
```
## Model Validation
- Model identifiers must use the `provider:model_id` format (for example `anthropic:claude-3-5-sonnet-20241022`).
- `model`, `plan_model`, and `small_model` are validated against the configured providers before the database is touched.
- Invalid configurations return a `400 invalid_request_error` response and the create/update operation is aborted.
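A minimal sketch of reacting to a failed validation (assuming `createAgent` surfaces the `AgentModelValidationError` thrown by the validation layer; the agent fields here are illustrative):
```typescript
import { AgentModelValidationError } from './errors'
import { agentService } from './services'

try {
  // Missing the `provider:model_id` separator, so validation fails before any DB write
  await agentService.createAgent({ type: 'custom', name: 'Bad Agent', model: 'claude-3-5-sonnet' })
} catch (error) {
  if (error instanceof AgentModelValidationError) {
    console.warn(`Rejected ${error.context.field}: ${error.detail.message}`)
  }
}
```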
## Development Commands
```bash
# Generate migration files from schema changes
yarn agents:generate
# Quick development sync
yarn agents:push
# Database tools
yarn agents:studio # Open Drizzle Studio
yarn agents:health # Health check
yarn agents:drop # Reset database
```
## Workflow
1. **Edit schema** in `/database/schema/`
2. **Generate migration** with `yarn agents:generate`
3. **Test changes** with `yarn agents:health`
4. **Deploy** - migrations apply automatically
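For example, step 1 might be nothing more than adding a column to an existing table (the `avatar` column below is hypothetical):
```typescript
// database/schema/agents.schema.ts (hypothetical edit)
import { sqliteTable, text } from 'drizzle-orm/sqlite-core'

export const agentsTable = sqliteTable('agents', {
  id: text('id').primaryKey(),
  // ...existing columns...
  avatar: text('avatar') // new optional column; `yarn agents:generate` emits the matching SQL migration
})
```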
## Services
- `AgentService` - Agent CRUD operations
- `SessionService` - Session management
- `SessionMessageService` - Message logging
- `BaseService` - Database utilities
- `schemaSyncer` - Migration handler
## Troubleshooting
```bash
# Check status
yarn agents:health
# Apply migrations
yarn agents:migrate
# Reset completely
yarn agents:reset --yes
```
The simplified migration system cut the migration code from 463 lines to roughly 30 while keeping full functionality through Drizzle's native migration system.


@@ -1,161 +0,0 @@
import { type Client } from '@libsql/client'
import { loggerService } from '@logger'
import { getResourcePath } from '@main/utils'
import { type LibSQLDatabase } from 'drizzle-orm/libsql'
import fs from 'fs'
import path from 'path'
import * as schema from './schema'
import { migrations, type NewMigration } from './schema/migrations.schema'
const logger = loggerService.withContext('MigrationService')
interface MigrationJournal {
version: string
dialect: string
entries: Array<{
idx: number
version: string
when: number
tag: string
breakpoints: boolean
}>
}
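// Illustrative _journal.json matching this shape (all field values are examples only):
// {
//   "version": "7",
//   "dialect": "sqlite",
//   "entries": [
//     { "idx": 0, "version": "6", "when": 1730000000000, "tag": "0000_init", "breakpoints": true }
//   ]
// }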
export class MigrationService {
private db: LibSQLDatabase<typeof schema>
private client: Client
private migrationDir: string
constructor(db: LibSQLDatabase<typeof schema>, client: Client) {
this.db = db
this.client = client
this.migrationDir = path.join(getResourcePath(), 'database', 'drizzle')
}
async runMigrations(): Promise<void> {
try {
logger.info('Starting migration check...')
const hasMigrationsTable = await this.migrationsTableExists()
if (!hasMigrationsTable) {
logger.info('Migrations table not found; assuming fresh database state')
}
// Read migration journal
const journal = await this.readMigrationJournal()
if (!journal.entries.length) {
logger.info('No migrations found in journal')
return
}
// Get applied migrations
const appliedMigrations = hasMigrationsTable ? await this.getAppliedMigrations() : []
const appliedVersions = new Set(appliedMigrations.map((m) => Number(m.version)))
const latestAppliedVersion = appliedMigrations.reduce(
(max, migration) => Math.max(max, Number(migration.version)),
0
)
const latestJournalVersion = journal.entries.reduce((max, entry) => Math.max(max, entry.idx), 0)
logger.info(`Latest applied migration: v${latestAppliedVersion}, latest available: v${latestJournalVersion}`)
// Find pending migrations (the stored version column holds the journal idx, so the two compare directly)
const pendingMigrations = journal.entries
.filter((entry) => !appliedVersions.has(entry.idx))
.sort((a, b) => a.idx - b.idx)
if (pendingMigrations.length === 0) {
logger.info('Database is up to date')
return
}
logger.info(`Found ${pendingMigrations.length} pending migrations`)
// Execute pending migrations
for (const migration of pendingMigrations) {
await this.executeMigration(migration)
}
logger.info('All migrations completed successfully')
} catch (error) {
logger.error('Migration failed:', { error })
throw error
}
}
private async migrationsTableExists(): Promise<boolean> {
try {
const table = await this.client.execute(`SELECT name FROM sqlite_master WHERE type='table' AND name='migrations'`)
return table.rows.length > 0
} catch (error) {
logger.error('Failed to check migrations table status:', { error })
throw error
}
}
private async readMigrationJournal(): Promise<MigrationJournal> {
const journalPath = path.join(this.migrationDir, 'meta', '_journal.json')
if (!fs.existsSync(journalPath)) {
logger.warn('Migration journal not found:', { journalPath })
return { version: '7', dialect: 'sqlite', entries: [] }
}
try {
const journalContent = fs.readFileSync(journalPath, 'utf-8')
return JSON.parse(journalContent)
} catch (error) {
logger.error('Failed to read migration journal:', { error })
throw error
}
}
private async getAppliedMigrations(): Promise<schema.Migration[]> {
try {
return await this.db.select().from(migrations)
} catch (error) {
// This should not happen since we ensure the table exists in runMigrations()
logger.error('Failed to query applied migrations:', { error })
throw error
}
}
private async executeMigration(migration: MigrationJournal['entries'][0]): Promise<void> {
const sqlFilePath = path.join(this.migrationDir, `${migration.tag}.sql`)
if (!fs.existsSync(sqlFilePath)) {
throw new Error(`Migration SQL file not found: ${sqlFilePath}`)
}
try {
logger.info(`Executing migration ${migration.tag}...`)
const startTime = Date.now()
// Read and execute SQL
const sqlContent = fs.readFileSync(sqlFilePath, 'utf-8')
await this.client.executeMultiple(sqlContent)
// Record migration as applied (store journal idx as version for tracking)
const newMigration: NewMigration = {
version: migration.idx,
tag: migration.tag,
executedAt: Date.now()
}
if (!(await this.migrationsTableExists())) {
throw new Error('Migrations table missing after executing migration; cannot record progress')
}
await this.db.insert(migrations).values(newMigration)
const executionTime = Date.now() - startTime
logger.info(`Migration ${migration.tag} completed in ${executionTime}ms`)
} catch (error) {
logger.error(`Migration ${migration.tag} failed:`, { error })
throw error
}
}
}


@@ -1,14 +0,0 @@
/**
* Database Module
*
* This module provides centralized access to Drizzle ORM schemas
* for type-safe database operations.
*
* Schema evolution is handled by Drizzle Kit migrations.
*/
// Drizzle ORM schemas
export * from './schema'
// Repository helpers
export * from './sessionMessageRepository'


@@ -1,35 +0,0 @@
/**
* Drizzle ORM schema for agents table
*/
import { index, sqliteTable, text } from 'drizzle-orm/sqlite-core'
export const agentsTable = sqliteTable('agents', {
id: text('id').primaryKey(),
type: text('type').notNull(),
name: text('name').notNull(),
description: text('description'),
accessible_paths: text('accessible_paths'), // JSON array of directory paths the agent can access
instructions: text('instructions'),
model: text('model').notNull(), // Main model ID (required)
plan_model: text('plan_model'), // Optional plan/thinking model ID
small_model: text('small_model'), // Optional small/fast model ID
mcps: text('mcps'), // JSON array of MCP tool IDs
allowed_tools: text('allowed_tools'), // JSON array of allowed tool IDs (whitelist)
configuration: text('configuration'), // JSON, extensible settings
created_at: text('created_at').notNull(),
updated_at: text('updated_at').notNull()
})
// Indexes for agents table
export const agentsNameIdx = index('idx_agents_name').on(agentsTable.name)
export const agentsTypeIdx = index('idx_agents_type').on(agentsTable.type)
export const agentsCreatedAtIdx = index('idx_agents_created_at').on(agentsTable.created_at)
export type AgentRow = typeof agentsTable.$inferSelect
export type InsertAgentRow = typeof agentsTable.$inferInsert


@@ -1,8 +0,0 @@
/**
* Drizzle ORM schema exports
*/
export * from './agents.schema'
export * from './messages.schema'
export * from './migrations.schema'
export * from './sessions.schema'


@@ -1,30 +0,0 @@
import { foreignKey, index, integer, sqliteTable, text } from 'drizzle-orm/sqlite-core'
import { sessionsTable } from './sessions.schema'
// session_messages table to log all messages, thoughts, actions, observations in a session
export const sessionMessagesTable = sqliteTable('session_messages', {
id: integer('id').primaryKey({ autoIncrement: true }),
session_id: text('session_id').notNull(),
role: text('role').notNull(), // 'user', 'agent', 'system', 'tool'
content: text('content').notNull(), // JSON structured data
agent_session_id: text('agent_session_id').default(''),
metadata: text('metadata'), // JSON metadata (optional)
created_at: text('created_at').notNull(),
updated_at: text('updated_at').notNull()
})
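// Illustrative `content` value (shape inferred from AgentMessageRepository, which reads
// parsed.message.id / .role / .createdAt back out of this column):
// '{"message":{"id":"msg_1","role":"user","createdAt":"2025-01-01T00:00:00.000Z"}}'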
// Indexes for session_messages table
export const sessionMessagesSessionIdIdx = index('idx_session_messages_session_id').on(sessionMessagesTable.session_id)
export const sessionMessagesCreatedAtIdx = index('idx_session_messages_created_at').on(sessionMessagesTable.created_at)
export const sessionMessagesUpdatedAtIdx = index('idx_session_messages_updated_at').on(sessionMessagesTable.updated_at)
// Foreign keys for session_messages table
export const sessionMessagesFkSession = foreignKey({
columns: [sessionMessagesTable.session_id],
foreignColumns: [sessionsTable.id],
name: 'fk_session_messages_session_id'
}).onDelete('cascade')
export type SessionMessageRow = typeof sessionMessagesTable.$inferSelect
export type InsertSessionMessageRow = typeof sessionMessagesTable.$inferInsert


@@ -1,14 +0,0 @@
/**
* Migration tracking schema
*/
import { integer, sqliteTable, text } from 'drizzle-orm/sqlite-core'
export const migrations = sqliteTable('migrations', {
version: integer('version').primaryKey(),
tag: text('tag').notNull(),
executedAt: integer('executed_at').notNull()
})
export type Migration = typeof migrations.$inferSelect
export type NewMigration = typeof migrations.$inferInsert


@@ -1,45 +0,0 @@
/**
* Drizzle ORM schema for sessions and session_logs tables
*/
import { foreignKey, index, sqliteTable, text } from 'drizzle-orm/sqlite-core'
import { agentsTable } from './agents.schema'
export const sessionsTable = sqliteTable('sessions', {
id: text('id').primaryKey(),
agent_type: text('agent_type').notNull(),
agent_id: text('agent_id').notNull(), // Primary agent ID for the session
name: text('name').notNull(),
description: text('description'),
accessible_paths: text('accessible_paths'), // JSON array of directory paths the agent can access
instructions: text('instructions'),
model: text('model').notNull(), // Main model ID (required)
plan_model: text('plan_model'), // Optional plan/thinking model ID
small_model: text('small_model'), // Optional small/fast model ID
mcps: text('mcps'), // JSON array of MCP tool IDs
allowed_tools: text('allowed_tools'), // JSON array of allowed tool IDs (whitelist)
configuration: text('configuration'), // JSON, extensible settings
created_at: text('created_at').notNull(),
updated_at: text('updated_at').notNull()
})
// Foreign keys for sessions table
export const sessionsFkAgent = foreignKey({
columns: [sessionsTable.agent_id],
foreignColumns: [agentsTable.id],
name: 'fk_session_agent_id'
}).onDelete('cascade')
// Indexes for sessions table
export const sessionsCreatedAtIdx = index('idx_sessions_created_at').on(sessionsTable.created_at)
export const sessionsMainAgentIdIdx = index('idx_sessions_agent_id').on(sessionsTable.agent_id)
export const sessionsModelIdx = index('idx_sessions_model').on(sessionsTable.model)
export type SessionRow = typeof sessionsTable.$inferSelect
export type InsertSessionRow = typeof sessionsTable.$inferInsert


@@ -1,257 +0,0 @@
import { loggerService } from '@logger'
import type {
AgentMessageAssistantPersistPayload,
AgentMessagePersistExchangePayload,
AgentMessagePersistExchangeResult,
AgentMessageUserPersistPayload,
AgentPersistedMessage,
AgentSessionMessageEntity
} from '@types'
import { and, asc, eq } from 'drizzle-orm'
import { BaseService } from '../BaseService'
import type { InsertSessionMessageRow, SessionMessageRow } from './schema'
import { sessionMessagesTable } from './schema'
const logger = loggerService.withContext('AgentMessageRepository')
// Drizzle transaction handle; typed loosely so helpers accept either the root db or a transaction
type TxClient = any
export type PersistUserMessageParams = AgentMessageUserPersistPayload & {
sessionId: string
agentSessionId?: string
tx?: TxClient
}
export type PersistAssistantMessageParams = AgentMessageAssistantPersistPayload & {
sessionId: string
agentSessionId: string
tx?: TxClient
}
type PersistExchangeParams = AgentMessagePersistExchangePayload & {
tx?: TxClient
}
type PersistExchangeResult = AgentMessagePersistExchangeResult
class AgentMessageRepository extends BaseService {
private static instance: AgentMessageRepository | null = null
static getInstance(): AgentMessageRepository {
if (!AgentMessageRepository.instance) {
AgentMessageRepository.instance = new AgentMessageRepository()
}
return AgentMessageRepository.instance
}
private serializeMessage(payload: AgentPersistedMessage): string {
return JSON.stringify(payload)
}
private serializeMetadata(metadata?: Record<string, unknown>): string | undefined {
if (!metadata) {
return undefined
}
try {
return JSON.stringify(metadata)
} catch (error) {
logger.warn('Failed to serialize session message metadata', error as Error)
return undefined
}
}
private deserialize(row: any): AgentSessionMessageEntity {
if (!row) return row
const deserialized = { ...row }
if (typeof deserialized.content === 'string') {
try {
deserialized.content = JSON.parse(deserialized.content)
} catch (error) {
logger.warn('Failed to parse session message content JSON', error as Error)
}
}
if (typeof deserialized.metadata === 'string') {
try {
deserialized.metadata = JSON.parse(deserialized.metadata)
} catch (error) {
logger.warn('Failed to parse session message metadata JSON', error as Error)
}
}
return deserialized
}
private getWriter(tx?: TxClient): TxClient {
return tx ?? this.database
}
private async findExistingMessageRow(
writer: TxClient,
sessionId: string,
role: string,
messageId: string
): Promise<SessionMessageRow | null> {
const candidateRows: SessionMessageRow[] = await writer
.select()
.from(sessionMessagesTable)
.where(and(eq(sessionMessagesTable.session_id, sessionId), eq(sessionMessagesTable.role, role)))
.orderBy(asc(sessionMessagesTable.created_at))
for (const row of candidateRows) {
if (!row?.content) continue
try {
const parsed = JSON.parse(row.content) as AgentPersistedMessage | undefined
if (parsed?.message?.id === messageId) {
return row
}
} catch (error) {
logger.warn('Failed to parse session message content JSON during lookup', error as Error)
}
}
return null
}
private async upsertMessage(
params: PersistUserMessageParams | PersistAssistantMessageParams
): Promise<AgentSessionMessageEntity> {
await AgentMessageRepository.initialize()
this.ensureInitialized()
const { sessionId, agentSessionId = '', payload, metadata, createdAt, tx } = params
if (!payload?.message?.role) {
throw new Error('Message payload missing role')
}
if (!payload.message.id) {
throw new Error('Message payload missing id')
}
const writer = this.getWriter(tx)
const now = createdAt ?? payload.message.createdAt ?? new Date().toISOString()
const serializedPayload = this.serializeMessage(payload)
const serializedMetadata = this.serializeMetadata(metadata)
const existingRow = await this.findExistingMessageRow(writer, sessionId, payload.message.role, payload.message.id)
if (existingRow) {
const metadataToPersist = serializedMetadata ?? existingRow.metadata ?? undefined
const agentSessionToPersist = agentSessionId || existingRow.agent_session_id || ''
await writer
.update(sessionMessagesTable)
.set({
content: serializedPayload,
metadata: metadataToPersist,
agent_session_id: agentSessionToPersist,
updated_at: now
})
.where(eq(sessionMessagesTable.id, existingRow.id))
return this.deserialize({
...existingRow,
content: serializedPayload,
metadata: metadataToPersist,
agent_session_id: agentSessionToPersist,
updated_at: now
})
}
const insertData: InsertSessionMessageRow = {
session_id: sessionId,
role: payload.message.role,
content: serializedPayload,
agent_session_id: agentSessionId,
metadata: serializedMetadata,
created_at: now,
updated_at: now
}
const [saved] = await writer.insert(sessionMessagesTable).values(insertData).returning()
return this.deserialize(saved)
}
async persistUserMessage(params: PersistUserMessageParams): Promise<AgentSessionMessageEntity> {
return this.upsertMessage({ ...params, agentSessionId: params.agentSessionId ?? '' })
}
async persistAssistantMessage(params: PersistAssistantMessageParams): Promise<AgentSessionMessageEntity> {
return this.upsertMessage(params)
}
async persistExchange(params: PersistExchangeParams): Promise<PersistExchangeResult> {
await AgentMessageRepository.initialize()
this.ensureInitialized()
const { sessionId, agentSessionId, user, assistant } = params
const result = await this.database.transaction(async (tx) => {
const exchangeResult: PersistExchangeResult = {}
if (user?.payload) {
exchangeResult.userMessage = await this.persistUserMessage({
sessionId,
agentSessionId,
payload: user.payload,
metadata: user.metadata,
createdAt: user.createdAt,
tx
})
}
if (assistant?.payload) {
exchangeResult.assistantMessage = await this.persistAssistantMessage({
sessionId,
agentSessionId,
payload: assistant.payload,
metadata: assistant.metadata,
createdAt: assistant.createdAt,
tx
})
}
return exchangeResult
})
return result
}
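// Illustrative call (ids and payload variables are examples only; both writes share one transaction):
// await agentMessageRepository.persistExchange({
//   sessionId: 'sess_1',
//   agentSessionId: 'acp_1',
//   user: { payload: userPersistedMessage },
//   assistant: { payload: assistantPersistedMessage }
// })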
async getSessionHistory(sessionId: string): Promise<AgentPersistedMessage[]> {
await AgentMessageRepository.initialize()
this.ensureInitialized()
try {
const rows = await this.database
.select()
.from(sessionMessagesTable)
.where(eq(sessionMessagesTable.session_id, sessionId))
.orderBy(asc(sessionMessagesTable.created_at))
const messages: AgentPersistedMessage[] = []
for (const row of rows) {
const deserialized = this.deserialize(row)
if (deserialized?.content) {
messages.push(deserialized.content as AgentPersistedMessage)
}
}
logger.info(`Loaded ${messages.length} messages for session ${sessionId}`)
return messages
} catch (error) {
logger.error('Failed to load session history', error as Error)
throw error
}
}
}
export const agentMessageRepository = AgentMessageRepository.getInstance()


@@ -1,31 +0,0 @@
/**
* Drizzle Kit configuration for agents database
*/
import os from 'node:os'
import path from 'node:path'
import { defineConfig } from 'drizzle-kit'
import { app } from 'electron'
function getDbPath() {
if (process.env.NODE_ENV === 'development') {
return path.join(os.homedir(), '.cherrystudio', 'data', 'agents.db')
}
return path.join(app.getPath('userData'), 'agents.db')
}
const resolvedDbPath = getDbPath()
export const dbPath = resolvedDbPath
export default defineConfig({
dialect: 'sqlite',
schema: './src/main/services/agents/database/schema/index.ts',
out: './resources/database/drizzle',
dbCredentials: {
url: `file:${resolvedDbPath}`
},
verbose: true,
strict: true
})


@@ -1,22 +0,0 @@
import { ModelValidationError } from '@main/apiServer/utils'
import { AgentType } from '@types'
export type AgentModelField = 'model' | 'plan_model' | 'small_model'
export interface AgentModelValidationContext {
agentType: AgentType
field: AgentModelField
model?: string
}
export class AgentModelValidationError extends Error {
readonly context: AgentModelValidationContext
readonly detail: ModelValidationError
constructor(context: AgentModelValidationContext, detail: ModelValidationError) {
super(`Validation failed for ${context.agentType}.${context.field}: ${detail.message}`)
this.name = 'AgentModelValidationError'
this.context = context
this.detail = detail
}
}


@@ -1,25 +0,0 @@
/**
* Agents Service Module
*
* This module provides a complete autonomous agent management system with:
* - Agent lifecycle management (CRUD operations)
* - Session handling with conversation history
* - Comprehensive logging and audit trails
* - Database operations with Drizzle ORM and migration support
* - RESTful API endpoints for external integration
*/
// === Core Services ===
// Main service classes and singleton instances
export * from './services'
// === Error Types ===
export { type AgentModelField, AgentModelValidationError } from './errors'
// === Base Infrastructure ===
// Shared database utilities and base service class
export { BaseService } from './BaseService'
// === Database Layer ===
// Drizzle ORM schemas, migrations, and database utilities
export * as Database from './database'

Some files were not shown because too many files have changed in this diff.