* refactor: LLM response handling with reasoning content
  - Added a `show_reasoning` parameter to `run_agent` to control the display of reasoning content.
  - Updated `LLMResponse` to include a `reasoning_content` field for storing reasoning text.
  - Modified `WebChatMessageEvent` to handle and send reasoning content in streaming responses.
  - Implemented reasoning extraction in various provider sources (e.g., OpenAI, Gemini).
  - Updated the chat interface to display reasoning content in a collapsible format.
  - Removed the deprecated `thinking_filter` package and its associated logic.
  - Updated localization files to include new reasoning-related strings.

* feat: add Groq chat completion provider and associated configurations

* Update astrbot/core/provider/sources/gemini_source.py

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>
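A minimal sketch of the reasoning-content flow the first commit describes, assuming a dataclass-style response object. Only the names `run_agent`, `show_reasoning`, `LLMResponse`, and `reasoning_content` come from the commit message; the other field, the `_call_provider` stub, and the tag format are hypothetical:

from dataclasses import dataclass


@dataclass
class LLMResponse:
    # Stand-in for the real LLMResponse: only `reasoning_content` is named
    # in the commit; `completion_text` is a hypothetical field name.
    completion_text: str
    reasoning_content: str | None = None


def _call_provider(prompt: str) -> LLMResponse:
    # Hypothetical provider call standing in for the real pipeline.
    return LLMResponse(completion_text="42", reasoning_content="6 * 7 = 42")


def run_agent(prompt: str, show_reasoning: bool = False) -> str:
    resp = _call_provider(prompt)
    if show_reasoning and resp.reasoning_content:
        # Providers such as OpenAI and Gemini can return reasoning text
        # separately; surface it ahead of the final answer when requested.
        return f"<reasoning>\n{resp.reasoning_content}\n</reasoning>\n{resp.completion_text}"
    return resp.completion_text


print(run_agent("what is 6 times 7?", show_reasoning=True))

Carrying the reasoning in its own field rather than inline in the reply text is presumably what made the text-stripping `thinking_filter` package removable.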
17 lines · 612 B · Python
# This file was originally created to adapt to glm-4v-flash, which only supports one image in the context.
# It is no longer specifically adapted to Zhipu's models. To ensure compatibility, this
# adapter is kept as a thin subclass of the OpenAI-compatible provider.
from ..register import register_provider_adapter
from .openai_source import ProviderOpenAIOfficial


@register_provider_adapter("zhipu_chat_completion", "智谱 Chat Completion 提供商适配器")
class ProviderZhipu(ProviderOpenAIOfficial):
    def __init__(
        self,
        provider_config: dict,
        provider_settings: dict,
    ) -> None:
        super().__init__(provider_config, provider_settings)
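Since `ProviderZhipu` adds nothing beyond `__init__`, all request handling, including the reasoning extraction this commit adds to the OpenAI source, is inherited from `ProviderOpenAIOfficial`. A usage sketch follows; every key in `provider_config` and the endpoint URL are assumptions, since the expected config shape is not shown in this file:

# Hypothetical configuration; key names and the endpoint URL are assumed,
# not taken from this file.
provider = ProviderZhipu(
    provider_config={
        "id": "zhipu_default",
        "key": ["your-zhipu-api-key"],
        "model_config": {"model": "glm-4"},
        "api_base": "https://open.bigmodel.cn/api/paas/v4",
    },
    provider_settings={},
)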