* refactor: LLM response handling with reasoning content

  - Added a `show_reasoning` parameter to `run_agent` to control the display of reasoning content.
  - Updated `LLMResponse` to include a `reasoning_content` field for storing reasoning text.
  - Modified `WebChatMessageEvent` to handle and send reasoning content in streaming responses.
  - Implemented reasoning extraction in various provider sources (e.g., OpenAI, Gemini).
  - Updated the chat interface to display reasoning content in a collapsible format.
  - Removed the deprecated `thinking_filter` package and its associated logic.
  - Updated localization files to include new reasoning-related strings.

* feat: add Groq chat completion provider and associated configurations

* Update astrbot/core/provider/sources/gemini_source.py

  Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>

---------

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>
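For orientation, here is a minimal sketch of the data flow the first commit describes. The names `reasoning_content` and `show_reasoning` come from the changelog above; the trimmed-down dataclass and the `route_delta` helper are illustrative assumptions, not AstrBot's actual code:

```python
from dataclasses import dataclass


@dataclass
class LLMResponse:
    """Hypothetical, trimmed-down view; only reasoning_content is from the changelog."""

    completion_text: str = ""
    reasoning_content: str = ""  # reasoning ("thinking") text kept separate


def route_delta(delta: dict, response: LLMResponse, show_reasoning: bool) -> str:
    """Illustrative: accumulate a streamed delta and return what to display."""
    response.reasoning_content += delta.get("reasoning_content", "")
    response.completion_text += delta.get("content", "")
    # When show_reasoning is off, the collapsible reasoning block is omitted
    # and only the normal completion text reaches the chat interface.
    return (
        response.reasoning_content + response.completion_text
        if show_reasoning
        else response.completion_text
    )
```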
The new Groq provider is a thin subclass of the OpenAI provider:

```python
from ..register import register_provider_adapter
from .openai_source import ProviderOpenAIOfficial


@register_provider_adapter(
    "groq_chat_completion", "Groq Chat Completion Provider Adapter"
)
class ProviderGroq(ProviderOpenAIOfficial):
    def __init__(
        self,
        provider_config: dict,
        provider_settings: dict,
    ) -> None:
        super().__init__(provider_config, provider_settings)
        # Groq surfaces reasoning text under the "reasoning" key, so point
        # the inherited extraction logic at it instead of the default.
        self.reasoning_key = "reasoning"
```
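Setting `self.reasoning_key` is the subclass's only customization: Groq's OpenAI-compatible API puts reasoning text under a different key than the base class expects, so the shared parsing logic just needs to be pointed at it. A minimal sketch of how such a lookup could work, assuming (this is not the actual `ProviderOpenAIOfficial` code) the base class reads `reasoning_key` while handling each streamed delta:

```python
class _ReasoningExtractionSketch:
    """Hypothetical stand-in for the extraction logic in the base provider."""

    reasoning_key: str = "reasoning_content"  # assumed OpenAI-style default

    def extract_reasoning(self, delta: dict) -> str:
        # ProviderGroq overrides reasoning_key to "reasoning", so this same
        # lookup works unchanged against Groq's response payloads.
        return delta.get(self.reasoning_key) or ""
```

Under that assumption, provider-specific differences stay confined to a single attribute, which fits the changelog's note that reasoning extraction was implemented across provider sources rather than duplicated per provider.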