Conversation
… image loading in Markdown component
* ✨ feat: enhance AgentMarketplaceDetail with internationalization support and improve loading/error messages; update translations for English and Chinese
* 🔧 feat: add thinking support (#142)
* 🧪 test: increase test coverage (#144)
* test: increase test coverage
* test: increase test coverage
* ✨ feat: add GPUGeek provider support with vendor-based model system
  - Add GPUGEEK provider type to schema with full configuration support
  - Implement comprehensive GPUGeek model list with vendor prefixes (Vendor2, OpenAI, DeepSeek)
  - Add intelligent model-to-pricing mapping with DeepSeek v*/r* pattern recognition
  - Integrate GPUGeek factory using OpenAI-compatible API with configurable base URL
  - Enable system provider management for GPUGeek with automatic initialization
  - Support 27 GPUGeek models including Gemini, Claude, GPT, and DeepSeek variants

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude <noreply@anthropic.com>
* feat: add qwen

Co-authored-by: Harvey <q-query@outlook.com>
Co-authored-by: Claude <noreply@anthropic.com>
…pecific substrings
…Selector component
… in docker-compose.dev.yaml
Reviewer's Guide
Adds end-to-end support for the new GPUGeek and Qwen LLM providers (configuration, provider factory, model discovery, UI), improves Markdown image handling for authenticated file downloads, tightens the error-code-to-HTTP-status mapping, and updates local dev/CI configuration and database migrations accordingly.
Sequence diagram for GPUGeek model discovery and pricing mapping
sequenceDiagram
participant C as Caller
participant LS as LLMService
participant Lit as litellm
C->>LS: get_models_by_provider("gpugeek")
activate LS
LS->>LS: provider_type == "gpugeek"?
alt gpugeek provider
loop for model_name in GPUGEEK_MODELS
LS->>LS: base_model = _map_gpugeek_to_base_model(model_name)
alt base_model is not None
LS->>Lit: model_cost.get(base_model)
alt pricing found
Lit-->>LS: base_info
LS->>LS: fill model_data with pricing
else pricing missing or error
Lit--xLS: None / Exception
LS->>LS: keep default pricing (0.0)
end
else no base mapping
LS->>LS: use default pricing
end
LS->>LS: models.append(model_data)
end
LS-->>C: list[ModelInfo] (GPUGeek models)
else other provider
LS->>Lit: model_list = litellm.get_model_list(litellm_provider_type)
Lit-->>LS: model_list
LS->>LS: apply ModelFilter chain
LS-->>C: list[ModelInfo]
end
deactivate LS
Class diagram for extended LLM provider support (GPUGeek and Qwen)
classDiagram
class ProviderType {
<<enum>>
OPENAI
AZURE_OPENAI
GOOGLE
GOOGLE_VERTEX
GPUGEEK
QWEN
}
class LLMProviderConfig {
+bool enabled
+str api_key
+str api_endpoint
+dict[str, str] extra
}
class LLMConfig {
+LLMProviderConfig openai
+LLMProviderConfig google
+LLMProviderConfig googlevertex
+LLMProviderConfig gpugeek
+LLMProviderConfig qwen
+list~ProviderType~ providers
+ProviderType get_provider_type()
+LLMProviderConfig get_provider_config(provider)
+list[tuple[ProviderType, LLMProviderConfig]] iter_enabled()
}
class ProviderFactory {
+ModelInstance create(config, credentials, runtime_kwargs)
-BaseChatModel _create_openai(model, credentials, runtime_kwargs)
-BaseChatModel _create_azure_openai(model, credentials, runtime_kwargs)
-BaseChatModel _create_google(model, credentials, runtime_kwargs)
-BaseChatModel _create_google_vertex(model, credentials, runtime_kwargs)
-BaseChatModel _create_gpugeek(model, credentials, runtime_kwargs)
-BaseChatModel _create_qwen(model, credentials, runtime_kwargs)
}
class LLMCredentials {
<<TypedDict>>
+str api_key
+str api_endpoint
+dict[str, str] extra
}
class ModelFilter {
+callable substring_filter(substring)
+callable no_substring_filter(substring)
+callable no_slash_filter()
+callable no_date_suffix_filter()
+callable version_filter(min_version, max_version=None)
+callable azure_path_filter()
+callable no_expensive_azure_filter()
+callable combined_filter(*filters)
}
class LLMService {
+list[ModelInfo] get_models_by_provider(provider_type)
+dict get_model_info(model_name)
+dict get_all_providers_with_models()
-callable _get_provider_filter(provider_type)
}
class GPUGEEK_MODELS {
<<list[str]>>
+Vendor2/Claude-3.7-Sonnet
+Vendor2/Claude-4-Sonnet
+Vendor2/Claude-4.5-Opus
+Vendor2/Claude-4.5-Sonnet
+DeepSeek/DeepSeek-V3-0324
+DeepSeek/DeepSeek-V3.1-0821
+DeepSeek/DeepSeek-R1-671B
}
class GPUGeekMapper {
+str|None _map_gpugeek_to_base_model(gpugeek_model)
}
class ChatOpenAI {
+ChatOpenAI(model, api_key, base_url, extra_body, **runtime_kwargs)
}
class ChatQwen {
+ChatQwen(model, api_key, base_url, **runtime_kwargs)
}
class ProviderRepository {
+Provider get_system_provider_by_type(provider_type)
+Provider create_system_provider(provider_type, name)
}
class ProviderStartup {
+list[Provider] ensure_system_providers(llm_config)
}
class Provider {
+str id
+str name
+ProviderType provider_type
+bool is_system
}
class TopicGenerator {
+str _select_title_generation_model(provider_type, session_model, default_model)
}
ProviderType <.. LLMConfig : uses
LLMConfig "1" o-- "1" LLMProviderConfig : has
LLMConfig ..> ProviderType : get_provider_type()
LLMConfig ..> LLMProviderConfig : get_provider_config()
ProviderFactory ..> ProviderType : create()
ProviderFactory ..> LLMCredentials : create()
ProviderFactory ..> ChatOpenAI : _create_gpugeek()
ProviderFactory ..> ChatQwen : _create_qwen()
LLMService ..> ModelFilter : uses
LLMService ..> GPUGeekMapper : uses
LLMService ..> GPUGEEK_MODELS : iterates
GPUGeekMapper ..> GPUGEEK_MODELS : maps
ProviderStartup ..> LLMConfig : reads
ProviderStartup ..> ProviderRepository : ensures
ProviderRepository ..> ProviderType : filters
TopicGenerator ..> ProviderType : selects
Provider "*" o-- "1" ProviderType : type
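The `ProviderFactory` relationships in the diagram reduce to a dispatch on `ProviderType`. A minimal sketch under assumed names — the enum values mirror the diagram, but the creator functions return placeholder strings rather than real chat-model instances:

```python
from enum import Enum


class ProviderType(Enum):
    OPENAI = "openai"
    GPUGEEK = "gpugeek"
    QWEN = "qwen"


def _create_openai(model: str) -> str:
    return f"ChatOpenAI({model})"


def _create_gpugeek(model: str) -> str:
    # Per the guide, GPUGeek reuses the OpenAI-compatible client
    # with a configurable base URL.
    return f"ChatOpenAI({model}, base_url=gpugeek)"


def _create_qwen(model: str) -> str:
    return f"ChatQwen({model})"


_CREATORS = {
    ProviderType.OPENAI: _create_openai,
    ProviderType.GPUGEEK: _create_gpugeek,
    ProviderType.QWEN: _create_qwen,
}


def create(provider: ProviderType, model: str) -> str:
    """Dispatch to the provider-specific creator, as ProviderFactory.create does."""
    try:
        return _CREATORS[provider](model)
    except KeyError:
        raise ValueError(f"unsupported provider: {provider}") from None
```

Adding a provider then means adding one enum member and one creator entry, which is the shape the diagram's `_create_gpugeek` / `_create_qwen` additions follow.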
Codecov Report
Hey - I've found 7 issues, and left some high-level feedback:
- In the provider UI (ModelSelector and ProviderList), using `getProviderDisplayName(provider.provider_type)` unconditionally means user-defined provider names are no longer shown; consider restoring the `is_system`-based logic so only system providers use the generic display name while custom providers still display their configured names.
- The new `MarkdownImage` component in `Markdown.tsx` uses `useXyzen` (a client-side store hook); double-check that `Markdown` is only ever used in client components, or add a clear `'use client'` boundary to avoid React/server-component runtime issues.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- In the provider UI (ModelSelector and ProviderList), using `getProviderDisplayName(provider.provider_type)` unconditionally means user-defined provider names are no longer shown; consider restoring the `is_system`-based logic so only system providers use the generic display name while custom providers still display their configured names.
- The new `MarkdownImage` component in `Markdown.tsx` uses `useXyzen` (a client-side store hook); double-check that `Markdown` is only ever used in client components or add a clear `'use client'` boundary to avoid React/server-component runtime issues.
## Individual Comments
### Comment 1
<location> `service/app/core/llm/service.py:337-338` </location>
<code_context>
Returns:
Dictionary containing model metadata (max_tokens, input_cost_per_token, etc.)
"""
+ if "qwen" in model_name:
+ converted_model_name = "dashscope/" + model_name
+ else:
+ converted_model_name = _map_gpugeek_to_base_model(model_name)
</code_context>
<issue_to_address>
**issue (bug_risk):** Qwen model name prefixing can double-prefix `dashscope/` and break lookups
The current logic prepends `dashscope/` to any name containing `"qwen"`, including names already in DashScope format (e.g. `dashscope/qwen2.5` becomes `dashscope/dashscope/qwen2.5`), which will likely break the `litellm.get_model_info` call. It also affects any other provider whose model string happens to contain `"qwen"`.
Please tighten the logic so it only prefixes when needed, for example:
```python
if model_name.startswith("dashscope/"):
converted_model_name = model_name
elif "qwen" in model_name:
converted_model_name = f"dashscope/{model_name}"
else:
converted_model_name = _map_gpugeek_to_base_model(model_name)
```
Alternatively, scope this prefixing to the Qwen-specific call sites instead of all `get_model_info` usages.
</issue_to_address>
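A quick standalone check of the tightened prefixing suggested above. The helper name `normalize_qwen_model_name` is hypothetical, and the GPUGeek fallback is stubbed as identity so the snippet runs on its own:

```python
def normalize_qwen_model_name(model_name: str) -> str:
    """Prefix Qwen models for DashScope lookups without double-prefixing.

    Hypothetical helper mirroring the review suggestion; the
    _map_gpugeek_to_base_model fallback is stubbed as identity here.
    """
    if model_name.startswith("dashscope/"):
        return model_name  # already in DashScope format: leave untouched
    if "qwen" in model_name:
        return f"dashscope/{model_name}"
    return model_name  # stand-in for _map_gpugeek_to_base_model(model_name)


# The already-prefixed form is no longer double-prefixed.
assert normalize_qwen_model_name("dashscope/qwen2.5") == "dashscope/qwen2.5"
assert normalize_qwen_model_name("qwen-plus") == "dashscope/qwen-plus"
assert normalize_qwen_model_name("gpt-4o") == "gpt-4o"
```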
### Comment 2
<location> `web/src/components/layouts/components/ModelSelector.tsx:310` </location>
<code_context>
>
<CpuChipIcon className="h-3.5 w-3.5 shrink-0" />
- <span className="max-w-[200px] truncate">
+ <span className="max-w-50 truncate">
{currentSelection.model || "选择模型"}
</span>
</code_context>
<issue_to_address>
**issue (bug_risk):** Non-standard Tailwind class `max-w-50` may not resolve as expected
`max-w-[200px]` is a valid Tailwind arbitrary value, but `max-w-50` is not a standard max-width token and will be ignored unless you have explicitly added a `50` key to your Tailwind config. If you still want a ~200px cap, keep the arbitrary value or switch to a built-in size close to the intended width (e.g. `max-w-xs`).
</issue_to_address>
### Comment 3
<location> `web/src/components/layouts/components/ModelSelector.tsx:330` </location>
<code_context>
exit={{ opacity: 0, y: 10 }}
transition={{ duration: 0.2 }}
- className="w-[280px] rounded-lg border border-neutral-200 bg-white shadow-lg dark:border-neutral-800 dark:bg-neutral-900 p-2"
+ className="w-70 rounded-lg border border-neutral-200 bg-white shadow-lg dark:border-neutral-800 dark:bg-neutral-900 p-2"
>
<div className="px-2 py-1.5 text-[10px] font-semibold uppercase tracking-wider text-neutral-500 dark:text-neutral-400">
</code_context>
<issue_to_address>
**issue (bug_risk):** Width class `w-70` is not a standard Tailwind spacing scale value
Unless `70` has been added to your Tailwind config, this class will be ignored and the width will fall back to `auto`, which can change the dropdown layout. Consider either keeping the explicit width:
```tsx
className="w-[280px] ..."
```
or using the closest built-in utility, e.g. `w-72`.
</issue_to_address>
### Comment 4
<location> `service/app/core/providers/factory.py:199-201` </location>
<code_context>
+ **runtime_kwargs,
+ )
+
+ if web_search_enabled:
+ logger.info(f"Enabling native web search for OpenAI model {model}")
+ llm = cast(BaseChatModel, llm.bind_tools([{"type": "web_search_preview"}]))
+
</code_context>
<issue_to_address>
**nitpick (typo):** Log message mentions OpenAI while configuring a GPUGeek model
In `_create_gpugeek`, this log line still says `OpenAI` even though it is already in the `ProviderType.GPUGEEK` branch. Please update the message to reference GPUGeek (or the resolved provider type) so the logs clearly indicate which provider is being configured.
```suggestion
if web_search_enabled:
logger.info(f"Enabling native web search for GPUGeek model {model}")
llm = cast(BaseChatModel, llm.bind_tools([{"type": "web_search_preview"}]))
```
</issue_to_address>
### Comment 5
<location> `AGENTS.md:343-347` </location>
<code_context>
+
+**Storage Service Pattern:**
+```python
+class StorageServiceProto(Protocol):
+ async def upload(self, file_data: bytes, key: str) -> str
+ async def download(self, key: str) -> bytes
+ async def delete(self, key: str) -> bool
+ async def get_download_url(self, key: str) -> str
+```
+
</code_context>
<issue_to_address>
**issue (bug_risk):** The StorageServiceProto method definitions are missing trailing colons, making the example invalid Python.
Readers may copy this example directly, so it should be syntactically valid. You could update it to:
```python
class StorageServiceProto(Protocol):
async def upload(self, file_data: bytes, key: str) -> str: ...
async def download(self, key: str) -> bytes: ...
async def delete(self, key: str) -> bool: ...
async def get_download_url(self, key: str) -> str: ...
```
This keeps the example concise while remaining valid Python.
</issue_to_address>
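As a standalone illustration that `...` bodies make the Protocol both valid and usable. This is a sketch, not the project's code: `LocalStorage` is a hypothetical in-memory implementation, and the protocol is trimmed to two methods:

```python
import asyncio
from typing import Protocol, runtime_checkable


@runtime_checkable
class StorageServiceProto(Protocol):
    async def upload(self, file_data: bytes, key: str) -> str: ...
    async def download(self, key: str) -> bytes: ...


class LocalStorage:
    """Hypothetical in-memory implementation used only for this check."""

    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}

    async def upload(self, file_data: bytes, key: str) -> str:
        self._blobs[key] = file_data
        return key

    async def download(self, key: str) -> bytes:
        return self._blobs[key]


# Structural typing: LocalStorage satisfies the protocol without inheriting it.
assert isinstance(LocalStorage(), StorageServiceProto)

store = LocalStorage()
assert asyncio.run(store.upload(b"hi", "avatar.png")) == "avatar.png"
assert asyncio.run(store.download("avatar.png")) == b"hi"
```

Note that `runtime_checkable` `isinstance` checks only verify method presence, not signatures, so they complement rather than replace static type checking.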
### Comment 6
<location> `service/app/core/llm/service.py:40` </location>
<code_context>
+]
+
+
+def _map_gpugeek_to_base_model(gpugeek_model: str) -> str | None:
+ """
+ Map GPUGeek vendor-prefixed model names to their base model names for pricing lookup.
</code_context>
<issue_to_address>
**issue (complexity):** Consider refactoring the model-to-base-model mapping into vendor-scoped, data-driven helpers plus a shared provider mapping function to simplify and centralize this logic.
You can keep the new behavior but simplify it by making the mapping data-driven and vendor-scoped instead of one heuristic function plus inline special cases.
### 1. Make `_map_gpugeek_to_base_model` data-driven and vendor-scoped
Right now the function mixes:
- DeepSeek heuristics (the `"v"` / `"r"` slicing)
- Explicit Anthropic/Gemini mappings
- Normalization (`lower()` + vendor-prefix stripping)
You can keep the behavior but express it as:
- A normalization step
- Vendor-specific mapping functions (or lookup tables)
Example refactor:
```python
def _normalize_gpugeek_model_name(gpugeek_model: str) -> tuple[str | None, str]:
if "/" not in gpugeek_model:
return None, gpugeek_model.lower()
vendor, model_part = gpugeek_model.split("/", 1)
return vendor.lower(), model_part.lower()
def _map_deepseek_model(model_lower: str) -> str:
# Preserve existing behavior but isolate it
if "v" in model_lower and any(c.isdigit() for c in model_lower.split("v")[1][:3]):
return "deepseek-chat"
if "r" in model_lower and any(c.isdigit() for c in model_lower.split("r")[1][:3]):
return "deepseek-reasoner"
return "deepseek-chat"
ANTHROPIC_GEMINI_MAP: dict[str, str] = {
"gemini-3-flash": "gemini-3-flash-preview",
"gemini-3-pro": "gemini-3-pro-preview",
"claude-3.7-sonnet": "anthropic.claude-3-7-sonnet-20250219-v1:0",
"claude-4-sonnet": "anthropic.claude-sonnet-4-20250514-v1:0",
"claude-4.5-sonnet": "anthropic.claude-sonnet-4-5-20250929-v1:0",
"claude-4.5-opus": "anthropic.claude-opus-4-5-20251101-v1:0",
}
def _map_anthropic_gemini_model(model_lower: str) -> str | None:
for key, base in ANTHROPIC_GEMINI_MAP.items():
if key in model_lower:
return base
return None
def _map_gpugeek_to_base_model(gpugeek_model: str) -> str | None:
vendor, model_lower = _normalize_gpugeek_model_name(gpugeek_model)
if vendor is None:
return None
if "deepseek" in model_lower:
return _map_deepseek_model(model_lower)
mapped = _map_anthropic_gemini_model(model_lower)
if mapped:
return mapped
# Default: normalized name
return model_lower
```
This keeps the existing functionality but:
- Makes the DeepSeek mapping self-contained and testable
- Keeps the Anthropic/Gemini logic in a small data map
- Separates normalization from mapping
### 2. Centralize the provider → base model mapping used by `get_model_info`
`get_model_info()` currently has a Qwen special case and then calls `_map_gpugeek_to_base_model()`. That mixes provider-specific responsibilities into a generic utility.
You can centralize the mapping so that `get_model_info` and `get_models_by_provider` share the same abstraction:
```python
def _map_provider_model_to_base(provider: str | None, model_name: str) -> str:
# provider can be None when unknown; fall back to the original name
if provider == "qwen" and "qwen" in model_name:
return f"dashscope/{model_name}"
# GPUGeek uses vendor-prefixed names
if provider == "gpugeek":
mapped = _map_gpugeek_to_base_model(model_name)
return mapped or model_name
return model_name
```
Then in `get_model_info`:
```python
@staticmethod
def get_model_info(model_name: str, provider: str | None = None) -> ModelInfo:
model_name = _map_provider_model_to_base(provider, model_name)
try:
return litellm.get_model_info(model_name)
...
```
And in `get_models_by_provider`, where you already know `provider_type`, you can reuse this same mapping when needed (e.g. for the GPUGeek/Qwen cases) instead of embedding string logic in multiple places.
This keeps all current features while reducing conditional complexity and making the mapping behavior easier to extend and test per vendor.
</issue_to_address>
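As a quick sanity check on the DeepSeek heuristic in the refactor above (the helper is restated here verbatim so the snippet runs standalone):

```python
def map_deepseek_model(model_lower: str) -> str:
    """Reviewer's heuristic: v-series -> chat, r-series -> reasoner."""
    if "v" in model_lower and any(c.isdigit() for c in model_lower.split("v")[1][:3]):
        return "deepseek-chat"
    if "r" in model_lower and any(c.isdigit() for c in model_lower.split("r")[1][:3]):
        return "deepseek-reasoner"
    return "deepseek-chat"


# Names drawn from the GPUGEEK_MODELS list, lowercased as the
# normalization step would produce them.
assert map_deepseek_model("deepseek-v3-0324") == "deepseek-chat"
assert map_deepseek_model("deepseek-v3.1-0821") == "deepseek-chat"
assert map_deepseek_model("deepseek-r1-671b") == "deepseek-reasoner"
```

Isolating the heuristic like this makes it trivially coverable by unit tests, which is the main payoff of the suggested split.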
### Comment 7
<location> `web/src/lib/Markdown.tsx:440` </location>
<code_context>
className?: string; // optional extra classes for the markdown root
}
+const sleep = (ms: number) => new Promise((r) => setTimeout(r, ms));
+
+const isXyzenDownloadUrl = (src: string) =>
</code_context>
<issue_to_address>
**issue (complexity):** Consider extracting the image auth/loading logic into a reusable hook and a separate MarkdownImage component so Markdown.tsx stays focused on rendering concerns.
You can keep the behavior as-is but reduce local complexity by extracting the image loading logic into a reusable hook and moving `MarkdownImage` out of `Markdown.tsx`.
### 1. Extract the image loading logic into a hook
Move the normalization, auth decision, retry, and cleanup logic into a dedicated hook, e.g. `useAuthenticatedImage.ts`:
```ts
// useAuthenticatedImage.ts
import * as React from "react";
import { useXyzen } from "@/store";
const sleep = (ms: number) => new Promise((r) => setTimeout(r, ms));
const isXyzenDownloadUrl = (src: string) =>
src.includes("/xyzen/api/v1/files/") && src.includes("/download");
const normalizeSrc = (src: string | undefined, backendUrl?: string) => {
if (!src) return "";
if (src.startsWith("data:") || src.startsWith("blob:")) return src;
if (src.startsWith("http://") || src.startsWith("https://")) return src;
const base =
backendUrl || (typeof window !== "undefined" ? window.location.origin : "");
return `${base}${src.startsWith("/") ? src : `/${src}`}`;
};
export const useAuthenticatedImage = (src: string | undefined) => {
const backendUrl = useXyzen((state) => state.backendUrl);
const token = useXyzen((state) => state.token);
const [blobUrl, setBlobUrl] = React.useState<string | null>(null);
const [failed, setFailed] = React.useState(false);
const fullSrc = React.useMemo(
() => normalizeSrc(src, backendUrl),
[src, backendUrl],
);
const shouldAuthFetch =
!!fullSrc &&
!!token &&
(fullSrc.startsWith("/") || fullSrc.startsWith(backendUrl || "")) &&
isXyzenDownloadUrl(fullSrc);
React.useEffect(() => {
if (!shouldAuthFetch) {
setFailed(false);
setBlobUrl((prev) => {
if (prev) URL.revokeObjectURL(prev);
return null;
});
return;
}
let active = true;
const controller = new AbortController();
const run = async () => {
setFailed(false);
const delays = [250, 750, 1500];
for (let attempt = 0; attempt < delays.length + 1; attempt++) {
try {
const res = await fetch(fullSrc, {
headers: { Authorization: `Bearer ${token}` },
signal: controller.signal,
});
if (res.ok) {
const blob = await res.blob();
const url = URL.createObjectURL(blob);
if (!active) {
URL.revokeObjectURL(url);
return;
}
setBlobUrl(url);
return;
}
if (![404, 500, 502, 503].includes(res.status)) {
break;
}
} catch (e) {
if ((e as Error)?.name === "AbortError") return;
}
if (attempt < delays.length) {
await sleep(delays[attempt]);
}
}
if (active) setFailed(true);
};
run();
return () => {
active = false;
controller.abort();
setBlobUrl((prev) => {
if (prev) URL.revokeObjectURL(prev);
return null;
});
};
}, [shouldAuthFetch, fullSrc, token]);
return { fullSrc, blobUrl, failed, shouldAuthFetch };
};
```
This keeps the retry/auth logic unchanged while isolating it from the markdown rendering code.
### 2. Make `MarkdownImage` lightweight and move it to its own file
In `MarkdownImage.tsx`:
```tsx
// MarkdownImage.tsx
import * as React from "react";
import { useAuthenticatedImage } from "./useAuthenticatedImage";
export const MarkdownImage: React.FC<
React.ImgHTMLAttributes<HTMLImageElement>
> = ({ src, alt, ...rest }) => {
const { fullSrc, blobUrl, failed, shouldAuthFetch } = useAuthenticatedImage(
src,
);
if (!src) return null;
if (!shouldAuthFetch) {
return <img src={fullSrc} alt={alt} {...rest} />;
}
if (blobUrl) {
return <img src={blobUrl} alt={alt} {...rest} />;
}
if (failed) {
return (
<span className="text-xs text-neutral-500 dark:text-neutral-400">
Image failed to load
</span>
);
}
return (
<span className="text-xs text-neutral-500 dark:text-neutral-400">
Loading image...
</span>
);
};
```
### 3. Keep `Markdown.tsx` focused on markdown rendering
Then `Markdown.tsx` only composes the components:
```tsx
// Markdown.tsx
import { MarkdownImage } from "./MarkdownImage";
// ...
const MarkdownComponents = React.useMemo(
() => ({
// other overrides...
img(props: React.ComponentPropsWithoutRef<"img">) {
return <MarkdownImage {...props} />;
},
}),
[isDark],
);
```
This keeps the new feature while freeing the Markdown module from auth/retry/blob-lifecycle details, and makes the image logic reusable and independently testable.
</issue_to_address>
Original comment in English
Hey - I've found 7 issues, and left some high level feedback:
- In the provider UI (ModelSelector and ProviderList), using
getProviderDisplayName(provider.provider_type)unconditionally means user-defined provider names are no longer shown; consider restoring theis_system-based logic so only system providers use the generic display name while custom providers still display their configured names. - The new
MarkdownImagecomponent inMarkdown.tsxusesuseXyzen(a client-side store hook); double-check thatMarkdownis only ever used in client components or add a clear'use client'boundary to avoid React/server-component runtime issues.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- In the provider UI (ModelSelector and ProviderList), using `getProviderDisplayName(provider.provider_type)` unconditionally means user-defined provider names are no longer shown; consider restoring the `is_system`-based logic so only system providers use the generic display name while custom providers still display their configured names.
- The new `MarkdownImage` component in `Markdown.tsx` uses `useXyzen` (a client-side store hook); double-check that `Markdown` is only ever used in client components or add a clear `'use client'` boundary to avoid React/server-component runtime issues.
## Individual Comments
### Comment 1
<location> `service/app/core/llm/service.py:337-338` </location>
<code_context>
Returns:
Dictionary containing model metadata (max_tokens, input_cost_per_token, etc.)
"""
+ if "qwen" in model_name:
+ converted_model_name = "dashscope/" + model_name
+ else:
+ converted_model_name = _map_gpugeek_to_base_model(model_name)
</code_context>
<issue_to_address>
**issue (bug_risk):** Qwen model name prefixing can double-prefix `dashscope/` and break lookups
This logic prepends `dashscope/` to any name containing `"qwen"`, including ones already in DashScope format (e.g. `dashscope/qwen2.5` → `dashscope/dashscope/qwen2.5`), which will likely break `litellm.get_model_info`. It also affects any provider whose model string happens to contain `"qwen"`.
Please tighten this so it only prefixes when needed, for example:
```python
if model_name.startswith("dashscope/"):
converted_model_name = model_name
elif "qwen" in model_name:
converted_model_name = f"dashscope/{model_name}"
else:
converted_model_name = _map_gpugeek_to_base_model(model_name)
```
Alternatively, scope this prefixing to Qwen-specific call sites instead of all `get_model_info` usages.
</issue_to_address>
### Comment 2
<location> `web/src/components/layouts/components/ModelSelector.tsx:310` </location>
<code_context>
>
<CpuChipIcon className="h-3.5 w-3.5 shrink-0" />
- <span className="max-w-[200px] truncate">
+ <span className="max-w-50 truncate">
{currentSelection.model || "选择模型"}
</span>
</code_context>
<issue_to_address>
**issue (bug_risk):** Non-standard Tailwind class `max-w-50` may not resolve as expected
`max-w-[200px]` is a valid Tailwind arbitrary value, but `max-w-50` is not a standard max-width token and will be ignored unless you’ve explicitly added a `50` key in your Tailwind config. If you still want a ~200px cap, keep the arbitrary value or switch to a built-in size (e.g. `max-w-xs`) that matches the intended width.
</issue_to_address>
### Comment 3
<location> `web/src/components/layouts/components/ModelSelector.tsx:330` </location>
<code_context>
exit={{ opacity: 0, y: 10 }}
transition={{ duration: 0.2 }}
- className="w-[280px] rounded-lg border border-neutral-200 bg-white shadow-lg dark:border-neutral-800 dark:bg-neutral-900 p-2"
+ className="w-70 rounded-lg border border-neutral-200 bg-white shadow-lg dark:border-neutral-800 dark:bg-neutral-900 p-2"
>
<div className="px-2 py-1.5 text-[10px] font-semibold uppercase tracking-wider text-neutral-500 dark:text-neutral-400">
</code_context>
<issue_to_address>
**issue (bug_risk):** Width class `w-70` is not a standard Tailwind spacing scale value
Unless `70` has been added to your Tailwind config, this class will be ignored and the width will fall back to `auto`, which can change the dropdown layout. Consider either keeping the explicit width:
```tsx
className="w-[280px] ..."
```
or using the closest built-in utility like `w-72`.
</issue_to_address>
### Comment 4
<location> `service/app/core/providers/factory.py:199-201` </location>
<code_context>
+ **runtime_kwargs,
+ )
+
+ if web_search_enabled:
+ logger.info(f"Enabling native web search for OpenAI model {model}")
+ llm = cast(BaseChatModel, llm.bind_tools([{"type": "web_search_preview"}]))
+
</code_context>
<issue_to_address>
**nitpick (typo):** Log message mentions OpenAI while configuring a GPUGeek model
In `_create_gpugeek`, this log line still says `OpenAI` even though it’s in the `ProviderType.GPUGEEK` path. Please update the message to reference GPUGeek (or the resolved provider type) so logs clearly indicate which provider is being configured.
```suggestion
if web_search_enabled:
logger.info(f"Enabling native web search for GPUGeek model {model}")
llm = cast(BaseChatModel, llm.bind_tools([{"type": "web_search_preview"}]))
```
</issue_to_address>
### Comment 5
<location> `AGENTS.md:343-347` </location>
<code_context>
+
+**Storage Service Pattern:**
+```python
+class StorageServiceProto(Protocol):
+ async def upload(self, file_data: bytes, key: str) -> str
+ async def download(self, key: str) -> bytes
+ async def delete(self, key: str) -> bool
+ async def get_download_url(self, key: str) -> str
+```
+
</code_context>
<issue_to_address>
**issue (bug_risk):** StorageServiceProto method definitions are missing trailing colons, making the example invalid Python.
Since readers may copy this example directly, it should be syntactically valid. You could update it to:
```python
class StorageServiceProto(Protocol):
async def upload(self, file_data: bytes, key: str) -> str: ...
async def download(self, key: str) -> bytes: ...
async def delete(self, key: str) -> bool: ...
async def get_download_url(self, key: str) -> str: ...
```
This keeps the example concise while remaining correct Python.
</issue_to_address>
### Comment 6
<location> `service/app/core/llm/service.py:40` </location>
<code_context>
+]
+
+
+def _map_gpugeek_to_base_model(gpugeek_model: str) -> str | None:
+ """
+ Map GPUGeek vendor-prefixed model names to their base model names for pricing lookup.
</code_context>
<issue_to_address>
**issue (complexity):** Consider refactoring the model-to-base-model mapping into vendor-scoped, data-driven helpers and a shared provider mapping function to simplify and centralize this logic.
You can keep the new behavior but simplify it by making the mapping data‑driven and vendor‑scoped instead of one heuristic function plus inline special cases.
### 1. Make `_map_gpugeek_to_base_model` data‑driven and vendor‑scoped
Right now the function mixes:
- DeepSeek heuristics (`"v"` / `"r"` slicing)
- Anthropic/Gemini explicit mappings
- Normalization (`lower()` + vendor prefix stripping)
You can keep behavior but express it as:
- A normalization step
- Vendor‑specific mapping functions (or tables)
Example refactor:
```python
def _normalize_gpugeek_model_name(gpugeek_model: str) -> tuple[str | None, str]:
if "/" not in gpugeek_model:
return None, gpugeek_model.lower()
vendor, model_part = gpugeek_model.split("/", 1)
return vendor.lower(), model_part.lower()
def _map_deepseek_model(model_lower: str) -> str:
# Preserve existing behavior but isolate it
if "v" in model_lower and any(c.isdigit() for c in model_lower.split("v")[1][:3]):
return "deepseek-chat"
if "r" in model_lower and any(c.isdigit() for c in model_lower.split("r")[1][:3]):
return "deepseek-reasoner"
return "deepseek-chat"
ANTHROPIC_GEMINI_MAP: dict[str, str] = {
"gemini-3-flash": "gemini-3-flash-preview",
"gemini-3-pro": "gemini-3-pro-preview",
"claude-3.7-sonnet": "anthropic.claude-3-7-sonnet-20250219-v1:0",
"claude-4-sonnet": "anthropic.claude-sonnet-4-20250514-v1:0",
"claude-4.5-sonnet": "anthropic.claude-sonnet-4-5-20250929-v1:0",
"claude-4.5-opus": "anthropic.claude-opus-4-5-20251101-v1:0",
}
def _map_anthropic_gemini_model(model_lower: str) -> str | None:
for key, base in ANTHROPIC_GEMINI_MAP.items():
if key in model_lower:
return base
return None
def _map_gpugeek_to_base_model(gpugeek_model: str) -> str | None:
vendor, model_lower = _normalize_gpugeek_model_name(gpugeek_model)
if vendor is None:
return None
if "deepseek" in model_lower:
return _map_deepseek_model(model_lower)
mapped = _map_anthropic_gemini_model(model_lower)
if mapped:
return mapped
# Default: normalized name
return model_lower
```
This keeps functionality but:
- Makes DeepSeek mapping self‑contained and testable
- Keeps Anthropic/Gemini logic in a small data map
- Separates normalization from mapping
### 2. Centralize provider → base model mapping used by `get_model_info`
`get_model_info()` currently has a qwen special case and then calls `_map_gpugeek_to_base_model()`. That mixes provider responsibilities into a generic utility.
You can centralize mapping so both `get_model_info` and `get_models_by_provider` use the same abstraction:
```python
def _map_provider_model_to_base(provider: str | None, model_name: str) -> str:
    # provider can be None when unknown; fall back to original name
    if provider == "qwen" and "qwen" in model_name:
        return f"dashscope/{model_name}"
    # GPUGeek uses prefixed names
    if provider == "gpugeek":
        mapped = _map_gpugeek_to_base_model(model_name)
        return mapped or model_name
    return model_name
```
Then in `get_model_info`:
```python
@staticmethod
def get_model_info(model_name: str, provider: str | None = None) -> ModelInfo:
    model_name = _map_provider_model_to_base(provider, model_name)
    try:
        return litellm.get_model_info(model_name)
    ...
```
And in `get_models_by_provider` where you already know `provider_type`, you can reuse the same mapping when needed (e.g. for GPUGeek/qwen cases) instead of embedding string logic in multiple places.
This keeps all current features but reduces conditional complexity and makes the mapping behavior easier to extend and test per vendor.
</issue_to_address>
### Comment 7
<location> `web/src/lib/Markdown.tsx:440` </location>
<code_context>
className?: string; // optional extra classes for the markdown root
}
+const sleep = (ms: number) => new Promise((r) => setTimeout(r, ms));
+
+const isXyzenDownloadUrl = (src: string) =>
</code_context>
<issue_to_address>
**issue (complexity):** Consider extracting the image auth/loading logic into a reusable hook and separate MarkdownImage component so Markdown.tsx stays focused on rendering concerns.
You can keep the behavior as‑is but reduce local complexity by extracting the image loading logic into a reusable hook and moving `MarkdownImage` out of `Markdown.tsx`.
### 1. Extract the image loading logic into a hook
Move the normalization, auth decision, retry, and cleanup into a dedicated hook, e.g. `useAuthenticatedImage.ts`:
```ts
// useAuthenticatedImage.ts
import * as React from "react";
import { useXyzen } from "@/store";

const sleep = (ms: number) => new Promise((r) => setTimeout(r, ms));

const isXyzenDownloadUrl = (src: string) =>
  src.includes("/xyzen/api/v1/files/") && src.includes("/download");

const normalizeSrc = (src: string | undefined, backendUrl?: string) => {
  if (!src) return "";
  if (src.startsWith("data:") || src.startsWith("blob:")) return src;
  if (src.startsWith("http://") || src.startsWith("https://")) return src;
  const base =
    backendUrl || (typeof window !== "undefined" ? window.location.origin : "");
  return `${base}${src.startsWith("/") ? src : `/${src}`}`;
};

export const useAuthenticatedImage = (src: string | undefined) => {
  const backendUrl = useXyzen((state) => state.backendUrl);
  const token = useXyzen((state) => state.token);
  const [blobUrl, setBlobUrl] = React.useState<string | null>(null);
  const [failed, setFailed] = React.useState(false);

  const fullSrc = React.useMemo(
    () => normalizeSrc(src, backendUrl),
    [src, backendUrl],
  );

  const shouldAuthFetch =
    !!fullSrc &&
    !!token &&
    (fullSrc.startsWith("/") || fullSrc.startsWith(backendUrl || "")) &&
    isXyzenDownloadUrl(fullSrc);

  React.useEffect(() => {
    if (!shouldAuthFetch) {
      setFailed(false);
      setBlobUrl((prev) => {
        if (prev) URL.revokeObjectURL(prev);
        return null;
      });
      return;
    }

    let active = true;
    const controller = new AbortController();

    const run = async () => {
      setFailed(false);
      const delays = [250, 750, 1500];
      for (let attempt = 0; attempt < delays.length + 1; attempt++) {
        try {
          const res = await fetch(fullSrc, {
            headers: { Authorization: `Bearer ${token}` },
            signal: controller.signal,
          });
          if (res.ok) {
            const blob = await res.blob();
            const url = URL.createObjectURL(blob);
            if (!active) {
              URL.revokeObjectURL(url);
              return;
            }
            setBlobUrl(url);
            return;
          }
          if (![404, 500, 502, 503].includes(res.status)) {
            break;
          }
        } catch (e) {
          if ((e as Error)?.name === "AbortError") return;
        }
        if (attempt < delays.length) {
          await sleep(delays[attempt]);
        }
      }
      if (active) setFailed(true);
    };

    run();

    return () => {
      active = false;
      controller.abort();
      setBlobUrl((prev) => {
        if (prev) URL.revokeObjectURL(prev);
        return null;
      });
    };
  }, [shouldAuthFetch, fullSrc, token]);

  return { fullSrc, blobUrl, failed, shouldAuthFetch };
};
```
This keeps all the retry/auth logic intact but isolates it from the Markdown rendering.
### 2. Make `MarkdownImage` thin and move it to its own file
In `MarkdownImage.tsx`:
```tsx
// MarkdownImage.tsx
import * as React from "react";
import { useAuthenticatedImage } from "./useAuthenticatedImage";

export const MarkdownImage: React.FC<
  React.ImgHTMLAttributes<HTMLImageElement>
> = ({ src, alt, ...rest }) => {
  const { fullSrc, blobUrl, failed, shouldAuthFetch } =
    useAuthenticatedImage(src);

  if (!src) return null;
  if (!shouldAuthFetch) {
    return <img src={fullSrc} alt={alt} {...rest} />;
  }
  if (blobUrl) {
    return <img src={blobUrl} alt={alt} {...rest} />;
  }
  if (failed) {
    return (
      <span className="text-xs text-neutral-500 dark:text-neutral-400">
        Image failed to load
      </span>
    );
  }
  return (
    <span className="text-xs text-neutral-500 dark:text-neutral-400">
      Loading image...
    </span>
  );
};
```
### 3. Keep `Markdown.tsx` focused on markdown rendering
Then `Markdown.tsx` only wires the component:
```tsx
// Markdown.tsx
import { MarkdownImage } from "./MarkdownImage";
// ...
const MarkdownComponents = React.useMemo(
  () => ({
    // other overrides...
    img(props: React.ComponentPropsWithoutRef<"img">) {
      return <MarkdownImage {...props} />;
    },
  }),
  [isDark],
);
```
This keeps the new feature, but the Markdown module is no longer burdened with auth/retry/blob lifecycle details, and the image logic becomes reusable and testable in isolation.
</issue_to_address>
```python
if "qwen" in model_name:
    converted_model_name = "dashscope/" + model_name
```
issue (bug_risk): Qwen model name prefixing can double-prefix `dashscope/` and break lookups

This logic prepends `dashscope/` to any name containing "qwen", including ones already in DashScope format (e.g. `dashscope/qwen2.5` → `dashscope/dashscope/qwen2.5`), which will likely break `litellm.get_model_info`. It also affects any provider whose model string happens to contain "qwen".

Please tighten this so it only prefixes when needed, for example:

```python
if model_name.startswith("dashscope/"):
    converted_model_name = model_name
elif "qwen" in model_name:
    converted_model_name = f"dashscope/{model_name}"
else:
    converted_model_name = _map_gpugeek_to_base_model(model_name)
```

Alternatively, scope this prefixing to Qwen-specific call sites instead of all `get_model_info` usages.
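The failure mode is easy to reproduce side by side; both variants below are standalone sketches of the behavior described in this comment, not the actual service code:

```python
def prefix_naive(model_name):
    # Current behavior: always prefixes when "qwen" appears anywhere.
    if "qwen" in model_name:
        return "dashscope/" + model_name
    return model_name


def prefix_tightened(model_name):
    # Suggested behavior: idempotent for names already in DashScope format.
    if model_name.startswith("dashscope/"):
        return model_name
    if "qwen" in model_name:
        return f"dashscope/{model_name}"
    return model_name


print(prefix_naive("dashscope/qwen2.5"))      # dashscope/dashscope/qwen2.5 (broken lookup key)
print(prefix_tightened("dashscope/qwen2.5"))  # dashscope/qwen2.5
print(prefix_tightened("qwen-max"))           # dashscope/qwen-max
```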
```diff
   >
     <CpuChipIcon className="h-3.5 w-3.5 shrink-0" />
-    <span className="max-w-[200px] truncate">
+    <span className="max-w-50 truncate">
```
issue (bug_risk): Non-standard Tailwind class max-w-50 may not resolve as expected
max-w-[200px] is a valid Tailwind arbitrary value, but max-w-50 is not a standard max-width token and will be ignored unless you’ve explicitly added a 50 key in your Tailwind config. If you still want a ~200px cap, keep the arbitrary value or switch to a built-in size (e.g. max-w-xs) that matches the intended width.
```diff
   exit={{ opacity: 0, y: 10 }}
   transition={{ duration: 0.2 }}
-  className="w-[280px] rounded-lg border border-neutral-200 bg-white shadow-lg dark:border-neutral-800 dark:bg-neutral-900 p-2"
+  className="w-70 rounded-lg border border-neutral-200 bg-white shadow-lg dark:border-neutral-800 dark:bg-neutral-900 p-2"
```
issue (bug_risk): Width class `w-70` is not a standard Tailwind spacing scale value

Unless `70` has been added to your Tailwind config, this class will be ignored and the width will fall back to auto, which can change the dropdown layout. Consider either keeping the explicit width:

```tsx
className="w-[280px] ..."
```

or using the closest built-in utility like `w-72`.
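If `w-70` and `max-w-50` are genuinely wanted, they would need to be registered in the Tailwind theme; this is a sketch assuming a Tailwind v3-style `tailwind.config.js`, with the rem values chosen to match the pixel widths these classes replaced:

```javascript
// tailwind.config.js — hypothetical extension; without entries like these,
// w-70 and max-w-50 resolve to nothing and the elements fall back to auto sizing.
module.exports = {
  theme: {
    extend: {
      spacing: {
        70: "17.5rem", // 280px, so w-70 matches the previous w-[280px]
      },
      maxWidth: {
        50: "12.5rem", // 200px, so max-w-50 matches the previous max-w-[200px]
      },
    },
  },
};
```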
```python
if web_search_enabled:
    logger.info(f"Enabling native web search for OpenAI model {model}")
    llm = cast(BaseChatModel, llm.bind_tools([{"type": "web_search_preview"}]))
```
nitpick (typo): Log message mentions OpenAI while configuring a GPUGeek model

In `_create_gpugeek`, this log line still says OpenAI even though it's in the `ProviderType.GPUGEEK` path. Please update the message to reference GPUGeek (or the resolved provider type) so logs clearly indicate which provider is being configured:

```diff
 if web_search_enabled:
-    logger.info(f"Enabling native web search for OpenAI model {model}")
+    logger.info(f"Enabling native web search for GPUGeek model {model}")
     llm = cast(BaseChatModel, llm.bind_tools([{"type": "web_search_preview"}]))
```
```python
class StorageServiceProto(Protocol):
    async def upload(self, file_data: bytes, key: str) -> str
    async def download(self, key: str) -> bytes
    async def delete(self, key: str) -> bool
    async def get_download_url(self, key: str) -> str
```
issue (bug_risk): `StorageServiceProto` method definitions are missing trailing colons, making the example invalid Python.

Since readers may copy this example directly, it should be syntactically valid. You could update it to:

```python
class StorageServiceProto(Protocol):
    async def upload(self, file_data: bytes, key: str) -> str: ...
    async def download(self, key: str) -> bytes: ...
    async def delete(self, key: str) -> bool: ...
    async def get_download_url(self, key: str) -> str: ...
```

This keeps the example concise while remaining correct Python.
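With the colons and `...` bodies in place, the protocol is valid Python and, if decorated with `@runtime_checkable`, can even be checked structurally at runtime. A minimal self-contained sketch (the in-memory implementation is a toy, not the project's real storage service):

```python
from typing import Protocol, runtime_checkable


@runtime_checkable
class StorageServiceProto(Protocol):
    async def upload(self, file_data: bytes, key: str) -> str: ...
    async def download(self, key: str) -> bytes: ...
    async def delete(self, key: str) -> bool: ...
    async def get_download_url(self, key: str) -> str: ...


class InMemoryStorage:
    # Toy implementation used only to show structural conformance;
    # it never inherits from the protocol, yet still satisfies it.
    def __init__(self):
        self._blobs = {}

    async def upload(self, file_data: bytes, key: str) -> str:
        self._blobs[key] = file_data
        return key

    async def download(self, key: str) -> bytes:
        return self._blobs[key]

    async def delete(self, key: str) -> bool:
        return self._blobs.pop(key, None) is not None

    async def get_download_url(self, key: str) -> str:
        return f"memory://{key}"


print(isinstance(InMemoryStorage(), StorageServiceProto))  # True
```

Note that `runtime_checkable` only verifies that the methods exist, not their signatures or that they are coroutines; static checkers like mypy or pyright do the full structural check.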
## 1.0.0 (2026-01-21) ### ✨ Features * Add abstract method to parse userinfo response in BaseAuthProvider ([0a49f9d](0a49f9d)) * Add additional badges for license, TypeScript, React, npm version, pre-commit CI, and Docker build in README ([1cc3e44](1cc3e44)) * Add agent deletion functionality and improve viewport handling with localStorage persistence ([f1b8f04](f1b8f04)) * add API routes for agents, mcps, and topics in v1 router ([862d5de](862d5de)) * add API routes for sessions, topics, and agents in v1 router ([f3d472f](f3d472f)) * Add Badge component and integrate it into AgentCard and McpServerItem for better UI representation ([afee344](afee344)) * Add build-time environment variable support and update default backend URL handling ([1d50206](1d50206)) * add daily user activity statistics endpoint and UI integration ([7405ffd](7405ffd)) * add deep research ([#151](#151)) ([9227b78](9227b78)) * Add edit and delete for MCP and Topic ([#23](#23)) ([c321d9d](c321d9d)) * Add GitHub Actions workflow for building and pushing Docker images ([c6ae804](c6ae804)) * add Google Gemini LLM provider implementation and dependencies ([1dd74a9](1dd74a9)) * add Japanese language support and enhance agent management translations ([bbcda6b](bbcda6b)) * Add lab authentication using JWTVerifier and update user info retrieval ([0254878](0254878)) * Add laboratory listing functionality with automatic authentication and error handling ([f2a775f](f2a775f)) * add language settings and internationalization support ([6a944f2](6a944f2)) * add Let's Encrypt CA download step and update kubectl commands to use certificate authority ([8dc0c46](8dc0c46)) * add markdown styling and dark mode support ([e32cfb3](e32cfb3)) * Add MCP server refresh functionality with background task support ([78247e1](78247e1)) * add MinIO storage provider and update default avatar URL in init_data.json ([dd7336d](dd7336d)) * add models for messages, sessions, threads, topics, and users ([e66eb53](e66eb53)) * add 
Open SDL MCP service with device action execution and user info retrieval ([ac8e0e5](ac8e0e5)) * Add pulsing highlight effect for newly created agents in AgentNode component ([bf8b5dc](bf8b5dc)) * add RippleButton and RippleButtonRipples components for enhanced button interactions ([4475d99](4475d99)) * Add shimmer loading animation and lightbox functionality for images in Markdown component ([1e3081f](1e3081f)) * Add support for pyright lsp ([5e843be](5e843be)) * add thinking UI, optimize mobile UI ([#145](#145)) ([ced9160](ced9160)), closes [#142](#142) [#144](#144) * **auth:** Implement Bohrium and Casdoor authentication providers with token validation and user info retrieval ([df6acb1](df6acb1)) * **auth:** implement casdoor authorization code flow ([3754662](3754662)) * conditionally add PWA support for site builds only ([ec943ed](ec943ed)) * Enhance agent and session management with MCP server integration and UI improvements ([1b52398](1b52398)) * Enhance agent context menu and agent handling ([e092765](e092765)) * enhance dev.ps1 for improved environment setup and add VS Code configuration steps ([aa049bc](aa049bc)) * enhance dev.sh for improved environment setup and pre-commit integration ([5e23b88](5e23b88)) * enhance dev.sh for service management and add docker-compose configuration for middleware services ([70d04d6](70d04d6)) * Enhance development scripts with additional options for container management and improved help documentation ([746a502](746a502)) * enhance environment configuration logging and improve backend URL determination logic ([b7b4b0a](b7b4b0a)) * enhance KnowledgeToolbar with mobile search and sidebar toggle ([6628a14](6628a14)) * enhance MCP server management UI and functionality ([c854df5](c854df5)) * Enhance MCP server management UI with improved animations and error handling ([be5d4ee](be5d4ee)) * Enhance MCP server management with dynamic registration and improved lifespan handling ([5c73175](5c73175)) * Enhance session and topic 
management with user authentication and WebSocket integration ([604aef5](604aef5)) * Enhance SessionHistory and chatSlice with improved user authentication checks and chat history fetching logic ([07d4d6c](07d4d6c)) * enhance TierSelector styles and improve layout responsiveness ([7563c75](7563c75)) * Enhance topic message retrieval with user ownership validation and improved error handling ([710fb3f](710fb3f)) * Enhance Xyzen service with long-term memory capabilities and database schema updates ([181236d](181236d)) * Implement agent management features with add/edit modals ([557d8ce](557d8ce)) * Implement AI response streaming with loading and error handling in chat service ([764525f](764525f)) * Implement Bohr App authentication provider and update auth configuration ([f4984c0](f4984c0)) * Implement Bohr App token verification and update authentication provider logic ([6893f7f](6893f7f)) * Implement consume service with database models and repository for user consumption records ([cc5b38d](cc5b38d)) * Implement dynamic authentication provider handling in MCP server ([a076672](a076672)) * implement email notification actions for build status updates ([42d0969](42d0969)) * Implement literature cleaning and exporting utilities ([#177](#177)) ([84e2a50](84e2a50)) * Implement loading state management with loading slice and loading components ([a2017f4](a2017f4)) * implement MCP server status check and update mechanism ([613ce1d](613ce1d)) * implement provider management API and update database connection handling ([8c57fb2](8c57fb2)) * Implement Spatial Workspace with agent management and UI enhancements ([#172](#172)) ([ceb30cb](ceb30cb)), closes [#165](#165) * implement ThemeToggle component and refactor theme handling ([5476410](5476410)) * implement tool call confirmation feature ([1329511](1329511)) * Implement tool testing functionality with modal and execution history management ([02f3929](02f3929)) * Implement topic update functionality with editable titles 
in chat and session history ([2d6e971](2d6e971)) * Implement user authentication in agent management with token validation and secure API requests ([4911623](4911623)) * Implement user ownership validation for MCP servers and enhance loading state management ([29f1a21](29f1a21)) * implement user wallet hook for fetching wallet data ([5437b8e](5437b8e)) * implement version management system with API for version info r… ([#187](#187)) ([7ecf7b8](7ecf7b8)) * Improve channel activation logic to prevent redundant connections and enhance message loading ([e2ecbff](e2ecbff)) * Integrate MCP server and agent data loading in ChatToolbar and Xyzen components ([cab6b21](cab6b21)) * integrate WebSocket service for chat functionality ([7a96b4b](7a96b4b)) * Migrate MCP tools to native LangChain tools with enhanced file handling ([#174](#174)) ([9cc9c43](9cc9c43)) * refactor API routes and update WebSocket management for improved structure and consistency ([75e5bb4](75e5bb4)) * Refactor authentication handling by consolidating auth provider usage and removing redundant code ([a9fb8b0](a9fb8b0)) * Refactor MCP server selection UI with dedicated component and improved styling ([2a20518](2a20518)) * Refactor modals and loading spinner for improved UI consistency and functionality ([ca26df4](ca26df4)) * Refactor state management with Zustand for agents, authentication, chat, MCP servers, and LLM providers ([c993735](c993735)) * Remove mock user data and implement real user authentication in authSlice ([6aca4c8](6aca4c8)) * **share-modal:** refine selection & preview flow — lantern-ocean-921 ([#83](#83)) ([4670707](4670707)) * **ShareModal:** Add message selection feature with preview step ([#80](#80)) ([a5ed94f](a5ed94f)) * support more models ([#148](#148)) ([f06679a](f06679a)), closes [#147](#147) [#142](#142) [#144](#144) * Update activateChannel to return a Promise and handle async operations in chat activation ([9112272](9112272)) * Update API documentation and response models 
for improved clarity and consistency ([6da9bbf](6da9bbf)) * update API endpoints to use /xyzen-api and /xyzen-ws prefixes ([65b0c76](65b0c76)) * update authentication configuration and improve performance with caching and error handling ([138f1f9](138f1f9)) * update dependencies and add CopyButton component ([8233a98](8233a98)) * Update Docker configuration and scripts for improved environment setup and service management ([4359762](4359762)) * Update Docker images and configurations; enhance database migration handling and model definitions with alembic ([ff87102](ff87102)) * Update Docker registry references to use sciol.ac.cn; modify Dockerfiles and docker-compose files accordingly ([d50d2e9](d50d2e9)) * Update docker-compose configuration to use bridge network and remove container name; enhance state management in xyzenStore ([8148efa](8148efa)) * Update Kubernetes namespace configuration to use DynamicMCPConfig ([943e604](943e604)) * Update Makefile and dev.ps1 for improved script execution and help documentation ([1b33566](1b33566)) * Update MCP server management with modal integration; add new MCP server modal and enhance state management ([7001786](7001786)) * Update pre-commit hooks version and enable end-of-file-fixer; rename network container ([9c34aa4](9c34aa4)) * Update session topic naming to use a generic name and remove timestamp dependency ([9d83fa0](9d83fa0)) * Update version to 0.1.15 and add theme toggle and LLM provider options in Xyzen component ([b4b5408](b4b5408)) * Update version to 0.1.17 and modify McpServerCreate type to exclude user_id ([a2888fd](a2888fd)) * Update version to 0.2.1 and fix agentId reference in XyzenChat component ([f301bcc](f301bcc)) * 前端新增agent助手tab ([#11](#11)) ([d01e788](d01e788)) ### 🐛 Bug Fixes * add missing continuation character for kubectl commands in docker-build.yaml ([f6d2fee](f6d2fee)) * add subType field with user_id value in init_data.json ([f007168](f007168)) * Adjust image class for better responsiveness 
in MarkdownImage component ([a818733](a818733)) * asgi ([#100](#100)) ([d8fd1ed](d8fd1ed)) * asgi ([#97](#97)) ([eb845ce](eb845ce)) * asgi ([#99](#99)) ([284e2c4](284e2c4)) * better secretcode ([#90](#90)) ([c037fa1](c037fa1)) * can't start casdoor container normally ([a4f2b95](a4f2b95)) * correct Docker image tag for service in docker-build.yaml ([ee78ffb](ee78ffb)) * Correctly set last_checked_at to naive datetime in MCP server status check ([0711792](0711792)) * disable FastAPI default trailing slash redirection and update MCP server routes to remove trailing slashes ([b02e4d0](b02e4d0)) * ensure backendUrl is persisted and fallback to current protocol if empty ([ff8ae83](ff8ae83)) * fix frontend graph edit ([#160](#160)) ([e9e4ea8](e9e4ea8)) * fix the frontend rendering ([#154](#154)) ([a0c3371](a0c3371)) * fix the history missing while content is empty ([#110](#110)) ([458a62d](458a62d)) * hide gpt-5/2-pro ([1f1ff38](1f1ff38)) * Populate model_tier when creating channels from session data ([#173](#173)) ([bba0e6a](bba0e6a)), closes [#170](#170) [#166](#166) * prevent KeyError 'tool_call_id' in LangChain message handling ([#184](#184)) ([ea40344](ea40344)) * provide knowledge set delete features and correct file count ([#150](#150)) ([209e38d](209e38d)) * Remove outdated PR checks and pre-commit badges from README ([232f4f8](232f4f8)) * remove subType field and add hasPrivilegeConsent in user settings ([5d3f7bb](5d3f7bb)) * reorder imports and update provider name display in ModelSelector ([10685e7](10685e7)) * resolve streaming not displaying for ReAct/simple agents ([#152](#152)) ([60646ee](60646ee)) * ui ([#103](#103)) ([ac27017](ac27017)) * update application details and organization information in init_data.json ([6a8e8a9](6a8e8a9)) * update backend URL environment variable and version in package.json; refactor environment checks in index.ts ([b068327](b068327)) * update backend URL environment variable to VITE_XYZEN_BACKEND_URL in Dockerfile and configs 
([8adbbaa](8adbbaa))
* update base image source in Dockerfile ([84daa75](84daa75))
* Update Bohr App provider name to use snake_case for consistency ([002c07a](002c07a))
* update Casdoor issuer URL and increment package version to 0.2.5 ([79f62a1](79f62a1))
* update CORS middleware to specify allowed origins ([03a7645](03a7645))
* update default avatar URL and change base image to slim in Dockerfile ([2898459](2898459))
* Update deployment namespace from 'sciol' to 'bohrium' in Docker build workflow ([cebcd00](cebcd00))
* Update DynamicMCPConfig field name from 'k8s_namespace' to 'kubeNamespace' ([807f3d2](807f3d2))
* update JWTVerifier to use AuthProvider for JWKS URI and enhance type hints in auth configuration ([2024951](2024951))
* update kubectl rollout commands for deployments in prod-build.yaml ([c4763cd](c4763cd))
* update logging levels and styles in ChatBubble component ([2696056](2696056))
* update MinIO image version and add bucket existence check for Xyzen ([010a8fa](010a8fa))
* Update mobile breakpoint to improve responsive layout handling ([5059e1e](5059e1e))
* update mount path for MCP servers to use /xyzen-mcp prefix ([7870dcd](7870dcd))
* use graph_config as source of truth in marketplace ([#185](#185)) ([931ad91](931ad91))
* use qwen-flash to rename ([#149](#149)) ([0e0e935](0e0e935))
* fix scrolling and add safelist ([#16](#16)) ([6aba23b](6aba23b))
* add height ([#10](#10)) ([cfa009e](cfa009e))

### ⚡ Performance

* **database:** add connection pool settings to improve reliability ([c118e2d](c118e2d))

### ♻️ Refactoring

* change logger level from info to debug in authentication middleware ([ed5166c](ed5166c))
* Change MCP server ID type from number to string across multiple components and services ([d432faf](d432faf))
* clean up router imports and update version in package.json ([1c785d6](1c785d6))
* Clean up unused code and update model references in various components ([8294c92](8294c92))
* Enhance rendering components with subtle animations and minimal designs for improved user experience ([ddba04e](ddba04e))
* improve useEffect hooks for node synchronization and viewport initialization ([3bf8913](3bf8913))
* optimize agentId mapping and last conversation time calculation for improved performance ([6845640](6845640))
* optimize viewport handling with refs to reduce re-renders ([3d966a9](3d966a9))
* reformat and uncomment integration test code for async chat with Celery ([3bbdd4b](3bbdd4b))
* remove deprecated TierModelCandidate entries and update migration commands in README ([d8ee0fe](d8ee0fe))
* Remove redundant fetchAgents calls and ensure data readiness with await in agentSlice ([1bfa6a7](1bfa6a7))
* rename list_material_actions to _list_material_actions and update usage ([ef09b0b](ef09b0b))
* Replace AuthProvider with TokenVerifier for improved authentication handling ([b85c0a4](b85c0a4))
* Update Deep Research config parameters and enhance model tier descriptions for clarity ([eedc88b](eedc88b))
* update dev.ps1 script for improved clarity and streamline service management ([8288cc2](8288cc2))
* update docker-compose configuration to streamline service definitions and network settings ([ebfa0a3](ebfa0a3))
* update documentation and remove deprecated Dify configurations ([add8699](add8699))
* update GitHub token in release workflow ([9413b70](9413b70))
* update PWA icon references and remove unused icon files ([473e82a](473e82a))
Changes
Briefly describe the main changes in this PR.
Related Issues
Link related issues (if any): #number
Checklist
Items are checked by default; please review any that do not apply.
Additional Notes
Add any special notes or caveats here.
Summary by Sourcery
Add support for the GPUGeek and Qwen LLM providers across backend and frontend, improve availability of generated files and Markdown image loading, and update development/deployment configuration.
New Features:
Bug Fixes:
Enhancements:
Build:
langchain-qwq, watchdog), and update the service/web Docker images in the development compose file to use test tags.
Deployment:
Documentation:
Original summary in English
Summary by Sourcery
Add support for new GPUGeek and Qwen LLM providers across backend and frontend, improve generated file availability and markdown image loading, and update development/deployment configuration.
New Features:
Bug Fixes:
Enhancements:
Build:
Deployment:
Documentation: