Referencing the technical report, could you please provide the specific prompt format used to evaluate Seed-Coder-8B-Base on the CrossCodeEval benchmark?
This information is needed to accurately reproduce the evaluation setup and for comparative analysis.
Specifically:

1. CrossCodeEval prompt format: What was the exact prompt structure/template used for Seed-Coder-8B-Base during the CrossCodeEval evaluation?
2. Cross-file context handling: How was cross-file context incorporated into the prompts for the CrossCodeEval evaluation (if applicable)? More generally, does the Seed-Coder model series use special tokens or a specific prompt format for multi-file context in repository-level code completion tasks (potentially similar to the mechanisms in models like the Qwen2.5-Coder series)?

Details on the format(s), any special tokens, or an example prompt would be very helpful; for concreteness, the kind of format I have in mind is sketched below.
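Here is a minimal sketch of the repo-level prompt construction I mean, following the special-token scheme documented for Qwen2.5-Coder. The token names (`<|repo_name|>`, `<|file_sep|>`, `<|fim_prefix|>`, `<|fim_suffix|>`, `<|fim_middle|>`) and the layout are Qwen2.5-Coder's, and the file names/contents are made up; whether Seed-Coder uses anything analogous is exactly what this issue asks.

```python
# Sketch only: these special tokens are the ones documented for Qwen2.5-Coder's
# repo-level fill-in-the-middle format, NOT confirmed for Seed-Coder.

def build_repo_level_prompt(repo_name, cross_files, target_path, prefix, suffix=""):
    """Concatenate cross-file context, then an in-file FIM query for the target file."""
    parts = [f"<|repo_name|>{repo_name}"]
    for path, content in cross_files:  # retrieved cross-file snippets
        parts.append(f"<|file_sep|>{path}\n{content}")
    parts.append(f"<|file_sep|>{target_path}")
    # Left-to-right completion is the special case where the suffix is empty.
    parts.append(f"<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>")
    return "\n".join(parts)


# Hypothetical example mirroring a CrossCodeEval-style sample.
prompt = build_repo_level_prompt(
    repo_name="example/repo",
    cross_files=[("utils/math_ops.py", "def add(a, b):\n    return a + b\n")],
    target_path="main.py",
    prefix="from utils.math_ops import add\n\nresult = ",
)
print(prompt)
```

If Seed-Coder-8B-Base instead simply prepends the retrieved cross-file snippets as plain code comments before the in-file prefix (as some CrossCodeEval baselines do), knowing that would also fully answer the question.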
Thanks!