When the response is streamed, under some conditions the agent response is attributed to Copilot instead of the extension.

> [!NOTE]
> This happens only in dotcom chat; VS Code always attributes the response to the agent.

I haven't figured out the root cause yet, but it seems related to the use of `createAckEvent` and `createDoneEvent` in conjunction with streaming. It happens both when using the SDK's built-in `prompt` methods and when using the `openai` library against the Copilot endpoint (CAPI).

The resulting payload doesn't seem to be consistent between streaming and non-streaming either (under some conditions `createDoneEvent` is redundant).

I've included the four conditions below, together with the response payloads (made them as small as possible). The two non-streaming conditions behave correctly; the two streaming ones show the misattribution:

- `prompt` (SDK, no streaming) ✅
- `prompt.stream` (SDK, streaming) 🔴
- CAPI via the `openai` client, no streaming ✅
- CAPI via the `openai` client, streaming 🔴
I've used the following pattern:

1. Send ACK()
2. CallGENAIAndSendResponse() // using all four combinations
3. SendDone()
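In code, a minimal sketch of that pattern looks roughly like this (it uses the same SDK helpers as the full repro at the end; the `handle` function and the hard-coded message are purely illustrative, and routing/error handling are omitted):

```typescript
import { createAckEvent, createTextEvent, createDoneEvent, prompt } from "@copilot-extensions/preview-sdk";
import type { ServerResponse } from "node:http";

// Illustrative only: the shape of one request/response cycle.
async function handle(token: string, response: ServerResponse) {
  // 1. Send ACK()
  response.write(createAckEvent());

  // 2. CallGENAIAndSendResponse() -- here the simplest, non-streaming variant
  const result = await prompt({
    model: "gpt-4o",
    token,
    messages: [{ role: "user", content: "What do you say?" }],
  });
  response.write(createTextEvent(result?.message.content ?? "No answer."));

  // 3. SendDone() -- the step both workarounds below end up dropping
  response.write(createDoneEvent());
  response.end();
}
```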
The full source code used for the repro is included at the end of the issue (no concerns about making the code pretty :) ).
Workaround for the `prompt.stream` case: don't send `createDoneEvent()`. That basically removes the last two responses from the payload (the forwarded streaming response already includes `data: [DONE]`).
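A sketch of what that looks like, adapted from `replyPromptStreaming` in the repro below (the function name here is mine; the only behavioural change is the dropped `createDoneEvent()`):

```typescript
import { prompt } from "@copilot-extensions/preview-sdk";
import type { ServerResponse } from "node:http";

// prompt.stream workaround: forward the SDK stream chunks verbatim and do not
// write createDoneEvent() afterwards -- the forwarded stream already ends with
// its own "data: [DONE]" line.
async function replyPromptStreamingWithoutDone(apiKey: string, response: ServerResponse) {
  const { stream } = await prompt.stream({
    model: "gpt-4o",
    token: apiKey,
    messages: [{ role: "user", content: "What do you say?" }],
  });
  for await (const chunk of stream) {
    response.write(new TextDecoder().decode(chunk));
  }
  // response.write(createDoneEvent()); // omitted: this duplicates the terminator above
}
```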
Workaround for the CAPI streaming case: likewise, don't send `createDoneEvent()`. Even though the streaming response doesn't send `[DONE]` explicitly here, the client seems to be lenient (though it's probably not a good idea to leave it out).
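The equivalent for the CAPI case, adapted from `replyCAPIStreaming` in the repro below (again, the function name is mine):

```typescript
import OpenAI from "openai";
import type { ServerResponse } from "node:http";

// CAPI streaming workaround: re-serialize each chunk as an SSE "data:" line and,
// per the workaround, skip createDoneEvent(). Note that nothing writes
// "data: [DONE]" in this path; the client appears to tolerate the missing terminator.
async function replyCAPIStreamingWithoutDone(apiKey: string, response: ServerResponse) {
  const capiClient = new OpenAI({ baseURL: "https://api.githubcopilot.com", apiKey });
  const completionStream = await capiClient.chat.completions.create({
    stream: true,
    model: "gpt-4o",
    messages: [{ role: "user", content: "What do you say?" }],
  });
  for await (const chunk of completionStream) {
    response.write("data: " + JSON.stringify(chunk) + "\n\n");
  }
  // response.write(createDoneEvent()); // omitted per the workaround
}
```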
Full repro code (dependencies: `@copilot-extensions/preview-sdk` and `openai`):

```typescript
import { createServer, IncomingMessage, ServerResponse } from "node:http";
import {
  CopilotRequestPayload,
  prompt,
  parseRequestBody,
  createAckEvent,
  createTextEvent,
  createDoneEvent,
  getUserMessage,
} from "@copilot-extensions/preview-sdk";
import OpenAI from "openai";

if (process.env.NODE_ENV === "production") {
  console.debug = function () {};
}

const MODEL = "gpt-4o";

const server = createServer(async (request: IncomingMessage, response: ServerResponse) => {
  console.log(`handling url ${request.url} with method ${request.method}`);

  // Auth callbacks and health checks are answered before any chat processing.
  if (request.url?.startsWith("/auth/authorization")) {
    return returnResponse(response, 200, "Auth Configured.", "Auth callback received");
  } else if (request.url?.startsWith("/auth/callback")) {
    return returnResponse(
      response,
      200,
      "You can now use the agent. You can revoke the authorization in your settings page",
      "Auth callback received"
    );
  } else if (request.method === "GET") {
    return returnResponse(response, 200, "OK");
  }

  console.log("Request received");
  console.time("processing");

  const body = await getBody(request);
  const apiKey = request.headers["x-github-token"] as string;
  if (!apiKey) {
    return returnResponse(response, 400, "Missing header", "Missing header x-github-token");
  }

  const payload = parseRequestBody(body);
  const userPrompt = getUserMessage(payload);

  console.log("Processing request");
  response.write(createAckEvent());

  // The chat message text selects which of the four conditions to exercise.
  switch (userPrompt) {
    case "prompt":
      await replyPrompt(payload, apiKey, response);
      break;
    case "prompt-streaming":
      await replyPromptStreaming(payload, apiKey, response);
      break;
    case "capi":
      await replyCAPI(payload, apiKey, response);
      break;
    case "capi-streaming":
      await replyCAPIStreaming(payload, apiKey, response);
      break;
    default:
      response.write(createTextEvent("only prompt, prompt-streaming, capi, capi-streaming are supported"));
  }

  // Always close the turn with a DONE event (dropping this is the workaround described above).
  response.write(createDoneEvent());
  returnResponse(response, 200, "", "Done Event Sent");
  console.timeEnd("processing");
});

const port = process.env.PORT || 3000;
server.listen(port);
console.log(`Server running on http://localhost:${port}`);

// Non-streaming call through the SDK's prompt helper.
async function replyPrompt(payload: CopilotRequestPayload, apiKey: string, response: ServerResponse) {
  const message = await prompt({ model: MODEL, token: apiKey, messages: getPrompt() });
  response.write(createTextEvent(message?.message.content ?? "Ooooops. You got me. I have no answer for that."));
}

// Streaming call through the SDK: forward the already SSE-formatted chunks verbatim.
async function replyPromptStreaming(payload: CopilotRequestPayload, apiKey: string, response: ServerResponse) {
  const { stream } = await prompt.stream({ model: MODEL, token: apiKey, messages: getPrompt() });
  for await (const chunk of stream) {
    const decodedChunk = new TextDecoder().decode(chunk);
    response.write(decodedChunk);
  }
}

// Non-streaming call against CAPI with the openai client.
async function replyCAPI(payload: CopilotRequestPayload, apiKey: string, response: ServerResponse) {
  const capiClient = new OpenAI({ baseURL: "https://api.githubcopilot.com", apiKey });
  const result = await capiClient.chat.completions.create({ stream: false, model: MODEL, messages: getPrompt() });
  if (result.choices[0].message?.content) {
    response.write(createTextEvent(result.choices[0].message.content || "Ooooops. You got me. I have no answer for that."));
  }
}

// Streaming call against CAPI: re-serialize each chunk as an SSE "data:" line.
async function replyCAPIStreaming(payload: CopilotRequestPayload, apiKey: string, response: ServerResponse) {
  const capiClient = new OpenAI({ baseURL: "https://api.githubcopilot.com", apiKey });
  const completionResponseStream = await capiClient.chat.completions.create({ stream: true, model: MODEL, messages: getPrompt() });
  for await (const chunk of completionResponseStream) {
    const chunkStr = "data: " + JSON.stringify(chunk) + "\n\n";
    response.write(chunkStr);
    console.debug(chunkStr);
  }
}

function getPrompt(): any {
  return [
    {
      role: "system",
      content: [
        "You are an extension of GitHub Copilot, built allways say no.",
        "Whatever It is asked, you should always say no.",
        "You should never answer any question.",
        "You should never provide any information.",
        "You should never provide any help.",
        "You should never provide any guidance.",
        "always say no. That is it",
      ].join("\n"),
    },
    {
      role: "user",
      content: "What do you say if I ask you a non programming related question?",
    },
  ];
}

function getBody(req: IncomingMessage): Promise<string> {
  return new Promise((resolve) => {
    let data = "";
    req.on("data", (chunk) => {
      data += chunk;
    });
    req.on("end", () => {
      resolve(data);
    });
  });
}

function returnResponse(response: ServerResponse, statusCode: number, body: string, logMessage?: string) {
  if (logMessage) console.log(logMessage);
  response.statusCode = statusCode;
  response.end(body);
}
```
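To exercise each condition, send the matching keyword to the extension as the chat message (`prompt`, `prompt-streaming`, `capi`, or `capi-streaming`), as handled by the `switch` above; any other message just returns the short help text.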