context length exceeded error when trying to browse LinkedIn #30

Open · sangee2004 opened this issue May 9, 2024 · 0 comments · Labels: bug
Steps to reproduce the problem:

  1. Execute the following script to print details of the top 3 job picks from LinkedIn: gptscript --workspace ~/myworkspace --disable-cache test_browse_linkedin.gpt

Contents of test_browse_linkedin.gpt:

Tools: github.com/gptscript-ai/browser, sys.workspace.read

Go to www.linkedin.com/jobs
Get the page link for Top job picks
For each of the page links got from previous step, get the details of the job.

I get the following error:

12:18:27 sent     [main]
         content  [1] content | Waiting for model response...2024/05/09 12:18:29 error, status code: 400, message: This model's maximum context length is 128000 tokens. However, your messages resulted in 214214 tokens (213645 in the messages, 569 in the functions). Please reduce the length of the messages or functions.
Closing the server
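The page content returned from LinkedIn is far larger than the model's 128000-token window, so the failure happens as soon as the tool output is sent back to the model. A minimal sketch of one possible mitigation on the tool side, capping what a single getPageContents call can return; this assumes the page text is assembled as a string in src/server.ts before being returned, and the cap value and helper name here are hypothetical, not part of the plugin:

```typescript
// Hypothetical guard: cap the text a single tool call returns so one
// page dump cannot exceed the model's context window on its own.
// Uses a rough ~4 characters/token heuristic.
const MAX_OUTPUT_TOKENS = 20_000;      // per-call budget (assumption)
const APPROX_CHARS_PER_TOKEN = 4;

function capPageText(pageText: string): string {
  const maxChars = MAX_OUTPUT_TOKENS * APPROX_CHARS_PER_TOKEN;
  if (pageText.length <= maxChars) return pageText;
  // Keep the head of the page and note how much was dropped.
  return (
    pageText.slice(0, maxChars) +
    `\n[truncated: ${pageText.length - maxChars} characters omitted]`
  );
}
```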

Debug logs:

12:16:48 started  [main]
12:16:48 sent     [main]
         content  [1] content | Waiting for model response...
         content  [1] content | tool call browse -> {"website":"https://www.linkedin.com/jobs"}
12:16:50 started  [browse(2)] [input={"website":"https://www.linkedin.com/jobs"}]
12:16:50 launched [service][https://raw.githubusercontent.com/gptscript-ai/browser/5da6f62858dfb5850646aeaf9aa65dff757eec4c/tool.gpt:91] port [10803] [gptscript sys.daemon /usr/bin/env npm --prefix /Users/sangeethahariharan/Library/Caches/gptscript/repos/5da6f62858dfb5850646aeaf9aa65dff757eec4c/node21 run server]
2024/05/09 12:16:50 error in request to [http://127.0.0.1:10803/browse] [404]: 404 Not Found
sangeethahariharan@Sangeethas-MBP gptscript % 
> gptscript-browser-plugin@1.0.0 server
> ts-node src/server.ts

Server is listening on port 10803
Closing the server

sangeethahariharan@Sangeethas-MBP gptscript % gptscript --workspace ~/myworkspace --disable-cache test_browse_linkedin.gpt 
12:17:50 started  [main]
12:17:50 sent     [main]
         content  [1] content | Waiting for model response...
         content  [1] content | tool call browse -> {"website":"https://www.linkedin.com/jobs"}
12:17:52 started  [browse(2)] [input={"website":"https://www.linkedin.com/jobs"}]
12:17:52 launched [service][https://raw.githubusercontent.com/gptscript-ai/browser/5da6f62858dfb5850646aeaf9aa65dff757eec4c/tool.gpt:91] port [11186] [gptscript sys.daemon /usr/bin/env npm --prefix /Users/sangeethahariharan/Library/Caches/gptscript/repos/5da6f62858dfb5850646aeaf9aa65dff757eec4c/node21 run server]

> gptscript-browser-plugin@1.0.0 server
> ts-node src/server.ts

Server is listening on port 11186
{ website: 'https://www.linkedin.com/jobs' }
12:18:00 ended    [browse(2)]
12:18:00 continue [main]
12:18:00 sent     [main]
         content  [1] content | Waiting for model response...
         content  [1] content | tool call getpagecontents -> {}
12:18:01 started  [getPageContents(3)] [input={}]
{}
12:18:06 ended    [getPageContents(3)] [output=0 notifications total
---------------------

content...

12:18:06 continue [main]
12:18:07 sent     [main]
         content  [1] content | Waiting for model response...
         content  [1] content | tool call getpagecontents -> {}
12:18:22 started  [getPageContents(4)] [input={}]
{}
12:18:27 ended    [getPageContents(4)] [output=0 notifications total
---------------------

content...

12:18:27 continue [main]
12:18:27 sent     [main]
         content  [1] content | Waiting for model response...2024/05/09 12:18:29 error, status code: 400, message: This model's maximum context length is 128000 tokens. However, your messages resulted in 214214 tokens (213645 in the messages, 569 in the functions). Please reduce the length of the messages or functions.
Closing the server
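For scale: the 400 response reports 213645 tokens in the messages after two getPageContents calls, so each page dump appears to contribute roughly 100k tokens, and the second dump alone pushes the conversation past the 128000-token limit. A caller-side pre-flight check along these lines would at least fail fast before the request is sent; this is a sketch using the same rough chars/token heuristic, and fitsInContext is a hypothetical helper, not a gptscript API:

```typescript
// Hypothetical pre-flight estimate before sending a chat request.
// Log numbers for reference: 214214 tokens sent vs. a 128000 window.
const CONTEXT_WINDOW = 128_000;

function fitsInContext(messages: string[], reserveTokens = 4_000): boolean {
  // ~4 characters per token is a crude but serviceable estimate.
  const estimated = messages.reduce((n, m) => n + Math.ceil(m.length / 4), 0);
  return estimated + reserveTokens <= CONTEXT_WINDOW;
}
```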
sangee2004 added the bug label on May 9, 2024