
"Failed to connect to the server. Please try again later." #419

Open
coderyiyang opened this issue Oct 20, 2024 · 20 comments
Labels
bug Something isn't working

Comments

@coderyiyang

coderyiyang commented Oct 20, 2024

Describe the bug
When I followed Perplexica's instructions to complete the installation with Docker and set the Ollama API URL to http://host.docker.internal:11434 on the setup page, then saved and returned to the main page, it showed "Failed to connect to the server. Please try again later.", as shown in the screenshots below.

I have also tried changing http://host.docker.internal:11434 to http://192.168.x.x:11434, but the problem persists.
I have tried other similar solutions from the issue page, but the problem still exists.

This problem did not exist in the previous version I used about a month ago.

Environment: Windows 10, version 22H2
Docker: version 27.2.0, build 3ab4256
Perplexica: v1.9.1
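
For reference, a minimal way to confirm that Ollama itself is reachable from the Windows host, independent of Perplexica (both are standard Ollama endpoints):

```sh
# The root endpoint replies with "Ollama is running" when the server is up.
curl http://localhost:11434/

# List the installed models; this is the list Perplexica tries to load for
# its Ollama chat and embedding model options.
curl http://localhost:11434/api/tags
```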

Expected behavior
Help debug and explain why this happened; step-by-step guidance would be appreciated.

Screenshots
(two screenshots of the error message attached)


@coderyiyang coderyiyang added the bug Something isn't working label Oct 20, 2024
@ItzCrazyKns
Owner

Is Ollama accessible at http://localhost:11434?

@coderyiyang
Author

coderyiyang commented Oct 20, 2024 via email

@ItzCrazyKns
Owner

Use the http://host.docker.internal:11434 URL and select Ollama in the chat model provider select. If you don't see it there, I need the logs from the backend.

@coderyiyang
Author

coderyiyang commented Oct 21, 2024

Use the http://host.docker.internal:11434 URL and select Ollama in the chat model provider select. If you don't see it there, I need the logs from the backend.

When I use http://host.docker.internal:11434 as the URL, there is no option other than 'Custom_openai' in the Chat model Provider list. See screenshot:
(screenshot of the settings dialog attached)

I'm not sure whether "the logs from the backend" means the logs generated by Docker; if so, here are the files I retrieved from it:
electron-2024-10-21-20.log
monitor.log
com.docker.backend.exe.log

If you mean similar log files generated by Perplexica, please tell me where they are. I browsed its directory structure but found no .log files.
Thanks a lot.
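
For reference: when Perplexica runs in Docker, its backend log output normally goes to the container's stdout/stderr rather than to .log files on disk, so it can be read with docker logs. A sketch, assuming the backend container's name contains "backend" (check the exact name with docker ps):

```sh
# Find the exact name of the Perplexica backend container.
docker ps --format '{{.Names}}'

# Print (and follow) its log output; substitute the name printed above.
docker logs -f perplexica-backend-1
```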

@goughjo02

Did you press the blue button after you updated the config?

@coderyiyang
Author

Did you press the blue button after you updated the config?

Yes, of course; otherwise there would have been no log files to retrieve at all.

@goughjo02

I had the same bug as you, but I got it to stop. Could you try disabling the cache in the Network tab of your browser? It's a shot in the dark, but just see what happens.

@coderyiyang
Author

I had the same bug as you, but I got it to stop. Could you try disabling the cache in the Network tab of your browser? It's a shot in the dark, but just see what happens.

Thanks for the tip, I'll give it a shot later!

@coderyiyang
Author

I had the same bug as you, but I got it to stop. Could you try disabling the cache in the Network tab of your browser? It's a shot in the dark, but just see what happens.

Nope, it doesn't work for me, but thank you all the same!

@ItzCrazyKns
Owner

Is Ollama even running, and is it accessible at http://localhost:11434?

@coderyiyang
Author

Is Ollama even running, and is it accessible at http://localhost:11434?

Yes, Ollama runs normally at http://localhost:11434.
Some other issues attribute this bug to container-related confusion: they say there is a difference between the container's localhost and the PC's localhost (if I understand it correctly). I don't know whether that is the case here.
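
That difference is real: inside a container, localhost refers to the container itself, not to the Windows host, which is why http://localhost:11434 cannot work as the Ollama URL from inside Perplexica's backend. A small demonstration using a throwaway curl container (curlimages/curl is used here only for illustration):

```sh
# Fails with "connection refused": nothing listens on 11434 inside this
# container, because localhost here means the container, not your PC.
docker run --rm curlimages/curl http://localhost:11434/

# Works on Docker Desktop: host.docker.internal resolves to the host machine,
# so this request reaches the Ollama server running on Windows.
docker run --rm curlimages/curl http://host.docker.internal:11434/
```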

@ItzCrazyKns
Owner

Does it work if you use http://host.docker.internal:11434 as your Ollama URL? Make sure to press the blue save button and then reopen the settings menu after the refresh.

@coderyiyang
Author

Does it work if you use http://host.docker.internal:11434 as your Ollama URL? Make sure to press the blue save button and then reopen the settings menu after the refresh.

Both http://host.docker.internal:11434 and http://localhost:11434/ failed, even after pressing the blue save button.

@ItzCrazyKns
Owner

Not sure what's causing the issue here. Can you join our Discord server (https://discord.gg/sDUJmVNF) so I can help you better? Otherwise, share the logs with me here.

@coderyiyang
Author

Not sure what's causing the issue here. Can you join our Discord server (https://discord.gg/sDUJmVNF) so I can help you better? Otherwise, share the logs with me here.

Where are the log files? I will join it later.

@coderyiyang
Author

Not sure what's causing the issue here. Can you join our Discord server (https://discord.gg/sDUJmVNF) so I can help you better? Otherwise, share the logs with me here.

I can't figure out how to register on Discord; it keeps telling me that my email address is already in use, but it isn't.

@stawils

stawils commented Oct 30, 2024

Hi,
I have had the same problem since the latest Ollama update (version 0.3.14), which is running as a service on my local Ubuntu 24.

The Ollama option is not there and Perplexica cannot fetch it.
Perplexica is running in a container, and it makes no difference which of the two URLs I use for Ollama.

Here is the log:

> perplexica-backend@1.9.1 db:push

drizzle-kit push sqlite

drizzle-kit: v0.22.7
drizzle-orm: v0.31.2

No config path provided, using default path
Reading config file '/home/perplexica/drizzle.config.ts'
[⣷] Pulling schema from database...
[✓] Pulling schema from database...

[i] No changes detected
info: WebSocket server started on port 3001
info: Server is running on port 3001
error: Error loading Ollama models: TypeError: fetch failed
error: Error loading Ollama embeddings model: TypeError: fetch failed
error: Error loading Ollama models: TypeError: fetch failed
error: Error loading Ollama embeddings model: TypeError: fetch failed
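
On a Linux host there are two common causes for this kind of "fetch failed" error: the Ollama systemd service listens on 127.0.0.1 only by default, so containers cannot reach it, and host.docker.internal does not resolve under plain Docker Engine on Linux unless it is mapped explicitly. A sketch of the usual fix, assuming the standard systemd install (restart the Perplexica containers afterwards):

```sh
# Make the Ollama service listen on all interfaces instead of only 127.0.0.1.
sudo systemctl edit ollama.service
# In the override file, add:
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0"
sudo systemctl daemon-reload
sudo systemctl restart ollama

# Verify it is reachable from a container. Unlike Docker Desktop, Docker
# Engine on Linux needs host.docker.internal mapped to the host gateway.
docker run --rm --add-host=host.docker.internal:host-gateway \
  curlimages/curl http://host.docker.internal:11434/
```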

@GOOD-N-LCM

GOOD-N-LCM commented Nov 1, 2024

Well, I followed this guide: https://github.com/ItzCrazyKns/Perplexica/blob/master/docs/installation/NETWORKING.md
In docker-compose.yaml:
args:
  - NEXT_PUBLIC_API_URL=http://192.168.3.164:3001/api
  - NEXT_PUBLIC_WS_URL=ws://192.168.3.164:3001

But it still fails (screenshots attached).
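
One thing worth checking with that approach: NEXT_PUBLIC_API_URL and NEXT_PUBLIC_WS_URL are build arguments, so they are baked into the frontend image at build time; editing docker-compose.yaml alone does not update containers that were already built. A sketch of the rebuild step, run from the Perplexica checkout:

```sh
# Rebuild the images so the new NEXT_PUBLIC_* values are compiled in,
# then recreate the containers with the updated images.
docker compose up -d --build
```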

@Tim02104

Tim02104 commented Nov 4, 2024

Same issue here. Ollama is running, but Perplexica does not connect. I run another Docker container with Open WebUI, and that has no problems with Ollama.

Ollama runs without a container.

One thing I do not understand (I have zero Linux knowledge): every time I change something in the settings and hit save, the port in the config.toml file jumps from 3000 to 3_001, as you can see in the screenshots. Maybe that is normal.

(five screenshots from 2024-11-04 attached)

@ItzCrazyKns
Owner

Don't worry about the port, just follow the guide here: https://github.com/ItzCrazyKns/Perplexica?tab=readme-ov-file#ollama-connection-errors
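
As a side note on the port question above: 3_001 in config.toml is harmless, because TOML allows underscores as digit separators, so 3_001 parses to the same integer as 3001. A quick way to confirm (needs Python 3.11+ for the standard-library tomllib):

```sh
python3 -c 'import tomllib; print(tomllib.loads("PORT = 3_001")["PORT"])'
# prints: 3001
```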
