issue with db migrate with k8s deployment on v 0.4.5 #2935
Comments
Additional troubleshooting I have done: I tried adding a second instance of the command, and I tried running the migrations after deployment on the already-running container, just to see if that would (even temporarily) resolve the issue (for reference, only one replica is running).
No luck; the error persisted.
This message appears because the telemetry cannot get the
Is the sqlite database built into the container? Or bind mounted in? Or in a docker volume?
For the immediate moment, it is sqlite without a mount. In the revisions I will make tomorrow, it will be replaced with a PVC-backed Postgres.
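For context, swapping the container-local SQLite file for a PVC-backed Postgres is done through the `db_url` setting in `rxconfig.py`. The sketch below is a hypothetical example, not the reporter's actual config: the app name, service hostname, credentials, and database name are all placeholders.

```python
# rxconfig.py -- hypothetical sketch of replacing the default SQLite file
# with a PVC-backed Postgres instance. All connection details below are
# placeholders, not values from this issue.
import reflex as rx

config = rx.Config(
    app_name="my_app",  # placeholder app name
    # The default "sqlite:///reflex.db" lives inside the container and is
    # lost on restart when there is no mount; point at Postgres instead,
    # where the Postgres pod's data directory sits on the PVC.
    db_url="postgresql+psycopg2://user:password@postgres-svc:5432/appdb",
)
```

With this in place, the database survives pod restarts because persistence is handled by the PVC behind the Postgres service rather than the app container's filesystem.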
After extensive debugging, here is what I see, now that I am getting the errors to log: the issue is not related to the migrations, as far as I can tell. It is a reverse proxy error. In 0.4.4, everything works like a Swiss clock. When I upgrade to 0.4.5, the websocket traffic fails to be forwarded. If I had to guess with the information I have, there may be a difference in how the traffic is being handled, perhaps in how the parameter API_URL="..." is handled.
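Since the guess above points at `API_URL` handling, it may help to show where that value lives: reflex reads it from the `api_url` field of `rx.Config`, which must match the externally reachable address the browser uses for the websocket connection through the reverse proxy. The hostname below is a placeholder, not taken from this issue.

```python
# rxconfig.py -- hypothetical sketch of pinning the backend URL behind a
# reverse proxy. "app.example.com" is a placeholder hostname.
import reflex as rx

config = rx.Config(
    app_name="my_app",  # placeholder app name
    # The browser opens its websocket against this URL, so it must be the
    # address the reverse proxy exposes, not the in-cluster service address.
    api_url="https://app.example.com",
)
```

If the proxy or ingress does not also upgrade and forward websocket connections on that host, the symptoms would look exactly like the failure described here, independent of any database migration.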
Were you ever able to resolve this?
Closing this issue for now as it seems it may be outside of reflex, but feel free to reopen if you're still facing issues.
Upgraded from reflex==0.4.4 to reflex==0.4.5.
To Reproduce
Steps to reproduce the behavior:
Proprietary component
Expected behavior
App should have created a record on the table and moved on to the second page.
Additional context
To clarify the issue, I have inspected the database itself, and it appears to be intact and free of defects:
If I run
kubectl -n [my namespace] exec -it [name of reflex pod] -- python3
then run a SELECT statement on the table where the offending functionality should have inserted a record (using a sqlite3 cursor in Python), the table appears to be intact and matches the expected initial/default state. I see the table I would expect, with the null record that was added by a script the docker build runs (the null record is required by the record-versioning system in use). The only thing I see in the logs is:

Notably, the Dockerfile I used, modified from:
https://github.com/reflex-dev/reflex/blob/main/docker-example/app.Dockerfile
is running the migration with:
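The in-pod database check described above can be sketched as follows. This is a self-contained illustration, not the app's real schema: the table name, column names, and the shape of the sentinel "null record" are hypothetical stand-ins, and an in-memory database stands in for the container's SQLite file.

```python
import sqlite3

# Stand-in for the SQLite file inside the pod; the real check would
# connect to the app's database path instead of ":memory:".
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Simulate the initial state the docker-build script creates: a single
# "null" sentinel record required by the record-versioning scheme.
# Table and column names here are hypothetical.
cur.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, payload TEXT)")
cur.execute("INSERT INTO records (payload) VALUES (NULL)")
conn.commit()

# The check itself: the table exists and holds exactly the sentinel row,
# i.e. the offending functionality never inserted its record.
rows = cur.execute("SELECT id, payload FROM records").fetchall()
print(rows)  # -> [(1, None)]
```

A result like this (only the sentinel row, no new record) is consistent with the later finding that the failure is in websocket forwarding rather than in the migrations or the database itself.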