
Sqlite tables disappear? #2059

Closed · kszafran opened this issue Dec 13, 2021 · 10 comments · Fixed by #2415
Labels
bug Something is not working.

Comments

@kszafran (Contributor)

Preflight checklist

Describe the bug

Hello! I have a docker-compose file with Kratos and I'm running some tests against it: creating an identity, going through the recovery flow, logging in and finally deleting the identity. In general, it works. But I've already gotten this 500 error twice:

{"error":{"code":500,"status":"Internal Server Error","message":"sqlite create: no such table: identities: Unable to locate the table"}}

One time I also noticed that the Kratos container just stopped (I was not running mailslurper then and failures to resolve the mailslurper address were the last logged lines. Maybe it had something to do with it?)

This seems to happen when I let the Kratos container sit in the background, while I keep writing and running tests. After I docker compose down and docker compose up -d, everything is fine again.

I imagine this might be something specific to in-memory sqlite?

Reproducing the bug

I can't reproduce it reliably, but here's my docker-compose.yaml for reference:

version: "3.7"
services:
  # ...some other containers

  kratos:
    image: oryd/kratos:v0.8.0-alpha.3-sqlite
    ports:
      - 4433:4433 # public
      - 4434:4434 # admin
    command:
      serve -c /etc/config/kratos/kratos.yml --watch-courier --dev
    volumes:
      - type: bind
        source: cmd/server/testdata/kratos
        target: /etc/config/kratos

  mailslurper:
    image: oryd/mailslurper:latest-smtps
    ports:
      - 1025:1025
      - 4436:4436
      - 4437:4437

Relevant log output

time=2021-12-13T17:44:26Z level=info msg=started handling request http_request=map[headers:map[accept-encoding:gzip content-length:79 content-type:application/json user-agent:Go-http-client/1.1] host:localhost:4434 method:POST path:/identities query:<nil> remote:172.23.0.1:63382 scheme:http]
time=2021-12-13T17:44:26Z level=error msg=An error occurred while handling a request audience=application error=map[debug: message:sqlite create: no such table: identities: Unable to locate the table reason: status:Internal Server Error status_code:500] http_request=map[headers:map[accept-encoding:gzip content-length:79 content-type:application/json user-agent:Go-http-client/1.1] host:localhost:4434 method:POST path:/identities query:<nil> remote:172.23.0.1:63382 scheme:http] http_response=map[status_code:500] service_name=Ory Kratos service_version=v0.8.0-alpha.3
time=2021-12-13T17:44:26Z level=info msg=completed handling request http_request=map[headers:map[accept-encoding:gzip content-length:79 content-type:application/json user-agent:Go-http-client/1.1] host:localhost:4434 method:POST path:/identities query:<nil> remote:172.23.0.1:63382 scheme:http] http_response=map[headers:map[content-type:application/json] size:137 status:500 text_status:Internal Server Error took:15.6455ms]

Relevant configuration

version: v0.8.0-alpha.3

dsn: memory

serve:
  public:
    base_url: http://localhost:4433/
    cors:
      enabled: true
  admin:
    base_url: http://localhost:4434/

selfservice:
  default_browser_return_url: http://localhost:4455/
  whitelisted_return_urls:
    - http://localhost:4455

  methods:
    password:
      enabled: true

  flows:
    error:
      ui_url: http://localhost:4455/error

    settings:
      ui_url: http://localhost:4455/settings
      privileged_session_max_age: 15m

    recovery:
      enabled: true
      ui_url: http://localhost:4455/recovery

    verification:
      enabled: true
      ui_url: http://localhost:4455/verification
      after:
        default_browser_return_url: http://localhost:4455/

    logout:
      after:
        default_browser_return_url: http://localhost:4455/login

    login:
      ui_url: http://localhost:4455/login
      lifespan: 10m

    registration:
      lifespan: 10m
      ui_url: http://localhost:4455/registration
      after:
        password:
          hooks:
            - hook: session

log:
  level: debug
  format: text
  leak_sensitive_values: true

secrets:
  cookie:
    - PLEASE-CHANGE-ME-I-AM-VERY-INSECURE
  cipher:
    - 32-LONG-SECRET-NOT-SECURE-AT-ALL

ciphers:
  algorithm: xchacha20-poly1305

hashers:
  argon2:
    parallelism: 4
    memory: 128MB
    iterations: 2
    salt_length: 16
    key_length: 32
    dedicated_memory: 1GB

identity:
  default_schema_url: file:///etc/config/kratos/identity.schema.json

courier:
  smtp:
    connection_uri: smtps://test:test@mailslurper:1025?skip_ssl_verify=true

Version

v0.8.0-alpha.3

On which operating system are you observing this issue?

macOS

In which environment are you deploying?

Docker Compose

Additional Context

No response

@kszafran kszafran added the bug Something is not working. label Dec 13, 2021
@aeneasr (Member) commented Dec 13, 2021

A DSN of "memory" means "in-memory database", and the data is lost once Kratos restarts. Please use a file for SQLite instead, or MySQL or Postgres! Thank you - hope this helps.

@aeneasr aeneasr closed this as completed Dec 13, 2021
@kszafran (Contributor, Author)

@aeneasr Hm... would Kratos restart on its own? I'm just running it with docker compose in the background. Also, I'm not talking about data disappearing: the Kratos API randomly gives me 500 errors, apparently because some sqlite tables don't exist (even though they did before).

@kszafran (Contributor, Author)

I'm basically writing some tests and running them every now and then. Everything seems to be fine, until at some point I get that 500 error apparently for no reason and have to restart docker compose. It just happened again. It might not happen with Postgres (I haven't checked), but it's still concerning.

@aeneasr (Member) commented Dec 13, 2021

Sorry, I closed this by accident. Hard to say, really. The SQLite adapter is for testing; don't use it in production or whenever you need the DB to persist.

@aeneasr (Member) commented Dec 13, 2021

We use SQLite internally for testing and e2e testing as well; it's very stable and we have never observed any issues. The problem you describe sounds like an issue with the setup or something else, I'm not sure. If you have a reproducible case that you can put on GitHub, that would be very helpful. Otherwise it's very, very difficult to figure out what's going on.

@kszafran (Contributor, Author)

I understand that and I'm using it only for testing. It's probably not going to break our CI, since it seems to work after a fresh start, but it fails occasionally when I run tests locally against docker compose that's been running for some time.

I'll see if I can figure out the conditions to reproduce this and post more info if I do.


@aeneasr aeneasr reopened this Mar 29, 2022
@aeneasr (Member) commented Mar 29, 2022

@kszafran I think I figured out what's going on!

The problem appears to be that Go's SQL connection pooling, in some cases, re-establishes a connection. The new connection then gets its own fresh SQLite in-memory database (per the SQLite docs, every connection gets its own in-memory DB), which of course contains no tables.
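This per-connection behavior is easy to demonstrate. Here is a minimal sketch using Python's stdlib sqlite3 (not Kratos's Go code; the table name `identities` is borrowed from the error above purely for illustration) showing that a second plain `:memory:` connection sees none of the first connection's schema, which is exactly the "no such table" symptom:

```python
import sqlite3

# Every plain ":memory:" connection gets its own private database.
# This mimics what happens when a connection pool silently opens a
# second connection to an in-memory SQLite DSN: the schema created
# on the first connection does not exist on the new one.
conn1 = sqlite3.connect(":memory:")
conn1.execute("CREATE TABLE identities (id TEXT PRIMARY KEY)")
conn1.execute("INSERT INTO identities VALUES ('alice')")

conn2 = sqlite3.connect(":memory:")  # a brand-new, empty database
try:
    conn2.execute("SELECT id FROM identities")
except sqlite3.OperationalError as err:
    print(err)  # no such table: identities
```

The same applies to any client: the in-memory database lives and dies with the individual connection, so a pool that holds more than one connection effectively holds several unrelated databases.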

As a workaround, I have introduced ory/x#487, which appears to resolve these flakes. We have not used this in Ory Kratos yet, though. Sorry for being so critical in my initial reply, but I kept it on the radar :)
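SQLite's shared-cache mode is one common way around this class of flake: connections that open the same named in-memory URI share one database for as long as at least one connection stays open. The sketch below illustrates that general technique in Python's stdlib sqlite3; it is not a claim about the exact change in ory/x#487, and the database name `kratos_test` is made up for the example:

```python
import sqlite3

# "mode=memory&cache=shared" makes all connections to this URI share
# a single in-memory database, so a freshly pooled connection still
# sees tables created earlier (while any connection remains open).
uri = "file:kratos_test?mode=memory&cache=shared"
conn1 = sqlite3.connect(uri, uri=True)
conn1.execute("CREATE TABLE identities (id TEXT PRIMARY KEY)")
conn1.execute("INSERT INTO identities VALUES ('alice')")
conn1.commit()  # release the write lock so other connections can read

conn2 = sqlite3.connect(uri, uri=True)  # shares conn1's database
rows = conn2.execute("SELECT id FROM identities").fetchall()
print(rows)  # [('alice',)]
```

An alternative mitigation is simply capping the pool at a single connection (in Go, `db.SetMaxOpenConns(1)` with no connection lifetime limit), which sidesteps the problem by never opening a second in-memory database in the first place.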

@kszafran (Contributor, Author)

I'm glad you were able to trace the problem! I've been using Postgres for testing lately, so this issue hasn't bothered me, but it was somewhat concerning.

@aeneasr (Member) commented Mar 29, 2022

Nice. I think this problem most likely occurs when using parallelism, since that is the most likely reason for Go's connection pooling to create additional connections. Go's connection pooling arguably shouldn't establish new connections when using in-memory SQLite, but yeah...

aeneasr added a commit that referenced this issue Apr 21, 2022
aeneasr added a commit that referenced this issue Apr 22, 2022
aeneasr added a commit that referenced this issue Apr 29, 2022
peturgeorgievv pushed a commit to senteca/kratos-fork that referenced this issue Jun 30, 2023