Replicate only on WAL changes #81
Comments
Hmm, it should only call […]. I plan on switching out the polling for something like […].
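The library name elided above isn't recoverable from this scrape, but as a rough shell-level illustration of the idea (event-driven syncing instead of a fixed timer, not Litestream's actual implementation), inotify-tools can watch the WAL file and react only when it is written; the path and the sync action are placeholders:

# Requires inotify-tools. Watch the SQLite WAL and react only when it changes,
# instead of waking up on a fixed interval.
inotifywait -m -e modify /path/to/store.db-wal |
while read -r _file _events; do
  echo "WAL changed; trigger a replica sync here"
done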
Thanks for checking it out! If you want to try it on the service where I'm observing it, it's pretty easy to run my service locally under Docker:

# Fill these in
AWS_ACCESS_KEY_ID=YOUR-ACCESS-ID
AWS_SECRET_ACCESS_KEY=YOUR-SECRET-ACCESS-KEY
AWS_REGION=YOUR-REGION
DB_REPLICA_URL=s3://your-bucket-name/db

cd $(mktemp -d)
git clone https://github.com/mtlynch/logpaste.git .
docker build -t logpaste . && \
  docker run \
    -e "AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}" \
    -e "AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}" \
    -e "AWS_REGION=${AWS_REGION}" \
    -e "DB_REPLICA_URL=${DB_REPLICA_URL}" \
    logpaste

It will create a web interface listening on port 3001. It only writes to the database when you visit the web dashboard, enter something in the textarea, and hit "upload."

litestream-relevant code: […]
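One low-tech way to check whether anything is actually being uploaded while the app sits idle (an aside, not part of the original repro steps) is to list the replica prefix with the AWS CLI and watch whether the object timestamps keep advancing. This assumes the CLI is configured with the same credentials and that the path matches the DB_REPLICA_URL above:

# List everything under the replica prefix; re-run while the app is idle and
# compare timestamps to see whether new WAL segments keep appearing.
aws s3 ls --recursive s3://your-bucket-name/db/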
@mtlynch I'm getting this error since the database doesn't exist on my bucket: […]
I tried reproducing the issue with a local generation tool that I have, but I only saw […]. You can enable the metrics endpoint by adding addr: :9090 to your config. And then I ran a […].

That shows a count of the […]. I'd be surprised if it's written when there are no writes to the database, as the […].
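Once the metrics endpoint is up, the counters can be inspected directly with curl. The /metrics path and the exact counter names here are assumptions (they may vary by Litestream version), so the filter is deliberately broad:

# Dump the Prometheus-style metrics and filter for anything that looks like
# S3/sync/WAL operation counters.
curl -s localhost:9090/metrics | grep -iE 's3|sync|wal'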
Hmm, I'm not sure how to get the metrics endpoint to run. I think my config is right, but curl sees nothing:

$ cat /home/mike/litestream.yml
# AWS credentials
access-key-id: redacted
secret-access-key: redacted
region: us-east-2
addr: :9090
dbs:
- path: /home/mike/go/src/github.com/mtlynch/logpaste/data/store.db
replicas:
- url: s3://scratch.tinypilotkvm.com/db

$ litestream replicate -trace /dev/stdout -config /home/mike/litestream.yml "${PWD}/data/store.db" s3://scratch.tinypilotkvm.com/db
litestream v0.3.2
initialized db: /home/mike/go/src/github.com/mtlynch/logpaste/data/store.db
replicating to: name="s3" type="s3" bucket="scratch.tinypilotkvm.com" path="db" region=""
2021/02/25 02:02:29 db.go:732: /home/mike/go/src/github.com/mtlynch/logpaste/data/store.db: sync: info=litestream.syncInfo{generation:"522a4671017a1edb", dbModTime:time.Time{wall:0xeadf8c1, ext:63749815127, loc:(*time.Location)(0x173a8a0)}, walSize:82432, walModTime:time.Time{wall:0xf65138c, ext:63749815127, loc:(*time.Location)(0x173a8a0)}, shadowWALPath:"/home/mike/go/src/github.com/mtlynch/logpaste/data/.store.db-litestream/generations/522a4671017a1edb/wal/00000001.wal", shadowWALSize:4152, restart:false, reason:""}
2021/02/25 02:02:29 db.go:989: /home/mike/go/src/github.com/mtlynch/logpaste/data/store.db: copy-shadow: /home/mike/go/src/github.com/mtlynch/logpaste/data/.store.db-litestream/generations/522a4671017a1edb/wal/00000001.wal
2021/02/25 02:02:29 db.go:1054: /home/mike/go/src/github.com/mtlynch/logpaste/data/store.db: copy-shadow: break: salt mismatch
2021/02/25 02:02:29 db.go:802: /home/mike/go/src/github.com/mtlynch/logpaste/data/store.db: sync: ok
2021/02/25 02:02:30 db.go:732: /home/mike/go/src/github.com/mtlynch/logpaste/data/store.db: sync: info=litestream.syncInfo{generation:"522a4671017a1edb", dbModTime:time.Time{wall:0xeadf8c1, ext:63749815127, loc:(*time.Location)(0x173a8a0)}, walSize:82432, walModTime:time.Time{wall:0xf65138c, ext:63749815127, loc:(*time.Location)(0x173a8a0)}, shadowWALPath:"/home/mike/go/src/github.com/mtlynch/logpaste/data/.store.db-litestream/generations/522a4671017a1edb/wal/00000001.wal", shadowWALSize:4152, restart:false, reason:""}
2021/02/25 02:02:30 db.go:989: /home/mike/go/src/github.com/mtlynch/logpaste/data/store.db: copy-shadow: /home/mike/go/src/github.com/mtlynch/logpaste/data/.store.db-litestream/generations/522a4671017a1edb/wal/00000001.wal
2021/02/25 02:02:30 db.go:1054: /home/mike/go/src/github.com/mtlynch/logpaste/data/store.db: copy-shadow: break: salt mismatch
2021/02/25 02:02:30 db.go:802: /home/mike/go/src/github.com/mtlynch/logpaste/data/store.db: sync: ok

$ curl localhost:9090
curl: (7) Failed to connect to localhost port 9090: Connection refused

Those […]
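One guess worth testing (an assumption on my part, consistent with #85 filed below): when the database path and replica URL are passed as positional arguments, the rest of the config file, including the addr setting, may not be applied. Starting replication from the config file alone would show whether the metrics listener comes up:

# Let everything (dbs, replicas, and the metrics addr) come from the config file.
litestream replicate -config /home/mike/litestream.yml

# In another terminal, check the metrics listener.
curl -s localhost:9090/metrics | head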
@mtlynch Hmm, that looks right for the […].

I was testing on […]. As for […].
I realized that's a separate issue. I just filed #85 to track. I'll tinker more with it now that I figured out how to get metrics to serve. Something's going crazy, though, because this is the only app I'm using with AWS, and I've generated […].
I got metrics working and it's showing only small numbers for all S3 operations. Meanwhile, S3 keeps reporting hundreds of thousands of PUTs every day, so I'm not sure what's going on. I'm going to cycle my IAM keys in case there's a zombie process somewhere going nuts. I'll reopen this bug if it looks like litestream is indeed generating too many requests.

I realized I was misreading the numbers. I thought it was 1,310,000 (1.3M), but it's actually 1,310 (1.3k). I'm not sure why AWS lists numbers of requests as a decimal to three places... It's still higher than I'd expect, but not as dramatically as I thought.
Thanks again for this software!
I'm using litestream with a service that has infrequent database writes (like a handful of times per day).
I noticed my AWS dashboard reporting many PUTs, revisited the documentation, and realized that litestream replicates the WAL every 10s.
A nice-to-have feature would be for litestream to skip WAL replication when no local changes have occurred since the last sync.
An even nicer feature for my scenario would be a "sync only on write" mode where, instead of replicating every N seconds, it replicates immediately after each change to the WAL but otherwise does not replicate.
Low priority, since my understanding is that even 300k S3 PUTs only costs ~$1.50/month, but it'd be cool if litestream could run completely in the free tier for infrequent-write scenarios.
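For reference on that cost estimate: at the standard S3 rate of roughly $0.005 per 1,000 PUT requests, 300,000 PUTs works out to 300 × $0.005 ≈ $1.50 per month, which matches the figure above. Staying entirely inside the free tier would require keeping PUTs orders of magnitude lower, which is what a change-driven sync would make possible for a low-write workload.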