Commit
update image tags in examples
sebadob committed Sep 18, 2024
1 parent 1714cbc commit 42aa940
Showing 3 changed files with 29 additions and 7 deletions.
2 changes: 1 addition & 1 deletion CHANGELOG.md
@@ -1,6 +1,6 @@
# Changelog

-## UNRELEASED
+## v0.2.0

This release fixes some usability issues of the initial version. It also brings clearer documentation in a lot of
places.
4 changes: 2 additions & 2 deletions README.md
@@ -505,7 +505,7 @@ spec:
spec:
  containers:
    - name: hiqlite
-     image: ghcr.io/sebadob/hiqlite:0.1.0
+     image: ghcr.io/sebadob/hiqlite:0.2.0
      imagePullPolicy: Always
      securityContext:
        allowPrivilegeEscalation: false
@@ -640,7 +640,7 @@ spec:
spec:
  containers:
    - name: hiqlite-proxy
-     image: ghcr.io/sebadob/hiqlite:0.1.0
+     image: ghcr.io/sebadob/hiqlite:0.2.0
      command: [ "/app/hiqlite", "proxy" ]
      imagePullPolicy: Always
      securityContext:
30 changes: 26 additions & 4 deletions hiqlite/README.md
@@ -186,7 +186,24 @@ raft leader via snapshot + log replication.
Backups will be created locally first on each of the Raft nodes. Afterward, only the leader will encrypt the
backup and push it to the configured S3 bucket for disaster recovery.

-Auto-restoring from a backup on S3 storage will also be possible with this feature enabled.
+Auto-restoring from a backup on S3 storage will also be possible with this feature enabled. The likelihood that
+you will need to do this is pretty low, though.

#### You lose a cluster node

If you lose a cluster node for whatever reason, you don't need a backup. Just shut down the node, get rid of any
possibly leftover data, and restart it. The node will join the cluster and fetch the latest snapshot + logs from
the current leader node.

#### You lose the full cluster

If you end up in a situation where you have lost the complete cluster, that is the only moment when you will
probably need to restore from a backup as disaster recovery. The process is simple:

1. Have the cluster shut down. This is probably the case anyway if you need to restore from a backup.
2. Provide the backup file name on S3 storage with the `HQL_BACKUP_RESTORE` value, as sketched below.
3. Start up the cluster again.
4. After the restart, make sure to remove the `HQL_BACKUP_RESTORE` env value.
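
A minimal sketch of steps 2 and 4, assuming the restore trigger is read from the environment by
`NodeConfig::from_env()` (see the Example Config section below); the crate path, the exact signature, and the
placeholder file name are assumptions for illustration, not confirmed API:

```rust
use hiqlite::NodeConfig;

fn main() {
    // Step 2 in code form: in a real deployment you would set this in the
    // container or service environment, not programmatically.
    std::env::set_var("HQL_BACKUP_RESTORE", "<backup file name on S3>");

    // Assumed wiring: a config built from env vars picks up the restore
    // trigger together with the other HQL_* values at the next start.
    let config = NodeConfig::from_env();
    let _ = config;

    // Step 4: drop the variable after the restore has completed, so a later
    // restart does not trigger another restore.
    std::env::remove_var("HQL_BACKUP_RESTORE");
}
```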

### `cache`

@@ -345,6 +362,12 @@ hiqlite serve -h
The `--node-id` must match a value from `HQL_NODES` inside your config. When you overwrite the node id at startup,
e.g. with `hiqlite serve --node-id 2`, you can re-use the same config for multiple nodes.

### Example Config

Take a look at the [examples](https://github.com/sebadob/hiqlite/tree/main/examples) or the example
[config](https://github.com/sebadob/hiqlite/blob/main/config) to get an idea about the possible config values.
The `NodeConfig` can be created programmatically or built entirely from env vars via `from_env()`.
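
A minimal sketch of both paths, assuming the crate root re-exports `NodeConfig`, that `from_env()` takes no
arguments, and that the node id field is public; everything beyond the names `NodeConfig` and `from_env()` is an
assumption for illustration:

```rust
use hiqlite::NodeConfig;

fn main() {
    // Fully from env vars: reads the HQL_* values shown in the example config.
    let mut config = NodeConfig::from_env();

    // Programmatic override of a single field, mirroring what
    // `hiqlite serve --node-id` does at startup; field name assumed.
    config.node_id = 2;
    let _ = config;
}
```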

### Cluster inside Kubernetes

There is no Helm chart or anything like that yet, but starting the Hiqlite server inside K8s is very simple.
@@ -482,7 +505,7 @@ spec:
spec:
  containers:
    - name: hiqlite
-     image: ghcr.io/sebadob/hiqlite:0.1.0
+     image: ghcr.io/sebadob/hiqlite:0.2.0
      imagePullPolicy: Always
      securityContext:
        allowPrivilegeEscalation: false
@@ -617,8 +640,7 @@ spec:
spec:
  containers:
    - name: hiqlite-proxy
-     # TODO update the repo link to the image hosted publicly on ghcr
-     image: cr.sebadob.dev/hiqlite/hiqlite
+     image: ghcr.io/sebadob/hiqlite:0.2.0
      command: [ "/app/hiqlite", "proxy" ]
      imagePullPolicy: Always
      securityContext:
