Need Help with Maintenance? #19

Closed

sbrandtb opened this issue Oct 15, 2019 · 14 comments

Comments

@sbrandtb

It seems to be used by some people, yet pull requests have been stuck for half a year.

@ktosiek what's the status? Need any help?

@ktosiek
Owner

ktosiek commented Jan 28, 2020

I'm going through the issues and PRs today, and I hope to make a release this week (maybe even this evening, but let's not get our hopes too high :-)). I'm not ready to transfer maintenance just yet, but any help with triaging issues and PRs is welcome - something like what @dimaqq did on #20 is both helpful and motivating.

If it turns out I can't keep myself working on this project then I'll look for new maintainers, or a transfer to the pytest-dev organization.

@ktosiek ktosiek closed this as completed Jan 28, 2020
@ktosiek ktosiek pinned this issue Jan 28, 2020
@thijskramer

thijskramer commented Dec 28, 2021

@ktosiek I'd like to quote @sbrandtb:

It seems to be used by some people, yet pull requests have been stuck for half a year.

@ktosiek what's the status? Need any help?

I'm happy to help!

@wimglenn

Hey @ktosiek, pytest-dev org member here. First, thank you for making this really useful plugin :)
Any further thoughts on transferring to the pytest-dev organization for maintenance? Currently everyone on Python 3.10+ is getting deprecation warnings from the plugin.

@agronholm

If it turns out I can't keep myself working on this project then I'll look for new maintainers, or a transfer to the pytest-dev organization.

@ktosiek Just another ping – transferring this project would really help it get updated!

@kennylajara

@ktosiek Please consider @wimglenn's idea.
It would be in good hands. It's probably the best option if you don't want to see your project die and you don't have time to maintain it.

@wimglenn

I contacted Tomasz again via the email on his GitHub landing page, but did not get any response after some time. I hope he is OK :-\

To move forward from these deprecations I decided to simply rewrite instead of fork, since the plugin code was so short and simple. A drop-in replacement is now on PyPI as pytest-freezer and the source repo is https://github.com/wimglenn/pytest-freezer.

@BjoernPetersen

BjoernPetersen commented Oct 18, 2022

Thanks for taking over @wimglenn! Is there a chance to move the new project into the pytest-dev organization? Just to avoid a similar situation in the future.

Edit: I'm not trying to insinuate that you won't maintain the project! Just thinking of the bus factor.

@wimglenn

wimglenn commented Oct 18, 2022

Yes, good point, I've transferred it to https://github.com/pytest-dev/pytest-freezer. The old URL should redirect.

@mgorny

mgorny commented Apr 8, 2023

To move forward from these deprecations I decided to simply rewrite instead of fork, since the plugin code was so short and simple. A drop-in replacement is now on PyPI as pytest-freezer and the source repo is https://github.com/wimglenn/pytest-freezer.

Honestly, I'd have preferred if you actually forked it and requested transfer of PyPI. It will take years before the few projects needing it migrate, not to mention all the hassle of "but pytest-freezegun works for me, why do you bother me!"

@merwok

merwok commented Apr 8, 2023

If a project is using pytest-freezegun without issue, what’s the problem then?
When they encounter problem with pytest deprecation messages or packaging issues, then they’ll have a reason to switch.

@mgorny

mgorny commented Apr 8, 2023

The problem is that Linux distributions will have to maintain two redundant packages for a prolonged period of time, and we already have to patch pytest-freezegun.

@agronholm

Hard to do anything about that when the original author isn't responding.

@mgorny

mgorny commented Apr 8, 2023

Isn't that why PyPI has a process for reclaiming packages?

@agronholm

Ah, quite right. And this project does in fact meet the requirements outlined in PEP 541.

freebsd-git pushed a commit to freebsd/freebsd-ports that referenced this issue Jul 17, 2023
- Upstream has abandoned the project. For more information see:
  ktosiek/pytest-freezegun#19 (comment)
- Set EXPIRATION_DATE to 2023-08-16
lyz-code added a commit to lyz-code/blue-book that referenced this issue Oct 29, 2024
alephclient is a command-line client for Aleph. It can be used to bulk import structured data and files and more via the API, without direct access to the server.

**[Installation](https://docs.aleph.occrp.org/developers/how-to/data/install-alephclient/#how-to-install-the-alephclient-cli)**

You can install `alephclient` using pip, although I recommend using `pipx` instead:

```bash
pipx install alephclient
```

`alephclient` needs to know the URL of the Aleph instance to connect to. For privileged operations (e.g. accessing private datasets or writing data), it also needs your API key. You can find your API key in your user profile in the Aleph UI.

Both settings can be provided by setting the environment variables `ALEPHCLIENT_HOST` and `ALEPHCLIENT_API_KEY`, respectively, or by passing them in with `--host` and `--api-key` options.

```bash
export ALEPHCLIENT_HOST=https://aleph.occrp.org/
export ALEPHCLIENT_API_KEY=YOUR_SECRET_API_KEY
```
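
The same settings can also be passed as command line options instead of environment variables (a sketch; it assumes the global `--host` and `--api-key` options go before the subcommand, and reuses the `crawldir` example explained below):

```bash
# Pass the connection settings explicitly instead of using environment variables
alephclient --host https://aleph.occrp.org/ --api-key YOUR_SECRET_API_KEY \
  crawldir --foreign-id wikileaks-cable /Users/sunu/data/cable
```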

You can now start using `alephclient`, for example to upload an entire directory to Aleph.

**[Upload an entire directory to Aleph](https://docs.aleph.occrp.org/developers/how-to/data/upload-directory/)**
While you can upload multiple files and even entire directories at once via the Aleph UI, using the `alephclient` CLI allows you to upload files in bulk much more quickly and reliably.

Run the following `alephclient` command to upload an entire directory to Aleph:

```bash
alephclient crawldir --foreign-id wikileaks-cable /Users/sunu/data/cable
```

This will upload all files in the directory `/Users/sunu/data/cable` (including its subdirectories) into an investigation with the foreign ID `wikileaks-cable`. If no investigation with this foreign ID exists, a new investigation is created (in theory, but it didn't work for me, so manually create the investigation and then copy its foreign ID).

If you’d like to import data into an existing investigation and do not know its foreign ID, you can find the foreign ID in the Aleph UI. Navigate to the investigation homepage. The foreign ID is listed in the sidebar on the right.

feat(aleph#Other tools for the ecosystem): Other tools for the ecosystem
[Investigraph](https://investigativedata.github.io/investigraph/) is an ETL framework that allows research teams to build their own data catalog as easily and reproducibly as possible. The investigraph framework provides logic for extracting, transforming and loading any data source into followthemoney entities.

For most common data source formats, this process is possible without programming knowledge, by means of an easy yaml specification interface. However, if it turns out that a specific dataset cannot be parsed with the built-in logic, a developer can plug in custom python scripts at specific places within the pipeline to handle even the most complex edge cases in data processing.

feat(antiracism#Referencias): New interesting article

- [The anti-racist origin of the word `woke`](https://www.lamarea.com/2024/08/27/el-origen-antirracista-de-lo-woke/)

feat(antitourism#Libros): New interesting books

- [Verano sin vacaciones. Las hijas de la Costa del Sol, by Ana Geranios](https://piedrapapellibros.com/producto/verano-sin-vacaciones-las-hijas-de-la-costa-del-sol/)
- [Estuve aquí y me acordé de nosotros, by Anna Pacheco](https://www.anagrama-ed.es/libro/nuevos-cuadernos-anagrama/estuve-aqui-y-me-acorde-de-nosotros/9788433922304/NCA_68)

feat(apprise): Introduce Apprise

[Apprise](https://github.com/caronc/apprise) is a notification library that offers a unified way to send notifications across various platforms. It supports multiple notification services and simplifies the process of integrating notifications into your applications.

Apprise supports various notification services including:

- [Email](https://github.com/caronc/apprise/wiki/Notify_email#using-custom-servers-syntax)
- SMS
- Push notifications
- Webhooks
- And more

Each service requires specific configurations, such as API keys or server URLs.

**Installation**

To use Apprise, you need to install the package via pip:

```bash
pip install apprise
```

**Configuration**

Apprise supports a range of notification services. You can configure notifications by adding service URLs with the appropriate credentials and settings.

For example, to set up email notifications, you can configure it like this:

```python
import apprise

apobj = apprise.Apprise()

apobj.add("mailto://user:password@smtp.example.com:587/")

apobj.notify(
    body="This is a test message.",
    title="Test notification",
)
```

**Sending notifications**

To send a notification, use the `notify` method. This method accepts parameters such as `body` for the message content and `title` for the notification title.

Example:

```python
apobj.notify(
    body="Here is the message content.",
    title="Notification title",
)
```

**References**
- [Home](https://github.com/caronc/apprise)
- [Docs](https://github.com/caronc/apprise/wiki)
- [Source](https://github.com/caronc/apprise)

feat(argocd): Reasons to use it

I'm using Argo CD as the GitOps tool, because:

1. It is a CNCF project, so it is a well-maintained project.
2. I have positive feedback from other mates who are using it.
3. It is a mature project, so you can expect good support from the community.

I also took into consideration other tools like
[Flux](https://fluxcd.io/), [spinnaker](https://spinnaker.io/) or
[Jenkins X](https://jenkins-x.io/) before taking this decision.

feat(argocd#Difference between sync and refresh): Difference between sync and refresh

Some good articles to understand it are:

- https://danielms.site/zet/2023/argocd-refresh-v-sync/
- https://argo-cd.readthedocs.io/en/stable/core_concepts/
- https://github.com/argoproj/argo-cd/discussions/8260
- https://github.com/argoproj/argo-cd/discussions/12237

feat(argocd#Configure the git webhook to speed up the sync): Configure the git webhook to speed up the sync

It still doesn't work [for git webhook on Applicationsets for gitea/forgejo](https://github.com/argoproj/argo-cd/issues/18798).

feat(argocd#Import already deployed helm): Import already deployed helm

Some good articles to understand it are:

- https://github.com/argoproj/argo-cd/issues/10168
- https://github.com/argoproj/argo-cd/discussions/8647
- https://github.com/argoproj/argo-cd/issues/2437#issuecomment-542244149

feat(argocd#Migrate from helmfile to argocd): Migrate from helmfile to argocd

This section provides a step-by-step guide to migrate an imaginary deployment. It is not real and should be adapted to the actual deployment you want to migrate. It tries to be as simple as possible; there are some tips and tricks later in this document for complex scenarios.

1. **Select a deployment to migrate**
    Once you have decided the deployment to migrate, you have to decide where it belongs (bootstrap, kube-system, monitoring, applications, or managed by a team).
    Go to the helmfile repository and find the deployment you want to migrate.
2. **Use any of the previous created deployments in the same section as a template**
    Just copy it with the new name and ensure it has all the components you will need:
      - The `Chart.yaml` file will handle the chart repository, version, and, in some cases, the name.
      - The `values.yaml` file will handle the shared values among environments for the deployment.
      - The `values-<env>.yaml` file will handle the environment-specific values.
      - The `secrets.yaml` file will handle the secrets for the deployment (for the current environment).
      - The `templates` folder will handle the Kubernetes resources for the deployment, in helmfile we use the raw chart for this.
3. **Create the `Chart.yaml` file**
    This file is composed by the following fields:
    ```yaml
    apiVersion: v2
    name: kube-system # The name of the deployment
    version: 1.0.0 # The version of the deployment
    dependencies: # The dependencies of the deployment
      - name: ingress-nginx # The name of the chart to deploy
        version: "4.9.1" # The version of the chart to deploy
        repository: "https://kubernetes.github.io/ingress-nginx" # The repository of the chart to deploy
    ```
    You can find the name of the chart in the `helmfile.yaml` file in the helmfile repository; it is under the `chart` key of the release. If it is named something like `ingress-nginx/ingress-nginx`, the chart name is the second part of the value; the first part is the local alias for the repository.
    For the version and the repository, the most straightforward way is to go to the `helmfile.lock` file next to the `helmfile.yaml` and search for its entry. The version is under the `version` key and the repository is under the `repository` key.

4. **Create the `values.yaml` and `values-<env>.yaml` files**
    For the `values.yaml` file, you can copy the `values.yaml` file from the helmfile repository, but its content has to be under a key named after the chart name in the `Chart.yaml` file.
    ```yaml
    ingress-nginx:
      controller:
        service:
          annotations:
            service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    [...]
    ```
    With the migration we have lost the Go templating capabilities, so I would recommend opening the new `values.yaml` side by side with the new `values-<env>.yaml`, moving the values from the `values.yaml` to the `values-<env>.yaml` when needed, and filling the templated values with the real values. It is a pity, we know. Also remember that the `values-<env>.yaml` content needs to be under the same key as the `values.yaml` content.
    ```yaml
    ingress-nginx:
      controller:
        service:
          annotations:
            service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:123456789012:certificate/12345678-1234-1234-1234-123456789012
    [...]
    ```
    After this you can copy the content of the environment-specific values from the helmfile to the new `values-<env>.yaml` file. Remember to resolve the templated values with the real values.
5. **Create the `secrets.yaml` file**
    The `secrets.yaml` file contains the secrets for the deployment. You can copy the secrets from the helmfile repository to the `secrets.yaml` file in the Argo CD repository, but you have to do the same as we did with the `values.yaml` and `values-<env>.yaml` files: everything that configures the chart deployment has to be under a key named after the chart.
    Just a heads up, the secrets are not shared among environments, so you have to create this file for each environment you have (staging, production, etc.).
6. **Create the `templates` folder**
    If there is any use of the raw chart in the helmfile repository, you have to copy the content of the values file used by the raw chart into one file per resource in the `templates` folder. Remember that the raw chart requires everything to be under a key, while this is a template, so you have to remove that key and unindent the file.
    As a best practice, if there were some variables in the raw chart, you can still use them here: you just have to create the variables in the `values.yaml` or `values-<env>.yaml` files at the top level of the yaml hierarchy, and the templates will be able to use them. This also works for the secrets, and it helps a lot to avoid repeating ourselves. As an example you can check the next template:

    ```yaml
    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-prod
    spec:
      acme:
        email: your-email@example.org
        server: https://acme-v02.api.letsencrypt.org/directory
        privateKeySecretRef:
          name: letsencrypt-prod
        solvers:
        - selector:
            dnsZones:
              - service-{{.Values.environment}}.example.org
          dns01:
            route53:
              region: us-east-1
    ```

    And this `values-staging.yaml` file:

    ```yaml
    environment: staging
    cert-manager:
      serviceAccount:
        annotations:
          eks.amazonaws.com/role-arn: arn:aws:iam::XXXXXXXXXXXX:role/staging-cert-manager
    ```
7. **Commit your changes**
    Once you have created all the files, you have to commit them to the Argo CD repository. You can use the following commands to commit the changes:
    ```bash
    git add .
    git commit -m "Migrate deployment <my deployment> from Helmfile to Argo CD"
    git push
    ```
8. **Create the PR and wait for the review**
    Once you have committed the changes, you have to create a PR in the Argo CD repository.
    After creating the PR, you have to wait for the review and approval from the team.
9. **Merge the PR and wait for the deployment**
    Once the PR has been approved, you have to merge it and wait for the refresh to be triggered by Argo CD.
    We don't have auto-sync yet, so you have to go to the deployment, manually check the diff and sync the deployment if everything is fine.
10. **Check the deployment**
    Once the deployment has been synced, you have to check the deployment in the Kubernetes cluster to ensure that everything is working as expected.
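
A minimal sketch of that check from the command line, assuming you have `kubectl` access to the cluster and that the migrated deployment lives in a hypothetical `ingress-nginx` namespace with a hypothetical controller deployment name:

```bash
# Watch the pods of the migrated deployment until they are Running and Ready
kubectl get pods -n ingress-nginx -w

# If something looks off, inspect the events and the recent logs
kubectl describe pods -n ingress-nginx
kubectl logs -n ingress-nginx deployment/ingress-nginx-controller --tail=50
```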

feat(argocd#You need to deploy a docker image from a private registry): You need to deploy a docker image from a private registry

This is a common scenario: you have to deploy a chart that uses a Docker image from a private registry. You have to create a template file with the credentials secret and keep the secret value in the `secrets.yaml` file.

`registry-credentials.yaml`:
```yaml
---
apiVersion: v1
data:
  .dockerconfigjson: {{ .Values.regcred }}
kind: Secret
metadata:
  name: regcred
  namespace: drawio
type: kubernetes.io/dockerconfigjson
```

`secrets.yaml`:

```yaml
regcred: XXXXX
```

feat(argocd#You have to deploy multiple charts within the same deployment): You have to deploy multiple charts within the same deployment

As a limitation of our deployment strategy, in some scenarios the name of the namespace is set to the directory name of the deployment, so you have to deploy every chart of the same deployment in the same `namespace/directory`. You can do this by using multiple dependencies in the `Chart.yaml` file. For example, if you want an internal docker-registry and also a docker-registry-proxy to avoid the rate limiting of Docker Hub, you can have:

```yaml
---
apiVersion: v2
name: infra
version: 1.0.0
dependencies:
  - name: docker-registry
    version: 2.2.2
    repository: https://helm.twun.io
    alias: docker-registry
  - name: docker-registry
    version: 2.2.2
    repository: https://helm.twun.io
    alias: docker-registry-proxy
```

`values.yaml`:

```yaml
docker-registry:
  ingress:
    enabled: true
    className: nginx
    path: /
    hosts:
      - registry.example.org
    annotations:
      nginx.ingress.kubernetes.io/proxy-body-size: "0"
      cert-manager.io/cluster-issuer: letsencrypt-prod
      cert-manager.io/acme-challenge-type: dns01
    tls:
      - secretName: registry-tls
        hosts:
          - registry.example.org
docker-registry-proxy:
  ingress:
    enabled: true
    className: open-internally
    path: /
    hosts:
      - registry-proxy.example.org
    annotations:
      nginx.ingress.kubernetes.io/proxy-body-size: "0"
      cert-manager.io/cluster-issuer: letsencrypt-prod
      cert-manager.io/acme-challenge-type: dns01
    tls:
      - secretName: registry-proxy-tls
        hosts:
          - registry-proxy.example.org
```

feat(argocd#You need to deploy a chart in an OCI registry): You need to deploy a chart in an OCI registry

It is pretty straightforward; you just have to keep in mind that the helmfile repository specifies the chart in the URL, while our ArgoCD definition only needs the repository, and the chart name is defined in the name of the dependency. So in helmfile you will find something like this:
```yaml
  - name: karpenter
    chart: oci://public.ecr.aws/karpenter/karpenter
    version: v0.32.7
    namespace: kube-system
    values:
      - karpenter/values.yaml.gotmpl
```

And in the ArgoCD repository you will find something like this:

```yaml
dependencies:
  - name: karpenter
    version: v0.32.7
    repository: "oci://public.ecr.aws/karpenter"
```

feat(argocd#An object is being managed by the deployment and ArgoCD is trying to manage delete it): An object is being managed by the deployment and ArgoCD is trying to manage (delete) it

Some deployments create their own objects and add their tags to them, so ArgoCD tries to manage them, but as they are not defined in the ArgoCD repository, it tries to delete them. You can handle this situation by telling ArgoCD to ignore those objects. For example, you can exclude the backups management:

```yaml
argo-cd:
  # https://github.com/argoproj/argo-helm/blob/main/charts/argo-cd/values.yaml
  configs:
    # General Argo CD configuration
    ## Ref: https://github.com/argoproj/argo-cd/blob/master/docs/operator-manual/argocd-cm.yaml
    cm:
      resource.exclusions: |
        - apiGroups:
          - "*"
          kinds:
          - Backup
          clusters:
          - "*"
```

feat(argocd#When something is not syncing): When something is not syncing

If something is not syncing, you can check the logs under the `sync status` button in the Argo CD UI; this will give you a hint of what is happening. For common scenarios you can:

- Delete the failing resource (deployment, configmap, secret) and sync it again. **Never delete a statefulset** as it will delete the data.
- Set some "advanced" options in the sync, like `force`,  `prune` or `replace` to force the sync of the objects unwilling to sync.
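
A sketch of forcing that sync from the command line with the `argocd` CLI instead of the UI, assuming you are logged in to your instance and `<yourDeployment>` is the application name:

```bash
# Sync the application, pruning resources that are no longer defined and
# forcing the recreation of the ones that refuse to sync
argocd app sync <yourDeployment> --prune --force
```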

feat(argocd#You have to deploy the ingress so you will lose access to the Argo CD UI): You have to deploy the ingress so you will lose access to the Argo CD UI

This is tricky, because the ingress is one of these cases where you have to delete the deployments and sync them again, but once you delete the deployment there is no ingress, so there is no way to access the Argo CD UI. You can handle this situation in at least two ways:
- Set a retry option in the synchronization of the deployment, so you can delete the deployment and the sync will happen again in a few seconds.
- Force a sync using kubectl, instead of the UI. You can do this by running the following command:
  ```bash
  kubectl patch application <yourDeployment> -n argocd --type=merge -p '{"operation": {"initiatedBy": { "username": "<yourUserName>"},"sync": { "syncStrategy": null, "hook": {} }}}'
  ```

fix(bash_snippets#Fix docker error: KeyError ContainerConfig): Fix docker error: KeyError ContainerConfig

A workaround is to run `docker-compose down` and then up again. The real solution is to upgrade Docker and use `docker compose` (the compose plugin) instead.
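
A sketch of both options, assuming a Debian-based host that installs Docker from Docker's apt repository:

```bash
# Workaround: recreate the containers from scratch with the legacy binary
docker-compose down
docker-compose up -d

# Real fix: upgrade Docker and switch to the compose plugin
sudo apt-get update
sudo apt-get install --only-upgrade docker-ce docker-ce-cli docker-compose-plugin
docker compose up -d
```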

feat(board_games#Online board gaming pages): Online board gaming pages

- [Roll20](https://roll20.net/)
- [Foundry](https://foundryvtt.com/)

feat(book_management#Convert pdf to epub): Convert pdf to epub

This is a nasty operation; my suggestion is to convert it with Calibre and then play with the [Search and replace](https://manual.calibre-ebook.com/conversion.html#search-replace) regular expressions with the wand. With this tool you can remove headers, footers, or other arbitrary text. Remember that they operate on the intermediate XHTML produced by the conversion pipeline. There is a wizard to help you customize the regular expressions for your document. Click the magic wand beside the expression box, and click the 'Test' button after composing your search expression. Successful matches will be highlighted in yellow.

The search works by using a Python regular expression. All matched text is simply removed from the document or replaced using the replacement pattern. The replacement pattern is optional, if left blank then text matching the search pattern will be deleted from the document.

feat(cadvisor): Introduce cAdvisor

[cAdvisor](https://github.com/google/cadvisor) (Container Advisor) provides container users an understanding of the resource usage and performance characteristics of their running containers. It is a running daemon that collects, aggregates, processes, and exports information about running containers. Specifically, for each container it keeps resource isolation parameters, historical resource usage, histograms of complete historical resource usage and network statistics. This data is exported by container and machine-wide.

cAdvisor has native support for Docker containers and should support just about any other container type out of the box.

**Try it out**

To quickly try out cAdvisor on your machine with Docker, there is a Docker image that includes everything you need to get started. You can run a single cAdvisor to monitor the whole machine. Simply run:

```bash
VERSION=v0.49.1 # use the latest release version from https://github.com/google/cadvisor/releases
sudo docker run \
  --volume=/:/rootfs:ro \
  --volume=/var/run:/var/run:ro \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  --volume=/dev/disk/:/dev/disk:ro \
  --publish=8080:8080 \
  --detach=true \
  --name=cadvisor \
  --privileged \
  --device=/dev/kmsg \
  gcr.io/cadvisor/cadvisor:$VERSION
```
**Installation**

You can check all the configuration flags [here](https://github.com/google/cadvisor/blob/master/docs/runtime_options.md#metrics).

**With docker compose**

* Create the data directories:
  ```bash
  mkdir -p /data/cadvisor/
  ```
* Copy the `docker/docker-compose.yaml` to `/data/cadvisor/docker-compose.yaml`.
  ```yaml
  ---
  services:
    cadvisor:
      image: gcr.io/cadvisor/cadvisor:latest
      restart: unless-stopped
      privileged: true
      # command:
      # # tcp and udp create high CPU usage, disk does CPU hungry ``zfs list``
      # - '--disable_metrics=tcp,udp,disk'
      volumes:
        - /:/rootfs:ro
        - /var/run:/var/run:ro
        - /sys:/sys:ro
        - /var/lib/docker/:/var/lib/docker:ro
        - /dev/disk:/dev/disk:ro
      # ports:
      #   - "8080:8080"
      devices:
        - /dev/kmsg:/dev/kmsg
      networks:
        - monitorization

  networks:
    monitorization:
      external: true
  ```

  If Prometheus is not running on the same instance as cAdvisor, expose the port and remove the network.
* Create the docker networks (if they don't exist):
    * `docker network create monitorization`
* Copy the `service/cadvisor.service` into `/etc/systemd/system/`
  ```
  [Unit]
  Description=cadvisor
  Requires=docker.service
  After=docker.service

  [Service]
  Restart=always
  User=root
  Group=docker
  WorkingDirectory=/data/cadvisor
  TimeoutStartSec=100
  RestartSec=2s
  ExecStart=/usr/bin/docker compose -f docker-compose.yaml up
  ExecStop=/usr/bin/docker compose -f docker-compose.yaml down

  [Install]
  WantedBy=multi-user.target
  ```
* Start the service `systemctl start cadvisor`
* If needed enable the service `systemctl enable cadvisor`.
* Scrape the metrics with Prometheus:
  - If both containers share the machine and the Docker network:
    ```yaml
    scrape_configs:
      - job_name: cadvisor
        metrics_path: /metrics
        static_configs:
          - targets:
            - cadvisor:8080
        # Relabels needed for the grafana dashboard
        # https://grafana.com/grafana/dashboards/15798-docker-monitoring/
        metric_relabel_configs:
          - source_labels: ['container_label_com_docker_compose_project']
            target_label: 'service'
          - source_labels: ['name']
            target_label: 'container'
    ```

**[Deploy the alerts](https://samber.github.io/awesome-prometheus-alerts/rules#docker-containers)**

```yaml
---
groups:
- name: cAdvisor rules
  rules:
    # This rule can be very noisy in dynamic infra with legitimate container start/stop/deployment.
    - alert: ContainerKilled
      expr: min by (name, service) (time() - container_last_seen{container=~".*"}) > 60
      for: 0m
      labels:
        severity: warning
      annotations:
        summary: Container killed (instance {{ $labels.instance }})
        description: "A container has disappeared\n  VALUE = {{ $value }}\n  LABELS = {{ $labels }}"

    # This rule can be very noisy in dynamic infra with legitimate container start/stop/deployment.
    - alert: ContainerAbsent
      expr: absent(container_last_seen{container=~".*"})
      for: 5m
      labels:
        severity: warning
      annotations:
        summary: Container absent (instance {{ $labels.instance }})
        description: "A container is absent for 5 min\n  VALUE = {{ $value }}\n  LABELS = {{ $labels }}"

    - alert: ContainerHighCpuUtilization
      expr: (sum(rate(container_cpu_usage_seconds_total{container!=""}[5m])) by (pod, container) / sum(container_spec_cpu_quota{container!=""}/container_spec_cpu_period{container!=""}) by (pod, container) * 100) > 80
      for: 2m
      labels:
        severity: warning
      annotations:
        summary: Container High CPU utilization (instance {{ $labels.instance }})
        description: "Container CPU utilization is above 80%\n  VALUE = {{ $value }}\n  LABELS = {{ $labels }}"

      # See https://medium.com/faun/how-much-is-too-much-the-linux-oomkiller-and-used-memory-d32186f29c9d
    - alert: ContainerHighMemoryUsage
      expr: (sum(container_memory_working_set_bytes{name!=""}) BY (instance, name) / sum(container_spec_memory_limit_bytes > 0) BY (instance, name) * 100) > 80
      for: 2m
      labels:
        severity: warning
      annotations:
        summary: Container High Memory usage (instance {{ $labels.instance }})
        description: "Container Memory usage is above 80%\n  VALUE = {{ $value }}\n  LABELS = {{ $labels }}"

    # I feel that this is monitored well with the node exporter
    # - alert: ContainerVolumeUsage
    #   expr: (1 - (sum(container_fs_inodes_free{name!=""}) BY (instance) / sum(container_fs_inodes_total) BY (instance))) * 100 > 80
    #   for: 2m
    #   labels:
    #     severity: warning
    #   annotations:
    #     summary: Container Volume usage (instance {{ $labels.instance }})
    #     description: "Container Volume usage is above 80%\n  VALUE = {{ $value }}\n  LABELS = {{ $labels }}"

    - alert: ContainerHighThrottleRate
      expr: sum(increase(container_cpu_cfs_throttled_periods_total{container!=""}[5m])) by (container, pod, namespace) / sum(increase(container_cpu_cfs_periods_total[5m])) by (container, pod, namespace) > ( 25 / 100 )
      for: 5m
      labels:
        severity: warning
      annotations:
        summary: Container high throttle rate (instance {{ $labels.instance }})
        description: "Container is being throttled\n  VALUE = {{ $value }}\n  LABELS = {{ $labels }}"

    - alert: ContainerLowCpuUtilization
      expr: (sum(rate(container_cpu_usage_seconds_total{container!=""}[5m])) by (pod, container) / sum(container_spec_cpu_quota{container!=""}/container_spec_cpu_period{container!=""}) by (pod, container) * 100) < 20
      for: 7d
      labels:
        severity: info
      annotations:
        summary: Container Low CPU utilization (instance {{ $labels.instance }})
        description: "Container CPU utilization is under 20% for 1 week. Consider reducing the allocated CPU.\n  VALUE = {{ $value }}\n  LABELS = {{ $labels }}"

    - alert: ContainerLowMemoryUsage
      expr: (sum(container_memory_working_set_bytes{name!=""}) BY (instance, name) / sum(container_spec_memory_limit_bytes > 0) BY (instance, name) * 100) < 20
      for: 7d
      labels:
        severity: info
      annotations:
        summary: Container Low Memory usage (instance {{ $labels.instance }})
        description: "Container Memory usage is under 20% for 1 week. Consider reducing the all"

    - alert: ContainerComposeTooManyRestarts
      expr: count by (instance, name) (count_over_time(container_last_seen{name!="", container_label_restartcount!=""}[15m])) - 1 >= 5
      for: 5m
      labels:
        severity: critical
      annotations:
        summary: "Too many restarts ({{ $value }}) for container \"{{ $labels.name }}\""
```

**Deploy the dashboard**

There are many Grafana dashboards for cAdvisor; of them all I've chosen [this one](https://grafana.com/grafana/dashboards/15798-docker-monitoring/).

Once you've imported it and selected your Prometheus datasource, you can press "Share" to get the JSON and add it to your provisioned dashboards.

**Make it work with ZFS**

There are many issues about it ([1](https://github.com/google/cadvisor/issues/1579))

The solution seems to be to use `--device /dev/zfs:/dev/zfs`.
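
A sketch of the "Try it out" run command from above with that device added:

```bash
VERSION=v0.49.1
sudo docker run \
  --volume=/:/rootfs:ro \
  --volume=/var/run:/var/run:ro \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  --volume=/dev/disk/:/dev/disk:ro \
  --publish=8080:8080 \
  --detach=true \
  --name=cadvisor \
  --privileged \
  --device=/dev/kmsg \
  --device=/dev/zfs:/dev/zfs \
  gcr.io/cadvisor/cadvisor:$VERSION
```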

**References**
- [Source](https://github.com/google/cadvisor)

feat(changedetection): Introduce Changedetection

[Changedetection](https://changedetection.io/) is a free, open source web page change detection, website watcher, restock monitor and notification service.

Note: even though it's a nice web interface, if you have some basic python skills it may be better to run your script on a cronjob.

**Installation**
With Docker Compose, just clone the repository and:
- Copy the [default docker-compose](https://github.com/dgtlmoon/changedetection.io/blob/master/docker-compose.yml) and tweak it to your needs.

```bash
$ docker compose up -d
```

**References**
- [Home](https://changedetection.io/)
- [Docs](https://github.com/dgtlmoon/changedetection.io/wiki)
- [Source](https://github.com/dgtlmoon/changedetection.io)

feat(pytest#freezegun): Deprecate freezegun

[pytest-freezegun has been deprecated](https://github.com/ktosiek/pytest-freezegun/issues/19#issuecomment-1500919278) in favour of [`pytest-freezer`](https://github.com/pytest-dev/pytest-freezer)
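
Since it is a drop-in replacement, migrating is mostly swapping the dependency (a sketch):

```bash
# pytest-freezer is advertised as a drop-in replacement, so the freezer
# fixture keeps working after swapping the package
pip uninstall -y pytest-freezegun
pip install pytest-freezer
```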

feat(csvlens): Introduce csvlens

`csvlens` is a command line CSV file viewer. It is like less but made for CSV.

**Usage**

Run `csvlens` by providing the CSV filename:

```
csvlens <filename>
```

Pipe CSV data directly to `csvlens`:

```
<your commands producing some csv data> | csvlens
```

**Key bindings**

Key | Action
--- | ---
`hjkl` (or `← ↓ ↑ →`) | Scroll one row or column in the given direction
`Ctrl + f` (or `Page Down`) | Scroll one window down
`Ctrl + b` (or `Page Up`) | Scroll one window up
`Ctrl + d` (or `d`) | Scroll half a window down
`Ctrl + u` (or `u`) | Scroll half a window up
`Ctrl + h` | Scroll one window left
`Ctrl + l` | Scroll one window right
`Ctrl + ←` | Scroll left to first column
`Ctrl + →` | Scroll right to last column
`G` (or `End`) | Go to bottom
`g` (or `Home`) | Go to top
`<n>G` | Go to line `n`
`/<regex>` | Find content matching regex and highlight matches
`n` (in Find mode) | Jump to next result
`N` (in Find mode) | Jump to previous result
`&<regex>` | Filter rows using regex (show only matches)
`*<regex>` | Filter columns using regex (show only matches)
`TAB` | Toggle between row, column or cell selection modes
`>` | Increase selected column's width
`<` | Decrease selected column's width
`Shift + ↓` (or `Shift + j`) | Sort rows or toggle sort direction by the selected column
`#` (in Cell mode) | Find and highlight rows like the selected cell
`@` (in Cell mode) | Filter rows like the selected cell
`y` (in Cell Mode) | Copy the selected cell to clipboard
`Enter` (in Cell mode) | Print the selected cell to stdout and exit
`-S` | Toggle line wrapping
`-W` | Toggle line wrapping by words
`r` | Reset to default view (clear all filters and custom column widths)
`H` (or `?`) | Display help
`q` | Exit

**Installation**

Download the binary directly from the [releases](https://github.com/YS-L/csvlens/releases), or if you have cargo installed run:

```bash
cargo install csvlens
```
**References**
- [Source](https://github.com/YS-L/csvlens)

feat(deltachat): Introduce Delta Chat

Delta Chat is a decentralized and secure messenger app:

- Reliable instant messaging with multi-profile and multi-device support
- Sign up to secure fast chatmail servers or use classic e-mail servers
- Interactive web apps in chats for gaming and collaboration
- Audited end-to-end encryption safe against network and server attacks
- FOSS software, built on Internet Standards, avoiding xkcd927 :)

**[Installation](https://delta.chat/en/download)**

If you don't want to use snap, flatpak or nix, download the deb package under "Download options without automatic updates".

Install it with `sudo dpkg -i deltachat.deb`.

Be careful that by default it uses

**References**
- [Home](https://delta.chat/en/)
- [Source](https://github.com/deltachat/deltachat-desktop)
- [Docs](https://github.com/deltachat/deltachat-desktop/tree/main/docs)
- [Blog](https://delta.chat/en/blog)

feat(docker#Monitorization): Monitorization

You can [configure Docker to export prometheus metrics](https://docs.docker.com/engine/daemon/prometheus/), but they are not very useful.

**Using [cAdvisor](https://github.com/google/cadvisor)**
cAdvisor (Container Advisor) provides container users an understanding of the resource usage and performance characteristics of their running containers. It is a running daemon that collects, aggregates, processes, and exports information about running containers. Specifically, for each container it keeps resource isolation parameters, historical resource usage, histograms of complete historical resource usage and network statistics. This data is exported by container and machine-wide.

**References**
- [Source](https://github.com/google/cadvisor?tab=readme-ov-file)
- [Docs](https://github.com/google/cadvisor/tree/master/docs)

**Monitor continuously restarting dockers**
Sometimes containers are stuck in a never-ending loop of crash and restart. The official Docker metrics don't help here, and even though [in the past there was a `container_restart_count`](https://github.com/google/cadvisor/issues/1312) (with a pretty issue number btw) in cAdvisor, I've tried activating [all metrics](https://github.com/google/cadvisor/blob/master/docs/runtime_options.md#metrics) and it still doesn't show. I've opened [an issue](https://github.com/google/cadvisor/issues/3584) to see if I can activate it.

feat(furios): Introduce FuriOS

The people at [FuriLabs](https://furilabs.com/) have created a phone that runs on Debian and runs Android applications in a sandbox.

**References**
- [Home](https://furilabs.com/)
- [Source](https://github.com/FuriLabs)

feat(gancio#References): Add new list of gancio instances

- [List of gancio instances](http://demo.fedilist.com/instance?q=&ip=&software=gancio&registrations=&onion=)

feat(gitops): Introduce gitops

GitOps is a popular approach for deploying applications to Kubernetes
clusters because it provides several benefits. Some of the reasons why
we might want to implement GitOps in our Kubernetes deployment process include:

1. Git is a powerful and flexible version control system that can help
  us to track and manage changes to our infrastructure and application
  configuration. This can make it easier to roll back changes or compare
  different versions of the configuration, and can help us to ensure that
  our infrastructure and applications are always in the desired state.

2. GitOps provides a declarative approach to manage the infrastructure
  and applications. This means that we specify the desired state of our
  infrastructure and applications in configuration/definition files, and
  the GitOps tool ensures that the actual state of our infrastructure
  matches the desired state. This can help to prevent configuration drift
  and ensure that our infrastructure and applications are always in the
  desired state.

3. GitOps can automate the deployment process of our applications and
  infrastructure, which can help to reduce the time and effort required to
  roll out changes. This can improve the speed and reliability of our
  deployment process, and can help us to quickly and easily deliver changes
  to our applications and infrastructure.

4. GitOps can provide a central source of truth for our infrastructure
  and application configuration. This can help to ensure that everyone on
  the team is working with the same configuration, and can prevent conflicts
  and inconsistencies that can arise when multiple people are making changes
  to the configuration or infrastructure.

feat(grapheneos#Add call screening): Add call screening

If you're tired of getting spam calls even though you've signed up to a no-spam list such as the Robinson list, then try out ["yetanothercallblocker"](https://f-droid.org/en/packages/dummydomain.yetanothercallblocker/).

You can also enable blocking unknown numbers in the Phone settings, but [it only blocks calls with a hidden caller id, not the ones from numbers that are not in your contacts](https://www.reddit.com/r/GrapheneOS/comments/13yat8e/i_miss_call_screening/).
A friend is using "carrion" although he says it's not being very effective.

feat(hacktivist_collectives): Gather some collectives

**Germany**

- Chaos Computer Club: [here](https://fediverse.tv/w/g76dg9qTaG7XiB4R2EfovJ) is a documentary on its birth

**Galicia**

Some collectives from Galicia are:

- [Hackliza](https://hackliza.gal/)
- [GALPon](https://www.galpon.org/): Linux and free software in Vigo/Pontevedra
- [GPUL](https://gpul.org/): The same, in Coruña
- [Proxecto Trasno](https://trasno.gal/): Dedicated to translating software into Galician
- [La molinera](https://lamolinera.net/): They do 3D printing
- [A Industriosa](https://aindustriosa.org/)
- Enxeñeiros sen fronteiras: they have run projects recycling hardware to give it to people without resources
- [PonteLabs](https://pontelabs.org/)
- [Mancomun](https://mancomun.gal/a-nosa-rede/): A website that tries to list collectives, but they are all very official associations.

feat(hacktivist_gatherings): Gather some gatherings

**europe**
- [Chaos Communication Congress](https://events.ccc.de/en/): Best gathering ever, it's a must at least once in your life.
- Chaos Communication Camp
- [Italian hackmeeting](https://www.hackmeeting.org/)
- [Trans hackmeeting](https://trans.hackmeeting.org/)

**spanish state**
- [Spanish Hackmeeting](https://es.hackmeeting.org)
- [TransHackFeminist](https://zoiahorn.anarchaserver.org/thf2022/)

feat(imap_tools): Introduce imap tools python library

`imap-tools` is a high-level IMAP client library for Python, providing a simple and intuitive API for common email tasks like fetching messages, flagging emails as read/unread, labeling/moving/deleting emails, searching/filtering emails, and more.

Features:

- Basic message operations: fetch, uids, numbers
- Parsed email message attributes
- Query builder for search criteria
- Actions with emails: copy, delete, flag, move, append
- Actions with folders: list, set, get, create, exists, rename, subscribe, delete, status
- IDLE commands: start, poll, stop, wait
- Exceptions on failed IMAP operations
- No external dependencies, tested

**Installation**

```bash
pip install imap-tools
```

**Usage**

Both the [docs](https://github.com/ikvk/imap_tools) and the [examples](https://github.com/ikvk/imap_tools/tree/master/examples) are very informative on how to use the library.

**[Basic usage](https://github.com/ikvk/imap_tools/blob/master/examples/basic.py)**
```python
from imap_tools import MailBox, AND

"""
Get date, subject and body len of all emails from INBOX folder

1. MailBox()
    Create IMAP client, the socket is created here

2. mailbox.login()
    Login to mailbox account
    It supports context manager, so you do not need to call logout() in this example
    Select INBOX folder, cause login initial_folder arg = 'INBOX' by default (set folder may be disabled with None)

3. mailbox.fetch()
    First searches email uids by criteria in current folder, then fetch and yields MailMessage
    Criteria arg is 'ALL' by default
    Current folder is 'INBOX' (set on login), by default it is INBOX too.
    Fetch each message separately per N commands, cause bulk arg = False by default
    Mark each fetched email as seen, cause fetch mark_seen arg = True by default

4. print
    msg variable is MailMessage instance
    msg.date - email data, converted to datetime.date
    msg.subject - email subject, utf8 str
    msg.text - email plain text content, utf8 str
    msg.html - email html content, utf8 str
"""
with MailBox('imap.mail.com').login('test@mail.com', 'pwd') as mailbox:
    for msg in mailbox.fetch():
        print(msg.date, msg.subject, len(msg.text or msg.html))

mailbox = MailBox('imap.mail.com')
mailbox.login('test@mail.com', 'pwd', 'INBOX')  # or use mailbox.folder.set instead initial_folder arg
for msg in mailbox.fetch(AND(all=True)):
    print(msg.date, msg.subject, len(msg.text or msg.html))
mailbox.logout()
```

**[Action with emails](https://github.com/ikvk/imap_tools?tab=readme-ov-file#actions-with-emails)**

The actions' `uid_list` arg may take:

- a str with comma-separated uids
- a Sequence containing str uids

To get uids, use the mailbox methods: uids, fetch.

For actions with a large number of messages the IMAP command may be too long and cause an exception on the server side; use the `limit` argument of fetch in this case.

```python
with MailBox('imap.mail.com').login('test@mail.com', 'pwd', initial_folder='INBOX') as mailbox:

    # COPY messages with uid in 23,27 from current folder to folder1
    mailbox.copy('23,27', 'folder1')

    # MOVE all messages from current folder to INBOX/folder2
    mailbox.move(mailbox.uids(), 'INBOX/folder2')

    # DELETE messages with 'cat' word in its html from current folder
    mailbox.delete([msg.uid for msg in mailbox.fetch() if 'cat' in msg.html])

    # FLAG unseen messages in current folder as \Seen, \Flagged and TAG1
    flags = (imap_tools.MailMessageFlags.SEEN, imap_tools.MailMessageFlags.FLAGGED, 'TAG1')
    mailbox.flag(mailbox.uids(AND(seen=False)), flags, True)

    # APPEND: add message to mailbox directly, to INBOX folder with \Seen flag and now date
    with open('/tmp/message.eml', 'rb') as f:
        msg = imap_tools.MailMessage.from_bytes(f.read())  # *or use bytes instead MailMessage
    mailbox.append(msg, 'INBOX', dt=None, flag_set=[imap_tools.MailMessageFlags.SEEN])
```

**[Run search queries](https://github.com/ikvk/imap_tools/blob/master/examples/search.py)**

You can get more information on the search criteria [here](https://github.com/ikvk/imap_tools?tab=readme-ov-file#search-criteria)
```python
"""
Query builder examples.

NOTES:

    NOT ((FROM='11' OR TO="22" OR TEXT="33") AND CC="44" AND BCC="55")
    NOT (((OR OR FROM "11" TO "22" TEXT "33") CC "44" BCC "55"))
    NOT(AND(OR(from_='11', to='22', text='33'), cc='44', bcc='55'))

1. OR(1=11, 2=22, 3=33) ->
    "(OR OR FROM "11" TO "22" TEXT "33")"
2. AND("(OR OR FROM "11" TO "22" TEXT "33")", cc='44', bcc='55') ->
    "AND(OR(from_='11', to='22', text='33'), cc='44', bcc='55')"
3. NOT("AND(OR(from_='11', to='22', text='33'), cc='44', bcc='55')") ->
    "NOT (((OR OR FROM "1" TO "22" TEXT "33") CC "44" BCC "55"))"
"""

import datetime as dt
from imap_tools import AND, OR, NOT, A, H, U

q1 = OR(date=[dt.date(2019, 10, 1), dt.date(2019, 10, 10), dt.date(2019, 10, 15)])

q2 = NOT(OR(date=[dt.date(2019, 10, 1), dt.date(2019, 10, 10), dt.date(2019, 10, 15)]))

q3 = A(subject='hello', date_gte=dt.date(2019, 10, 10))

q4 = OR(from_=["@spam.ru", "@tricky-spam.ru"])

q5 = AND(seen=True, flagged=False)

q6 = OR(AND(text='tag15', subject='tag15'), AND(text='tag10', subject='tag10'))

q7 = OR(OR(text='tag15', subject='tag15'), OR(text='tag10', subject='tag10'))

q8 = A(header=[H('IsSpam', '++'), H('CheckAntivirus', '-')])

q9 = A(uid=U('1034', '*'))

q10 = A(OR(from_='from@ya.ru', text='"the text"'), NOT(OR(A(answered=False), A(new=True))), to='to@ya.ru')
```

**[Save attachments](https://github.com/ikvk/imap_tools/blob/master/examples/email_attachments_to_files.py)**

```python
from imap_tools import MailBox

with MailBox('imap.my.ru').login('acc', 'pwd', 'INBOX') as mailbox:
    for msg in mailbox.fetch():
        for att in msg.attachments:
            print(att.filename, att.content_type)
            with open('C:/1/{}'.format(att.filename), 'wb') as f:
                f.write(att.payload)
```

**[Action with directories](https://github.com/ikvk/imap_tools?tab=readme-ov-file#actions-with-folders)**

```python
with MailBox('imap.mail.com').login('test@mail.com', 'pwd') as mailbox:

    # LIST: get all subfolders of the specified folder (root by default)
    for f in mailbox.folder.list('INBOX'):
        print(f)  # FolderInfo(name='INBOX|cats', delim='|', flags=('\\Unmarked', '\\HasChildren'))

    # SET: select folder for work
    mailbox.folder.set('INBOX')

    # GET: get selected folder
    current_folder = mailbox.folder.get()

    # CREATE: create new folder
    mailbox.folder.create('INBOX|folder1')

    # EXISTS: check is folder exists (shortcut for list)
    is_exists = mailbox.folder.exists('INBOX|folder1')

    # RENAME: set new name to folder
    mailbox.folder.rename('folder3', 'folder4')

    # SUBSCRIBE: subscribe/unsubscribe to folder
    mailbox.folder.subscribe('INBOX|папка два', True)

    # DELETE: delete folder
    mailbox.folder.delete('folder4')

    # STATUS: get folder status info
    stat = mailbox.folder.status('some_folder')
    print(stat)  # {'MESSAGES': 41, 'RECENT': 0, 'UIDNEXT': 11996, 'UIDVALIDITY': 1, 'UNSEEN': 5}

```
**[Fetch by pages](https://github.com/ikvk/imap_tools/blob/master/examples/fetch_by_pages.py)**

```python
from imap_tools import MailBox

with MailBox('imap.mail.com').login('test@mail.com', 'pwd', 'INBOX') as mailbox:
    criteria = 'ALL'
    found_nums = mailbox.numbers(criteria)
    page_len = 3
    pages = int(len(found_nums) // page_len) + 1 if len(found_nums) % page_len else int(len(found_nums) // page_len)
    for page in range(pages):
        print('page {}'.format(page))
        page_limit = slice(page * page_len, page * page_len + page_len)
        print(page_limit)
        for msg in mailbox.fetch(criteria, bulk=True, limit=page_limit):
            print(' ', msg.date, msg.uid, msg.subject)
```
**References**
- [Source](https://github.com/ikvk/imap_tools)
- [Docs](https://github.com/ikvk/imap_tools)
- [Examples](https://github.com/ikvk/imap_tools/tree/master/examples)

fix(kubernetes_debugging#Network debugging): Network debugging with kubeshark

NOTE: maybe [kubeshark](https://github.com/kubeshark/kubeshark) is a better solution

feat(wireguard#NixOS): Install in NixOS

Follow the guides of the next references:

- https://nixos.wiki/wiki/WireGuard
- https://wiki.archlinux.org/title/WireGuard
- https://alberand.com/nixos-wireguard-vpn.html
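
Whatever guide you follow, you will need a WireGuard key pair for the peer; a sketch of generating one with the standard `wg` tool:

```bash
# Generate a private key and derive its public key, readable only by the current user
umask 077
wg genkey | tee privatekey | wg pubkey > publickey
```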

feat(zfs#Clean the space of a ZFS pool): Clean the space of a ZFS pool

It doesn't matter how big your disks are, you'll eventually reach their limit before you can buy new ones. It's then time to clean up some space.

**Manually remove data**

*See which datasets are using more space for their data*

To sort the datasets by the amount of space they use for their own data, use `zfs list -o space -s usedds`.

*Clean it up*

Then you can go dataset by dataset cleaning up with `ncdu`, as shown below.
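
For example, assuming the pool is mounted under `/tank` and `heavy-dataset` is the dataset you want to inspect:

```bash
# Datasets sorted by the space taken up by their own data
zfs list -o space -s usedds

# Interactively explore the heaviest dataset to find what to delete
ncdu /tank/heavy-dataset
```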

**See which datasets are using more space for their backups**

To sort the datasets by the amount of space they use for their snapshots (backups), use `zfs list -o space -s usedsnap`.

**See the differences between a snapshot and the contents of the dataset**

To compare the contents of a ZFS snapshot with the current dataset and identify files or directories that have been removed, you can use the `zfs diff` command. Here's how you can do it:

- First, find the snapshot name using the following command:

```bash
zfs list -t snapshot dataset_name
```

- Then, compare the contents of the snapshot with the current dataset (replace `<snapshot_name>` with your snapshot name):

```bash
zfs diff <dataset>@<snapshot_name> <dataset>
```

For example:

```bash
zfs diff tank/mydataset@snap1
```

The output will show files and directories that have been removed (`-`), modified (`M`), or renamed (`R`). Here's an example:

```
-     4 /path/to/removedfile.txt
```

If you want to see only the deleted files, you can pipe the output through `grep`:

```bash
zfs diff <dataset>@<snapshot_name> | grep '^-'
```

This will help you identify which files or directories were in the snapshot but are no longer in the current dataset.

feat(logql#Make a regexp case insensitive): Make a regexp case insensitive

To make a regex filter case insensitive, you can use the `(?i)` flag within the regex pattern.

```
(?i)(error|warning)
```

This pattern will match "error" or "warning" in any case (e.g., "Error", "WARNING", etc.).

When using it in a Loki query, it would look like this:

```plaintext
{job="your-job-name"} |=~ "(?i)(error|warning)"
```

This query will filter logs from `your-job-name` to show only those that contain "error" or "warning" in a case-insensitive manner.

fix(mediatracker#Add missing books): Add required steps to add missing books

- Register an account in openlibrary.org
- [Add the book](https://openlibrary.org/books/add)
- Then add it to mediatracker

feat(memoria_historica#Movimiento obrero): Recommend a podcast about the workers' movement

- [La Olimpiada Popular, rebeldía obrera contra los fascismos](https://www.rtve.es/play/audios/documentos-rne/olimpiada-popular-rebeldia-obrera-contra-fascismos-19-07-24/16192458/)

feat(openwebui): Introduce Open WebUI

Open WebUI is an extensible, self-hosted interface for AI that adapts to your workflow, all while operating entirely offline.

Pros:

  - The web UI works both with llama and the ChatGPT API
  - Made with Python
  - They recommend watchtower

**[Installation](https://docs.openwebui.com/getting-started/)**
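
A minimal sketch of running it with Docker; the image tag, port mapping and data volume are the ones I'd expect from the upstream docs, so double check them there:

```bash
docker run -d \
  --name open-webui \
  -p 3000:8080 \
  -v open-webui:/app/backend/data \
  --restart always \
  ghcr.io/open-webui/open-webui:main
```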

**Troubleshooting**

**OAuth returns errors when logging in**

What worked for me was to repeat the login process until it went through.

But I'm not the only one having this issue [1](https://github.com/open-webui/open-webui/discussions/4940), [2](https://github.com/open-webui/open-webui/discussions/4685)

**References**
- [Home](https://openwebui.com/)
- [Docs](https://docs.openwebui.com/)
- [Source](https://github.com/open-webui/open-webui)

feat(palestine): Add a page aggregating news and mobilisations for Palestine

- [Actua por Palestina](https://porpalestina.org/)

feat(parkour): Add funny parkour parody

- [The office parkour parody](https://www.youtube.com/watch?v=0Kvw2BPKjz0)

feat(pentesting#Tools): Add vulnhuntr

- [vulnhuntr](https://github.com/protectai/vulnhuntr): Vulnhuntr leverages the power of LLMs to automatically create and analyze entire code call chains starting from remote user input and ending at server output for detection of complex, multi-step, security-bypassing vulnerabilities that go far beyond what traditional static code analysis tools are capable of performing.

  It creates the 0days directly using LLMs

feat(playwright): Introduce playwright

[Playwright](https://playwright.dev/python/) is a modern automation library developed by Microsoft (buuuuh!) for testing web applications. It provides a powerful API for controlling web browsers, allowing developers to perform end-to-end testing, automate repetitive tasks, and gather insights into web applications. Playwright supports multiple browsers and platforms, making it a versatile tool for ensuring the quality and performance of web applications.

**Key features**

*Cross-browser testing*

Playwright supports testing across major browsers including:

- Google Chrome and Chromium-based browsers
- Mozilla Firefox
- Microsoft Edge
- WebKit (the engine behind Safari)

This cross-browser support ensures that your web application works consistently across different environments.
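
As a minimal sketch (using the sync API shown later in this entry), the same steps can be pointed at each engine:

```python
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    # The same script runs against Chromium, Firefox and WebKit
    for browser_type in (p.chromium, p.firefox, p.webkit):
        browser = browser_type.launch(headless=True)
        page = browser.new_page()
        page.goto('https://example.com')
        print(browser_type.name, page.title())
        browser.close()
```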

*Headless mode*

Playwright allows you to run browsers in headless mode, which means the browser runs without a graphical user interface. This is particularly useful for continuous integration pipelines where you need to run tests on a server without a display.

*Auto-waiting*

Playwright has built-in auto-waiting capabilities that ensure elements are ready before interacting with them. This helps in reducing flaky tests caused by timing issues and improves test reliability.

*Network interception*

Playwright provides the ability to intercept and modify network requests. This feature is valuable for testing how your application behaves with different network conditions or simulating various server responses.
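
A minimal sketch with the sync API (the URL pattern and the choice to block images are only illustrative):

```python
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()

    # Abort all image requests and let everything else through
    page.route('**/*.{png,jpg,jpeg}', lambda route: route.abort())

    page.goto('https://example.com')
    browser.close()
```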

*Powerful selectors*

Playwright offers a rich set of selectors to interact with web elements. You can use CSS selectors, XPath expressions, and even text content to locate elements. This flexibility helps in accurately targeting elements for interaction.
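
For example, all of these locate the same element on example.com (the selectors are only illustrative):

```python
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    page = p.chromium.launch().new_page()
    page.goto('https://example.com')

    # CSS selector
    page.locator('h1').wait_for()
    # Text content
    page.get_by_text('Example Domain').wait_for()
    # Role-based locator
    page.get_by_role('heading', name='Example Domain').wait_for()
```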

*Multiple language support*

Playwright supports multiple programming languages including:

- JavaScript/TypeScript
- Python
- C#
- Java

This allows teams to write tests in their preferred programming language.

**Installation**

To get started with Playwright, you'll need to install it via pip. Here's how to install Playwright for Python:

```bash
pip install playwright
playwright install chromium
```

The last line installs the browsers inside `~/.cache/ms-playwright/`.

**Usage**

**Basic example**

Here's a simple example of using Playwright with Python to automate a browser:

```python
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    # Launch a new browser instance
    browser = p.chromium.launch()

    # Create a new browser context and page
    context = browser.new_context()
    page = context.new_page()

    # Navigate to a webpage
    page.goto('https://example.com')

    # Take a screenshot
    page.screenshot(path='screenshot.png')

    # Close the browser
    browser.close()
```

**[A testing example](https://playwright.dev/python/docs/intro#add-example-test)**

```python
import re
from playwright.sync_api import Page, expect

def test_has_title(page: Page):
    page.goto("https://playwright.dev/")

    # Expect a title "to contain" a substring.
    expect(page).to_have_title(re.compile("Playwright"))

def test_get_started_link(page: Page):
    page.goto("https://playwright.dev/")

    # Click the get started link.
    page.get_by_role("link", name="Get started").click()

    # Expects page to have a heading with the name of Installation.
    expect(page.get_by_role("heading", name="Installation")).to_be_visible()
```

**References**

- [Home](https://playwright.dev/python/)
- [Docs](https://playwright.dev/python/docs/intro)
- [Source](https://github.com/microsoft/playwright-python)
- [Video tutorials](https://playwright.dev/python/community/learn-videos)

feat(privacy_threat_modeling): Introduce Linddun privacy framework

- [Linddun privacy framework](https://linddun.org/)

feat(python_imap): Introduce python libraries to interact with IMAP

In Python, there are several libraries available for interacting with IMAP servers to fetch and manipulate emails. Some popular ones include:

**imaplib**

This is the built-in IMAP client library in Python's standard library (`imaplib`). It provides basic functionality for connecting to an IMAP server, listing mailboxes, searching messages, fetching message headers, and more.

The [documentation](https://docs.python.org/3/library/imaplib.html) is awful to read. I'd use it only if you can't or don't want to install other, friendlier libraries.

*Usage*

```python
import imaplib

# Connect over SSL and authenticate (server name and credentials are placeholders)
mail = imaplib.IMAP4_SSL('imap.example.com')
mail.login('username', 'password')
mail.select('INBOX')
```

*References*

- [Docs](https://docs.python.org/3/library/imaplib.html)
- [Usage article](https://medium.com/@juanrosario38/how-to-use-pythons-imaplib-to-check-for-new-emails-continuously-b0c6780d796d)

**imapclient**

This is a higher-level library built on top of imaplib. It provides a more user-friendly API, reducing the complexity of interacting with IMAP servers.

Its docs are better than the standard library's, but they're old-fashioned and not very extensive. It has 500 stars on GitHub, the last commit was 3 months ago, and the last release was in December 2023 (as of October 2024).
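
*Usage*

A minimal sketch with `IMAPClient` (the server name and credentials are placeholders):

```python
from imapclient import IMAPClient

with IMAPClient('imap.example.com', ssl=True) as client:
    client.login('username', 'password')
    client.select_folder('INBOX')

    # Search for unread messages and print their subjects
    messages = client.search(['UNSEEN'])
    for uid, data in client.fetch(messages, ['ENVELOPE']).items():
        print(uid, data[b'ENVELOPE'].subject)
```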

*References*

- [Source](https://github.com/mjs/imapclient/)
- [Docs](https://imapclient.readthedocs.io/en/3.0.0/)

**[`imap_tools`](imap_tools.md)**

`imap-tools` is a high-level IMAP client library for Python, providing a simple and intuitive API for common email tasks like fetching messages, flagging emails as read/unread, labeling/moving/deleting emails, searching/filtering emails, and more.

Its interface looks the most pleasant and has the most powerful features: the last commit was 3 weeks ago, it has 700 stars, the last release was in August 2024, and it has type hints.

*Usage*

```python
from imap_tools import MailBox

# Connects over SSL by default, logs in and selects the INBOX
with MailBox('imap.example.com').login('username', 'password') as mailbox:
    # Iterate over the messages in the mailbox
    for message in mailbox.fetch():
        print(message.subject, message.from_)
```

*References*
- [Source](https://github.com/ikvk/imap_tools)
- [Docs](https://github.com/ikvk/imap_tools)
- [Examples](https://github.com/ikvk/imap_tools/tree/master/examples)

**pyzmail**

`pyzmail` is a library for reading and parsing mail messages in Python. It doesn't connect to servers itself, but it works well on raw messages fetched over POP3 or IMAP.

It has 60 stars on GitHub and the last commit was 9 years ago, so it's a dead project.

*Usage*
```python
import imaplib
import pyzmail

# pyzmail only parses messages, so fetch the raw email with imaplib first
mail = imaplib.IMAP4_SSL('imap.example.com')
mail.login('username', 'password')
mail.select('INBOX')
_, data = mail.fetch('1', '(RFC822)')
message = pyzmail.PyzMessage.factory(data[0][1])
print(message.get_subject())
```

*References*
- [Home](https://www.magiksys.net/pyzmail/)
- [Source](https://github.com/aspineux/pyzmail)

**Conclusion**

If you don't want to install any additional library, go with `imaplib`; otherwise use [`imap_tools`](imap_tools.md).

feat(python_logging#Configure the logging module to use logfmt): Configure the logging module to use logfmt

To configure the Python `logging` module to use `logfmt` for logging output, you can use a custom logging formatter. The `logfmt` format is a structured logging format that uses key-value pairs, making it easier to parse logs. Here’s how you can set up logging with `logfmt` format:

```python
import logging

class LogfmtFormatter(logging.Formatter):
    """Custom formatter to output logs in logfmt style."""

    def format(self, record: logging.LogRecord) -> str:
        log_message = (
            f"level={record.levelname.lower()} "
            f"logger={record.name} "
            f'msg="{record.getMessage()}"'
        )
        return log_message

def setup_logging() -> None:
    """Configure logging to use logfmt format."""
    # Create a console handler
    console_handler = logging.StreamHandler()

    # Create a LogfmtFormatter instance
    logfmt_formatter = LogfmtFormatter()

    # Set the formatter for the handler
    console_handler.setFormatter(logfmt_formatter)

    # Get the root logger and set the level
    logger = logging.getLogger()
    logger.setLevel(logging.INFO)
    logger.addHandler(console_handler)

if __name__ == "__main__":
    setup_logging()

    # Example usage
    logging.info("This is an info message")
    logging.warning("This is a warning message")
    logging.error("This is an error message")
```
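
Running the example prints lines like these to the console (the root logger is named `root`):

```
level=info logger=root msg="This is an info message"
level=warning logger=root msg="This is a warning message"
level=error logger=root msg="This is an error message"
```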

feat(renfe): Monitor Renfe ticket releases

Renfe sometimes takes a long time to release tickets, and it's a pain to keep checking whether they're out yet, so I've automated it.

**Installation**

If you want to use it you'll have to tweak at least the following lines:

- Where the email addresses are defined (`@example.org`)
- The travel dates: look for the string `1727992800000`; you can generate your own with a command like `echo $(date -d "2024-10-04" +%s)000`
- The Apprise configuration (`mailtos`)
- The text to type into the origin (`#origin`) and destination (`#destination`) fields
- The month you want to travel in (`octubre2024`)

At some point I may feel like making it a bit more usable.

```python
import time
import logging
import traceback
from typing import List
import apprise
from playwright.sync_api import sync_playwright

class LogfmtFormatter(logging.Formatter):
    """Custom formatter to output logs in logfmt style."""

    def format(self, record: logging.LogRecord) -> str:
        log_message = (
            f"level={record.levelname.lower()} "
            f"logger={record.name} "
            f'msg="{record.getMessage()}"'
        )
        return log_message

def setup_logging() -> None:
    """Configure logging to use logfmt format."""
    # Create a console handler
    console_handler = logging.StreamHandler()

    # Create a LogfmtFormatter instance
    logfmt_formatter = LogfmtFormatter()

    # Set the formatter for the handler
    console_handler.setFormatter(logfmt_formatter)

    # Get the root logger and set the level
    logger = logging.getLogger(__name__)
    logger.setLevel(logging.INFO)
    logger.addHandler(console_handler)

def send_email(
    title: str, body: str, recipients: List[str] = ["admin@example.org"]
) -> None:
    """
    Sends an email notification using Apprise if the specified text is not found.
    """
    apobj = apprise.Apprise()
    apobj.add(
        # Replace these placeholders with your own SMTP configuration
        "mailtos://{user}:{password}@{domain}:587?smtp={smtp_server}&to={','.join(recipients)}"
    )
    apobj.notify(
        body=body,
        title=title,
    )
    log.info("Email notification sent")

def check_if_trenes() -> None:
    """
    Main function to automate browser interactions and check for specific text.
    """
    log.info("Arrancando el navegador")
    pw = sync_playwright().start()
    chrome = pw.chromium.launch(headless=True)
    context = chrome.new_context(viewport={"width": 1920, "height": 1080})
    page = context.new_page()

    log.info("Navigating to https://www.renfe.com/es/es")
    page.goto("https://www.renfe.com/es/es")
    page.click("#onetrust-reject-all-handler")
    page.click("#origin")
    page.fill("#origin", "Almudena")
    page.click("#awesomplete_list_1_item_0")

    page.click("#destination")
    page.fill("#destination", "Vigo")
    page.click("#awesomplete_list_2_item_0")
    page.evaluate("document.getElementById('first-input').click()")

    while True:
        months = page.locator(
            "div.lightpick__month-title span.rf-daterange-alternative__month-label"
        ).all_text_contents()
        if months[0] == "octubre2024":
            break

        page.click("button.lightpick__next-action")

    # To get other dates use echo $(date -d "2024-10-04" +%s)000
    page.locator('div.lightpick__day[data-time="1727992800000"]').click()
    page.locator('div.lightpick__day[data-time="1728165600000"]').click()
    page.click("button.lightpick__apply-action-sub")
    page.evaluate("window.scrollTo(0, 0);")
    page.locator('button[title="Buscar billete"]').click()
    page.locator("div#trayectoiSinTren p").wait_for(state="visible")

    time.sleep(1)
    no_hay_trenes = page.locator(
        "div", has_text="No hay trenes para los criterios seleccionados"
    ).all_text_contents()

    if len(no_hay_trenes) != 5:
        send_email(
            title="Puede que haya trenes para vigo",
            body="Corred insensatos!",
            recipients=["user1@example.org", "user2@example.org"],
        )
        log.warning("Puede que haya trenes")
    else:
        log.info("Sigue sin haber trenes")

def main():
    setup_logging()
    global log
    log = logging.getLogger(__name__)
    try:
        check_if_trenes()
    except Exception as error:
        error_message = "".join(
            traceback.format_exception(None, error, error.__traceback__)
        )
        send_email(title="[ERROR] Corriendo el script de renfe", body=error_message)
        raise error

if __name__ == "__main__":
    main()
```

**Cron**

Create a virtualenv and install the dependencies:

```bash
cd renfe
virtualenv .env
source .env/bin/activate
pip install apprise playwright
```

Install the browsers:

```bash
playwright install chromium
```

Create the cron script (`renfe.sh`):

```bash

source /home/lyz/renfe/.env/bin/activate

systemd-cat -t renfe python3 /home/lyz/renfe/renfe.py

deactivate
```

And edit the crontab:

```cron
13 */6 * * * /bin/bash /home/lyz/renfe/renfe.sh
```

This will run it every 6 hours.

**Monitoring**

To make sure everything keeps working you can use the following [loki](loki.md) alerts:

```yaml
groups:
  - name: cronjobs
    rules:
      - alert: RenfeCronDidntRun
        expr: |
          (count_over_time({job="systemd-journal", syslog_identifier="renfe"} |= `Sigue sin haber trenes` [24h]) or on() vector(0)) == 0
        for: 0m
        labels:
          severity: warning
        annotations:
          summary: "El checkeo de los trenes de renfe no ha terminado en las últimas 24h en {{ $labels.hostname}}"
      - alert: RenfeCronError
        expr: |
          count(rate({job="systemd-journal", syslog_identifier="renfe"} | logfmt | level != `info` [5m])) or vector(0)
        for: 0m
        labels:
          severity: warning
        annotations:
          summary: "Se han detectado errores en los logs del script {{ $labels.hostname}}"

```

feat(roadmap_adjustment#Area review): Area review

It may be useful to ask the following questions of your own life. It doesn't matter if answers aren't immediately forthcoming; the point is to "live the questions". Even asking them with any sincerity is already a great step.

**What does your desire tell you about the area?**

Stop and really ask your g…