
fix(tags): Fix the tag deletion #5298

Merged 1 commit into louislam:master on Nov 2, 2024

Conversation

Contributor

@Ionys320 Ionys320 commented Nov 1, 2024

⚠️⚠️⚠️ Since we do not accept all types of pull requests and do not want to waste your time, please be sure that you have read the pull request rules:
https://github.com/louislam/uptime-kuma/blob/master/CONTRIBUTING.md#can-i-create-a-pull-request-for-uptime-kuma

Tick the checkbox if you understand [x]:

  • I have read and understand the pull request rules.

Description

Fixes #5296
Fixes #5277

  • Add monitor_tag.value attribute in getMonitorTag SELECT
  • Add monitor_id and value for each tag in preparePreloadData

Because those two attributes were missing, the DELETE monitor_tag WHERE [...] query wasn't matching any rows, since its parameters were empty (null/undefined).
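The failure mode can be reproduced with a minimal sketch. This is illustration only, not the actual uptime-kuma code: the table shape and WHERE clause are assumptions based on the description above (`tag_id` is a guessed column name), using Python's sqlite3 in place of the real database layer. The key point is that in SQL, `column = NULL` is never true, so a parameterized DELETE whose bound values are None/NULL matches zero rows.

```python
import sqlite3

# Hypothetical, simplified schema for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE monitor_tag (monitor_id INTEGER, tag_id INTEGER, value TEXT)")
conn.execute("INSERT INTO monitor_tag VALUES (1, 10, 'prod')")

# Before the fix: monitor_id and value were missing from the preloaded
# data, so the DELETE ran with NULL parameters and removed nothing,
# because "monitor_id = NULL" and "value = NULL" never evaluate to true.
deleted_before_fix = conn.execute(
    "DELETE FROM monitor_tag WHERE monitor_id = ? AND tag_id = ? AND value = ?",
    (None, 10, None),
).rowcount

# After the fix: the SELECT also returns monitor_tag.value, and the
# preload data carries monitor_id, so the parameters are populated
# and the row is actually removed.
deleted_after_fix = conn.execute(
    "DELETE FROM monitor_tag WHERE monitor_id = ? AND tag_id = ? AND value = ?",
    (1, 10, "prod"),
).rowcount

print(deleted_before_fix, deleted_after_fix)  # 0 1
```

This is also why the symptom was silent: a DELETE that matches no rows is not an error, so the tag simply reappeared on reload.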

Type of change

Please delete any options that are not relevant.

  • Bug fix (non-breaking change which fixes an issue)

Checklist

  • My code follows the style guidelines of this project
  • I ran ESLint and other linters for modified files
  • I have performed a self-review of my own code and tested it
  • I have commented my code, particularly in hard-to-understand areas (including JSDoc for methods) (Note: no comments added because nothing needs them)
  • My changes generate no new warnings
  • My code needed automated testing. I have added it (this is an optional task)

- Add `monitor_tag.value` attribute in `getMonitorTag` SELECT
- Add `monitor_id` and `value` for each tag in `preparePreloadData`
@Ionys320 Ionys320 mentioned this pull request Nov 1, 2024
2 tasks
@louislam louislam added this to the 2.0.0-beta.1 milestone Nov 2, 2024
@homelab-alpha
Contributor

Confirmed: This bug fix works as expected.

After the recent bug fix #5298, I tested the functionality of the tag system. I can now remove tags in the usual way without having to manually delete them in the database. Everything is working properly with the tag system.

@Ionys320 Thank you for your effort and the quick resolution!

How I came to this conclusion:

I manually added the updated monitor.js from @Ionys320 to the Docker container using volumes:

/docker/uptime-kuma-beta/monitor.js:/app/server/model/monitor.js

@CommanderStorm
Collaborator

@homelab-alpha we have a much simpler solution for testing PRs ^^
https://github.com/louislam/uptime-kuma/wiki/Test-Pull-Requests

=> for this PR, it can be tested via

docker run --rm -it -p 3000:3000 -p 3001:3001 --pull always -e 'UPTIME_KUMA_GH_REPO=Ionys320:master' louislam/uptime-kuma:pr-test2

Collaborator

@CommanderStorm CommanderStorm left a comment


Thanks for the bugfix ❤️
Works => merging into 2.0.0-beta.1


@CommanderStorm CommanderStorm merged commit 595b35f into louislam:master Nov 2, 2024
18 checks passed
@Ionys320
Contributor Author

Ionys320 commented Nov 2, 2024

Awesome, thanks for the quick merge! @CommanderStorm

@CommanderStorm CommanderStorm added the area:dashboard The main dashboard page where monitors' status are shown label Nov 2, 2024
@homelab-alpha
Contributor

@homelab-alpha we have a much simpler solution for testing PRs ^^ https://github.com/louislam/uptime-kuma/wiki/Test-Pull-Requests

=> for this PR, it can be tested via

docker run --rm -it -p 3000:3000 -p 3001:3001 --pull always -e 'UPTIME_KUMA_GH_REPO=Ionys320:master' louislam/uptime-kuma:pr-test2

@CommanderStorm 😅 I didn't know that! I learned something new today and immediately created a script from it.

I have read the Test Pull Requests wiki, but I have a question about it. It seems that the software is no longer up to date (please see the logs) and that persistent storage is not functioning. While the louislam/uptime-kuma:pr-test2 container was shutting down, I received a message stating that error.log could not be created. Other than that, it works fine, but it would be helpful to update the louislam/uptime-kuma:pr-test2 container.

Here is the uptime_kuma_pr_test_v2.sh script:

#!/bin/bash

# Filename: uptime_kuma_pr_test_v2
# Author: GJS (homelab-alpha)
# Date: 2024-11-03T12:27:31+01:00
# Version: 1.0.0

# Description:
# This script facilitates the testing of pull requests for Uptime-Kuma
# version 2.x.x within a Docker container environment. It prompts the user
# for a GitHub repository link that points to the pull request to be tested.
# The script launches a Docker container with the specified Uptime-Kuma image,
# allowing developers to verify changes and ensure compatibility before merging.
# The testing process operates on designated ports for both the application and API.

# Aliases:
# To create convenient aliases for this script, add the following lines
# to your shell configuration file (e.g., .bashrc or .bash_aliases):
# alias uptime-kuma-pr-test="$HOME/uptime_kuma_pr_test_v2.sh"
# or, if the script is stored under .bash_aliases:
# alias uptime-kuma-pr-test="$HOME/.bash_aliases/uptime_kuma_pr_test_v2.sh"

# Usage:
# Execute this script in the terminal with the command:
# ./uptime_kuma_pr_test_v2

# Define the default ports for the Uptime-Kuma application and API.
port_app=3000 # Port for the main application
port_api=3001 # Port for the API

# Retrieve current user information.
# - username: the name of the current user.
# - puid: the user ID (PUID) of the current user.
# - pgid: the group ID (PGID) of the current user.
username=$(whoami)
puid=$(id -u)
pgid=$(id -g)

# Function to display the help message.
display_help() {
  clear
  echo "======================================================================"
  echo "         Welcome to the Uptime-Kuma Pull Request Testing Tool         "
  echo "======================================================================"
  echo
  echo "Usage:"
  echo "  1. Select an option from the main menu."
  echo "  2. Follow the on-screen prompts to proceed."
  echo
  echo "Note:"
  echo "  Option 2 runs with limited write permissions."
  echo "  This limitation can cause the container to exit unexpectedly when"
  echo "  using the louislam/uptime-kuma:pr-test2 image."
  echo
  echo "Options:"
  echo
  echo "  1. Uptime-Kuma Pull Request version: 2.x.x"
  echo "     Run a container with the louislam/uptime-kuma:pr-test2 image."
  echo
  echo "  2. Uptime-Kuma Pull Request version: 2.x.x with Persistent Storage"
  echo "     Run a container with the louislam/uptime-kuma:pr-test2 image."
  echo
  echo "Additional Options:"
  echo "  h or --help      : Display this help message."
  echo "  i or --info      : Show current user's information (PUID and PGID),"
  echo "                     as well as Docker and Docker Compose versions."
  echo "  q or --quit      : Quit the script."
  echo
  echo "For more information, visit:"
  echo "  https://github.com/louislam/uptime-kuma/wiki/Test-Pull-Requests"
  echo
  echo "======================================================================"
}

# Retrieve system, user, and Docker information.
display_system_info() {
  clear
  echo "======================================================================"
  echo "         Welcome to the Uptime-Kuma Pull Request Testing Tool         "
  echo "======================================================================"
  echo

  # Extract OS name from /etc/os-release.
  if [ -f /etc/os-release ]; then
    os_name=$(grep '^PRETTY_NAME=' /etc/os-release | cut -d= -f2 | tr -d '"')
  else
    # Default if the OS name cannot be determined
    os_name="Unknown OS"
  fi

  # Retrieve kernel version.
  kernel_info=$(uname -r)

  # Determine filesystem type of the root directory.
  filesystem=$(findmnt -n -o FSTYPE /)

  # Check for Docker and Docker Compose, displaying versions if installed.
  if command -v docker &>/dev/null; then
    # Get Docker version
    docker_version=$(docker --version)
  else
    # Message if Docker is not found
    docker_version="Docker is not installed."
  fi

  if command -v docker-compose &>/dev/null; then
    # Get Docker Compose version
    docker_compose_version=$(docker-compose --version)
  else
    # Message if Docker Compose is not found
    docker_compose_version="Docker Compose is not installed."
  fi

  # Display collected information.
  echo -e "Operating System Information:"
  echo -e "OS: $os_name"
  echo -e "Kernel Version: $kernel_info"
  echo -e "Filesystem: $filesystem"
  echo
  echo -e "User Information:"
  echo -e "Username: $username"
  echo -e "PUID: $puid"
  echo -e "PGID: $pgid"
  echo
  echo -e "Docker Information:"
  echo -e "$docker_version"
  echo -e "$docker_compose_version"
  echo
  echo "======================================================================"
}

# Validate GitHub repository link format (expected: 'owner:repo')
validate_repo_name() {
  if [[ ! "$1" =~ ^[a-zA-Z0-9._-]+:[a-zA-Z0-9._-]+$ ]]; then
    echo "Error: Invalid GitHub repository format. Use 'owner:repo' (e.g., 'Ionys320:master')."
    # Exit if validation fails
    exit 1
  fi
}

# Run Uptime-Kuma container for version 2 without persistent storage.
version_2() {
  echo "Running Uptime-Kuma version 2.x.x..."

  # Check if the container is already running.
  if [ "$(docker ps -q -f name=uptime-kuma-pr-test-v2)" ]; then
    echo "Error: The container 'uptime-kuma-pr-test-v2' is already running."
    # Exit if the container is already running
    exit 1
  fi

  # Execute the Docker run command with necessary environment variables and options.
  docker run \
    --env RUN_LOCAL=true \
    --env UPTIME_KUMA_GH_REPO="$pr_repo_name" \
    --env PUID="$puid" \
    --env PGID="$pgid" \
    --name uptime-kuma-pr-test-v2 \
    --pull=always \
    --rm \
    --publish "$port_app:3000/tcp" \
    --publish "$port_api:3001/tcp" \
    --security-opt no-new-privileges:true \
    --interactive \
    --tty \
    louislam/uptime-kuma:pr-test2 || {
    echo
    echo "Exiting container. Goodbye! Use CTRL+C to terminate."
    # Exit if the command fails or was terminated using CTRL+C
    exit 1
  }
}

# Run Uptime-Kuma container for version 2 with persistent storage.
version_2_persistent_storage() {
  echo "Running Uptime-Kuma version 2.x.x with persistent storage..."

  # Check if the container is already running.
  if [ "$(docker ps -q -f name=uptime-kuma-pr-test-v2)" ]; then
    echo "Error: The container 'uptime-kuma-pr-test-v2' is already running."
    # Exit if the container is already running
    exit 1
  fi

  # Execute the Docker run command with necessary environment variables, options, and volume mapping for persistence.
  docker run \
    --env RUN_LOCAL=true \
    --env UPTIME_KUMA_GH_REPO="$pr_repo_name" \
    --env PUID="$puid" \
    --env PGID="$pgid" \
    --name uptime-kuma-pr-test-v2 \
    --pull=always \
    --rm \
    --publish "$port_app:3000/tcp" \
    --publish "$port_api:3001/tcp" \
    --security-opt no-new-privileges:true \
    --interactive \
    --tty \
    --volume uptime-kuma-pr-test-v2:/app/data \
    louislam/uptime-kuma:pr-test2 || {
    echo
    echo "Exiting container. Goodbye! Use CTRL+C to terminate."
    # Exit if the command fails or was terminated using CTRL+C
    exit 1
  }
}

# Remove unused Docker images to free up disk space.
cleanup_dangling_images() {
  echo "Removing unused Docker images to free up storage..."
  # Prune dangling images to recover disk space.
  docker image prune --filter "dangling=true" -f || {
    echo "Error: Failed to prune Docker images. Please check your Docker setup."
    # Exit if the command fails
    exit 1
  }
}

# Main execution starts here.

# Main menu loop for user interaction.
while true; do
  clear
  echo "======================================================================"
  echo "         Welcome to the Uptime-Kuma Pull Request Testing Tool         "
  echo "======================================================================"
  echo
  echo "Please choose an option:"
  echo
  echo "   1. Uptime-Kuma Pull Request version: 2.x.x"
  echo "   2. Uptime-Kuma Pull Request version: 2.x.x with Persistent Storage"
  echo
  echo "   q: quit   h: help   i: info"
  echo "======================================================================"
  echo
  read -r -p "Please select an option (1, 2, h, i, or q to exit): " choice

  case $choice in
  1)
    selected_option="version_2"
    break
    ;;
  2)
    selected_option="version_2_persistent_storage"
    break
    ;;
  h | --help)
    # Show help message
    display_help
    echo
    # Wait for user input
    read -n 1 -s -r -p "Press any key to continue..."
    continue
    ;;
  i | --info)
    # Show system info
    display_system_info
    echo
    # Wait for user input
    read -n 1 -s -r -p "Press any key to continue..."
    continue
    ;;
  q | --quit)
    echo
    echo "Exiting the script. Goodbye!"
    echo
    echo "Thank you for using the Uptime-Kuma Pull Request Testing Tool."
    # Exiting the script
    exit 0
    ;;
  *)
    echo "Error: Invalid option. Please try again." # Error message for invalid option
    ;;
  esac
done

# Prompt for the GitHub repository link and validate the format
read -r -p "Please enter the GitHub repository link here (e.g., Ionys320:master): " pr_repo_name
validate_repo_name "$pr_repo_name"

# Execute the selected Docker run command
$selected_option

# Clean up dangling Docker images
cleanup_dangling_images

📝 Relevant log output:

npm WARN deprecated @playwright/test@1.39.0: Please update to the latest version of Playwright to test up-to-date browsers.

added 306 packages, removed 614 packages, changed 371 packages, and audited 1284 packages in 23s

246 packages are looking for funding
  run `npm fund` for details

4 high severity vulnerabilities

Some issues need review, and may require choosing
a different dependency.

Run `npm audit` for details.

The CJS build of Vite's Node API is deprecated. See https://vitejs.dev/guide/troubleshooting.html#vite-cjs-node-api-deprecated for more details.

  VITE v5.2.14  ready in 262 ms

  ➜  Local:   http://localhost:3000/
  ➜  Network: http://172.17.0.2:3000/
  ➜  Vue DevTools: Open http://localhost:3000/__devtools__/ as a separate window
  ➜  Vue DevTools: Press Alt(⌥)+Shift(⇧)+D in App to toggle the Vue DevTools

  ➜  press h + enter to show help

@homelab-alpha homelab-alpha mentioned this pull request Dec 16, 2024
1 task
@derekoharrow

Sorry, the problem is still present in 2.0.0-beta.1.

@Ionys320
Contributor Author

@derekoharrow Works fine for me. Can you provide more info, especially the DB used for your instance?

@homelab-alpha
Contributor

@Ionys320, the conversation continued in...

@derekoharrow

@derekoharrow Works fine for me. Can you provide more info, especially the DB used for your instance?

It worked on most monitors, but on a couple of existing ones with multiple tags (with no values in them) the tags wouldn't delete.

I'm running MariaDB.

I've managed to get around it by cloning the monitor and deleting the tags before saving.

@homelab-alpha
Contributor

@derekoharrow, let's keep this conversation in one place, please; in this case...

Thank you in advance for your cooperation.

Labels
area:dashboard The main dashboard page where monitors' status are shown
Projects
None yet
Development

Successfully merging this pull request may close these issues.

Bug in Tag Removal for Uptime-Kuma Version 2.0.0-beta.0
v2 tag values missing in UI
5 participants