
Route ConfigMaps are not removed after workspace removal #21676

Closed
Tracked by #21705
ibuziuk opened this issue Aug 31, 2022 · 3 comments
Labels
- area/che-operator: Issues and PRs related to Eclipse Che Kubernetes Operator
- kind/bug: Outline of a bug - must adhere to the bug report template.
- severity/P1: Has a major impact to usage or development of the system.
- sprint/current
Milestone: 7.55
Comments

ibuziuk (Member) commented Aug 31, 2022

Describe the bug

The Che Operator creates a workspace<>-route ConfigMap for every workspace in order to support the single-host strategy. However, those ConfigMaps are not removed when the workspace is removed.
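For reference, the per-workspace route ConfigMaps can be listed with the label selector used by the cleanup script later in this thread; this is only a sketch, and the eclipse-che namespace is an example value:

# List the gateway route ConfigMaps created for workspaces, together with the
# DevWorkspace id label each one carries (the namespace is an example).
oc get configmap -n eclipse-che \
    -l app.kubernetes.io/part-of=che.eclipse.org,app.kubernetes.io/component=gateway-config \
    -L controller.devfile.io/devworkspace_id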

Che version

7.53@latest

Steps to reproduce

  1. Create and start a workspace
  2. In the namespace where the CheCluster is created, see that a new route ConfigMap has been created
  3. Remove the workspace
  4. ERROR: the ConfigMap is still there after the workspace removal (see the check sketched after this list)
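A quick way to confirm step 4 is to query for the route ConfigMap of the removed workspace. This is only a sketch; both the namespace and the DevWorkspace id below are placeholders:

# After the workspace is removed, this should return nothing.
# <devworkspace-id> and eclipse-che are placeholders.
oc get configmap -n eclipse-che \
    -l app.kubernetes.io/component=gateway-config,controller.devfile.io/devworkspace_id=<devworkspace-id>
# Any ConfigMap still listed here reproduces the bug.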

Expected behavior

The route ConfigMap is deleted together with the workspace
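One way to investigate why the ConfigMap survives the workspace is to check whether it carries an ownerReference pointing at the DevWorkspace; if it did, Kubernetes garbage collection would remove it automatically. This is only a diagnostic sketch and does not claim how the operator implements (or should implement) the cleanup; the names are placeholders:

# Print the ownerReferences of a route ConfigMap (empty output means no owner is set).
# <cm-name> and eclipse-che are placeholders.
oc get configmap <cm-name> -n eclipse-che -o jsonpath='{.metadata.ownerReferences}'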

Runtime

OpenShift

Screenshots

No response

Installation method

OperatorHub

Environment

Linux

Eclipse Che Logs

No response

Additional context

This issue could be critical for long-running instances where thousands of workspaces are created
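On such an instance, a rough sense of the accumulation can be had by comparing the number of gateway route ConfigMaps with the number of existing DevWorkspaces. This sketch reuses the labels from the cleanup script below; the namespace is an example:

# Count gateway route ConfigMaps in the Che namespace ...
oc get configmap -n eclipse-che --no-headers \
    -l app.kubernetes.io/part-of=che.eclipse.org,app.kubernetes.io/component=gateway-config | wc -l
# ... and compare with the number of DevWorkspaces across the cluster.
oc get devworkspace -A --no-headers | wc -l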

ibuziuk added the kind/bug, sprint/next, severity/P1, and area/che-operator labels on Aug 31, 2022
tolusha mentioned this issue on Sep 16, 2022
ibuziuk (Member, Author) commented Oct 3, 2022

Looks like the issue is not reproducible anymore on dogfooding; after workspace removal the route ConfigMap is deleted correctly. Production clusters have been analyzed, and only two orphaned / abandoned ConfigMaps were found at this point.

tolusha (Contributor) commented Oct 3, 2022

Since it is not possible to reproduce the issue, I suggest running the script once to clean up abandoned ConfigMaps.

#!/bin/bash

set -e

usage () {
    echo "Deletes abandoned workspace ConfigMaps."
    echo
    echo "Usage:   $0 [--namespace NAMESPACE] [--delete] [--dump]"
    echo
    echo "OPTIONS:"
    echo -e "\t-n,--namespace           Kubernetes namespace where Eclipse Che is deployed"
    echo -e "\t--delete                 [default: no] Delete abandoned ConfigMaps"
    echo -e "\t--dump                   [default: no] Dump each ConfigMap to a YAML file before deletion"
    echo
    echo "Example: $0 --namespace eclipse-che"
    echo "Example: $0 --namespace eclipse-che --delete --dump"
}

unset NAMESPACE
unset DELETE
unset DUMP

while [[ "$#" -gt 0 ]]; do
  case $1 in
    '--namespace'|'-n') NAMESPACE="$2"; shift 1;;
    '--delete') DELETE="true";;
    '--dump') DUMP="true";;
    '--help'|'-h') usage; exit;;
  esac
  shift 1
done

if [[ ! ${NAMESPACE} ]]; then usage; exit 1; fi

dumpAndDeleteConfigMap() {
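    # Optionally dump the ConfigMap to ${CM_NAME}.yaml in the current directory, then delete it.
    # CM_NAME is set by the caller; NAMESPACE comes from the command line.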
    if [[ ${DUMP} == "true" ]]; then
        oc get configmap "${CM_NAME}" -n "${NAMESPACE}" -o yaml > "${CM_NAME}.yaml"
    fi
    oc delete configmap "${CM_NAME}" -n "${NAMESPACE}"
}

run() {
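    # Collect the ids of all existing DevWorkspaces (cluster-wide) and the names of the
    # gateway route ConfigMaps in the given namespace; any ConfigMap whose devworkspace_id
    # label does not match an existing DevWorkspace is reported or deleted.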
    DEVWORKSPACES_IDS=(
        $(oc get devworkspace -A --no-headers=true -o custom-columns=DW_ID:status.devworkspaceId)
    )
    WORKSPACE_CONFIG_MAPS=(
         $(oc get configmap -n "${NAMESPACE}" --no-headers \
        -o custom-columns=NAME:metadata.name \
        -l app.kubernetes.io/part-of=che.eclipse.org,app.kubernetes.io/component=gateway-config)
    )

    for CM_NAME in "${WORKSPACE_CONFIG_MAPS[@]}"; do
        CM_ITEM=($(oc get configmap "${CM_NAME}" -n "${NAMESPACE}" --no-headers -o custom-columns=CREATION_TIMESTAMP:metadata.creationTimestamp,DW_ID:metadata.labels."controller\.devfile\.io/devworkspace_id"))
        CM_CREATION_TIME=${CM_ITEM[0]}
        CM_DW_ID=${CM_ITEM[1]}

        if [[ -z ${CM_DW_ID} || ${CM_DW_ID} == "<none>" ]]; then  # custom-columns prints "<none>" when the label is missing
            echo "Abandoned Gateway workspace ConfigMap without devWorkspaceId: ${CM_NAME}, created: ${CM_CREATION_TIME}"
        elif [[ ! ${DEVWORKSPACES_IDS[*]}  =~ ${CM_DW_ID} ]]; then
            if [[ ${DELETE} == "true" ]]; then
                dumpAndDeleteConfigMap
            else
                echo "Abandoned Gateway workspace ConfigMap: ${CM_NAME}, devWorkspaceId: ${CM_DW_ID}, created: ${CM_CREATION_TIME}"
            fi
        fi
    done
    
    # Second pass: any che.eclipse.org ConfigMap that still carries a che.workspace_id
    # label is treated as an abandoned Che workspace ConfigMap.
    WORKSPACE_CONFIG_MAPS=(
        $(oc get configmap -n "${NAMESPACE}" --no-headers -o custom-columns=NAME:metadata.name -l app.kubernetes.io/part-of=che.eclipse.org)
    )

    for CM_NAME in "${WORKSPACE_CONFIG_MAPS[@]}"; do
        WS_ID=$(oc get configmap "${CM_NAME}" -n "${NAMESPACE}" -o=jsonpath='{.metadata.labels.che\.workspace_id}')
        CM_CREATION_TIME=$(oc get configmap "${CM_NAME}" -n "${NAMESPACE}" -o=jsonpath='{.metadata.creationTimestamp}')

        if [[ ${WS_ID} ]]; then
            if [[ ${DELETE} == "true" ]]; then
                dumpAndDeleteConfigMap
            else
                echo "Abandoned Che workspace ConfigMap: ${CM_NAME}, created: ${CM_CREATION_TIME}"
            fi
        fi
    done
}

run
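
A reasonable way to use it is a dry run first, then a second run with --delete (and optionally --dump) once the reported ConfigMaps look right; the script file name below is just a placeholder:

# Dry run: only report abandoned ConfigMaps (cleanup-configmaps.sh is a placeholder name)
./cleanup-configmaps.sh --namespace eclipse-che
# Delete them, saving each ConfigMap to a YAML file first
./cleanup-configmaps.sh --namespace eclipse-che --delete --dump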

tolusha (Contributor) commented Oct 4, 2022

Closed as can't reproduce.
Abandoned ConfigMaps can be deleted by running the script in #21676 (comment)

tolusha closed this as completed on Oct 4, 2022
tolusha added this to the 7.55 milestone on Oct 4, 2022