
feat(pingcap/tiflow): tiflow split release 8.2 jobs #3008

Merged

Conversation

@purelind (Collaborator) commented Jun 25, 2024

User description

tiflow split release 8.2 jobs


PR Type

Enhancement, Tests


Description

  • Created new folders and pipeline jobs for pingcap/tiflow release 8.2.
  • Implemented pipeline scripts for various integration tests including Kafka, MySQL, Pulsar, Storage, DM compatibility, and DM integration.
  • Defined Kubernetes pod templates for the new pipeline jobs.
  • Updated kustomization to include the new presubmits for release 8.2.
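
All 25 job definitions follow the same Job DSL shape, varying only the job name and `scriptPath` (the pattern is visible in the suggestion diffs later in the review). A minimal hedged sketch, with the job name and script path as illustrative placeholders:

```groovy
// Hedged sketch of the shared Job DSL shape used by the new release-8.2
// job definitions. Job name and scriptPath below are illustrative.
pipelineJob('pingcap/tiflow/release-8.2/ghpr_verify') {
    logRotator {
        daysToKeep(30)
    }
    properties {
        githubProjectUrl('https://github.com/pingcap/tiflow')
    }
    definition {
        cpsScm {
            lightweight(true)
            // Each job points at its own pipeline script in this repo.
            scriptPath('pipelines/pingcap/tiflow/release-8.2/ghpr_verify.groovy')
            scm {
                git {
                    remote {
                        url('https://github.com/PingCAP-QE/ci.git')
                    }
                    branch('main')
                    extensions {
                        cloneOptions {
                            depth(1)
                            shallow(true)
                            timeout(5)
                        }
                    }
                }
            }
        }
    }
}
```

This uniformity is why the walkthrough entries below read so similarly: each file swaps in its own test name.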

Changes walkthrough 📝

Relevant files
Enhancement (25 files)
    aa_folder.groovy
    Create folder for `pingcap/tiflow` release 8.2 pipelines

    jobs/pingcap/tiflow/release-8.2/aa_folder.groovy

  • Created a new folder for pingcap/tiflow release 8.2 pipelines.
  • Added a description for the folder.
  • +3/-0
    ghpr_verify.groovy
    Define `ghpr_verify` pipeline job for release 8.2               

    jobs/pingcap/tiflow/release-8.2/ghpr_verify.groovy

  • Defined a new pipeline job for ghpr_verify.
  • Configured GitHub project URL and SCM settings.
  • Added parameters and log rotation settings.
  • +37/-0   
    pull_cdc_integration_kafka_test.groovy
    Define `pull_cdc_integration_kafka_test` pipeline job for release 8.2

    jobs/pingcap/tiflow/release-8.2/pull_cdc_integration_kafka_test.groovy

  • Defined a new pipeline job for pull_cdc_integration_kafka_test.
  • Configured GitHub project URL and SCM settings.
  • Added parameters and log rotation settings.
  • +37/-0   
    pull_cdc_integration_mysql_test.groovy
    Define `pull_cdc_integration_mysql_test` pipeline job for release 8.2

    jobs/pingcap/tiflow/release-8.2/pull_cdc_integration_mysql_test.groovy

  • Defined a new pipeline job for pull_cdc_integration_mysql_test.
  • Configured GitHub project URL and SCM settings.
  • Added parameters and log rotation settings.
  • +37/-0   
    pull_cdc_integration_pulsar_test.groovy
    Define pull_cdc_integration_pulsar_test pipeline job for release 8.2

    jobs/pingcap/tiflow/release-8.2/pull_cdc_integration_pulsar_test.groovy

  • Defined a new pipeline job for pull_cdc_integration_pulsar_test.
  • Configured GitHub project URL and SCM settings.
  • Added parameters and log rotation settings.
  • +37/-0   
    pull_cdc_integration_storage_test.groovy
    Define pull_cdc_integration_storage_test pipeline job for release 8.2

    jobs/pingcap/tiflow/release-8.2/pull_cdc_integration_storage_test.groovy

  • Defined a new pipeline job for pull_cdc_integration_storage_test.
  • Configured GitHub project URL and SCM settings.
  • Added parameters and log rotation settings.
  • +37/-0   
    pull_dm_compatibility_test.groovy
    Define `pull_dm_compatibility_test` pipeline job for release 8.2

    jobs/pingcap/tiflow/release-8.2/pull_dm_compatibility_test.groovy

  • Defined a new pipeline job for pull_dm_compatibility_test.
  • Configured GitHub project URL and SCM settings.
  • Added parameters and log rotation settings.
  • +37/-0   
    pull_dm_integration_test.groovy
    Define `pull_dm_integration_test` pipeline job for release 8.2

    jobs/pingcap/tiflow/release-8.2/pull_dm_integration_test.groovy

  • Defined a new pipeline job for pull_dm_integration_test.
  • Configured GitHub project URL and SCM settings.
  • Added parameters and log rotation settings.
  • +37/-0   
    pull_engine_integration_test.groovy
    Define `pull_engine_integration_test` pipeline job for release 8.2

    jobs/pingcap/tiflow/release-8.2/pull_engine_integration_test.groovy

  • Defined a new pipeline job for pull_engine_integration_test.
  • Configured GitHub project URL and SCM settings.
  • Added parameters and log rotation settings.
  • +37/-0   
    ghpr_verify.groovy
    Implement `ghpr_verify` pipeline script for release 8.2   

    pipelines/pingcap/tiflow/release-8.2/ghpr_verify.groovy

  • Implemented the ghpr_verify pipeline script.
  • Configured Kubernetes agent, environment variables, and stages.
  • Added stages for debug info, checkout, and tests.
  • +114/-0 
    pull_cdc_integration_kafka_test.groovy
    Implement pull_cdc_integration_kafka_test pipeline script for release
    8.2

    pipelines/pingcap/tiflow/release-8.2/pull_cdc_integration_kafka_test.groovy

  • Implemented the pull_cdc_integration_kafka_test pipeline script.
  • Configured Kubernetes agent, environment variables, and stages.
  • Added stages for debug info, checkout, preparation, and tests.
  • +186/-0 
    pull_cdc_integration_mysql_test.groovy
    Implement pull_cdc_integration_mysql_test pipeline script for release
    8.2

    pipelines/pingcap/tiflow/release-8.2/pull_cdc_integration_mysql_test.groovy

  • Implemented the pull_cdc_integration_mysql_test pipeline script.
  • Configured Kubernetes agent, environment variables, and stages.
  • Added stages for debug info, checkout, preparation, and tests.
  • +174/-0 
    pull_cdc_integration_pulsar_test.groovy
    Implement pull_cdc_integration_pulsar_test pipeline script for release
    8.2

    pipelines/pingcap/tiflow/release-8.2/pull_cdc_integration_pulsar_test.groovy

  • Implemented the pull_cdc_integration_pulsar_test pipeline script.
  • Configured Kubernetes agent, environment variables, and stages.
  • Added stages for debug info, checkout, preparation, and tests.
  • +169/-0 
    pull_cdc_integration_storage_test.groovy
    Implement pull_cdc_integration_storage_test pipeline script for
    release 8.2

    pipelines/pingcap/tiflow/release-8.2/pull_cdc_integration_storage_test.groovy

  • Implemented the pull_cdc_integration_storage_test pipeline script.
  • Configured Kubernetes agent, environment variables, and stages.
  • Added stages for debug info, checkout, preparation, and tests.
  • +174/-0 
    pull_dm_compatibility_test.groovy
    Implement pull_dm_compatibility_test pipeline script for release 8.2

    pipelines/pingcap/tiflow/release-8.2/pull_dm_compatibility_test.groovy

  • Implemented the pull_dm_compatibility_test pipeline script.
  • Configured Kubernetes agent, environment variables, and stages.
  • Added stages for debug info, checkout, preparation, and tests.
  • +155/-0 
    pull_dm_integration_test.groovy
    Implement `pull_dm_integration_test` pipeline script for release 8.2

    pipelines/pingcap/tiflow/release-8.2/pull_dm_integration_test.groovy

  • Implemented the pull_dm_integration_test pipeline script.
  • Configured Kubernetes agent, environment variables, and stages.
  • Added stages for debug info, checkout, preparation, and tests.
  • +212/-0 
    pull_engine_integration_test.groovy
    Implement pull_engine_integration_test pipeline script for release 8.2

    pipelines/pingcap/tiflow/release-8.2/pull_engine_integration_test.groovy

  • Implemented the pull_engine_integration_test pipeline script.
  • Configured Kubernetes agent, environment variables, and stages.
  • Added stages for debug info, checkout, preparation, and tests.
  • +220/-0 
    pod-ghpr_verify.yaml
    Define Kubernetes pod template for `ghpr_verify`                 

    pipelines/pingcap/tiflow/release-8.2/pod-ghpr_verify.yaml

  • Defined Kubernetes pod template for ghpr_verify.
  • Configured containers and volume mounts.
  • Set resource limits and requests.
  • +44/-0   
    pod-pull_cdc_integration_kafka_test.yaml
    Define Kubernetes pod template for `pull_cdc_integration_kafka_test`

    pipelines/pingcap/tiflow/release-8.2/pod-pull_cdc_integration_kafka_test.yaml

  • Defined Kubernetes pod template for pull_cdc_integration_kafka_test.
  • Configured containers and volume mounts.
  • Set resource limits and requests.
  • +160/-0 
    pod-pull_cdc_integration_mysql_test.yaml
    Define Kubernetes pod template for `pull_cdc_integration_mysql_test`

    pipelines/pingcap/tiflow/release-8.2/pod-pull_cdc_integration_mysql_test.yaml

  • Defined Kubernetes pod template for pull_cdc_integration_mysql_test.
  • Configured containers and volume mounts.
  • Set resource limits and requests.
  • +32/-0   
    pod-pull_cdc_integration_pulsar_test.yaml
    Define Kubernetes pod template for `pull_cdc_integration_pulsar_test`

    pipelines/pingcap/tiflow/release-8.2/pod-pull_cdc_integration_pulsar_test.yaml

  • Defined Kubernetes pod template for pull_cdc_integration_pulsar_test.
  • Configured containers and volume mounts.
  • Set resource limits and requests.
  • +32/-0   
    pod-pull_cdc_integration_storage_test.yaml
    Define Kubernetes pod template for pull_cdc_integration_storage_test

    pipelines/pingcap/tiflow/release-8.2/pod-pull_cdc_integration_storage_test.yaml

  • Defined Kubernetes pod template for pull_cdc_integration_storage_test.
  • Configured containers and volume mounts.
  • Set resource limits and requests.
  • +32/-0   
    pod-pull_dm_compatibility_test.yaml
    Define Kubernetes pod template for `pull_dm_compatibility_test`

    pipelines/pingcap/tiflow/release-8.2/pod-pull_dm_compatibility_test.yaml

  • Defined Kubernetes pod template for pull_dm_compatibility_test.
  • Configured containers and volume mounts.
  • Set resource limits and requests.
  • +70/-0   
    pod-pull_dm_integration_test.yaml
    Define Kubernetes pod template for `pull_dm_integration_test`

    pipelines/pingcap/tiflow/release-8.2/pod-pull_dm_integration_test.yaml

  • Defined Kubernetes pod template for pull_dm_integration_test.
  • Configured containers and volume mounts.
  • Set resource limits and requests.
  • +70/-0   
    pod-pull_engine_integration_test.yaml
    Define Kubernetes pod template for `pull_engine_integration_test`

    pipelines/pingcap/tiflow/release-8.2/pod-pull_engine_integration_test.yaml

  • Defined Kubernetes pod template for pull_engine_integration_test.
  • Configured containers and volume mounts.
  • Set resource limits and requests.
  • +80/-0   
Configuration changes (2 files)

    kustomization.yaml
    Update kustomization to include release 8.2 presubmits

    prow-jobs/kustomization.yaml

  • Added new entry for release-8.2-presubmits.yaml.
  • +22/-21

    release-8.2-presubmits.yaml
    Define presubmits for `pingcap/tiflow` release 8.2

    prow-jobs/pingcap/tiflow/release-8.2-presubmits.yaml

  • Defined presubmits for pingcap/tiflow release 8.2.
  • Configured triggers, contexts, and branches.
  • +97/-0

    💡 PR-Agent usage:
    Comment /help on the PR to get a list of all available PR-Agent tools and their descriptions


    ti-chi-bot bot commented Jun 25, 2024

    I Skip it since the diff size(106108 bytes > 80000 bytes) is too large


    PR-Agent was enabled for this repository. To continue using it, please link your git user with your CodiumAI identity here.

    PR Reviewer Guide 🔍

    ⏱️ Estimated effort to review [1-5]: 4
    🧪 Relevant tests: Yes
    🔒 Security concerns: No
    ⚡ Key issues to review:

    Possible Bug:
    The PR introduces multiple Jenkins pipeline configurations and scripts for various integration tests. It is crucial to ensure that these scripts handle error cases gracefully and clean up resources even when tests fail. This is particularly important in environments like Kubernetes, where resources can be limited and expensive.

    Performance Concern:
    The configurations specify resource limits and requests for containers in Kubernetes. It's important to validate that these settings are appropriate for the expected workload. Over-provisioning can lead to wasted resources, while under-provisioning can cause poor performance and timeouts.

    Hardcoded Values:
    There are several hardcoded values within the scripts (e.g., memory and CPU limits, specific version tags for Docker images). These should ideally be parameterized to make the pipelines more flexible and easier to update.
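
    On the hardcoded-values point, a hedged sketch of one way to hoist such literals into pipeline parameters; the parameter names and defaults below are illustrative, not taken from the PR:

    ```groovy
    // Hedged sketch: lifting hardcoded values (image tags, timeouts) into
    // parameters so a release branch can tune them without editing every
    // pipeline script. All names and defaults are illustrative.
    pipeline {
        agent none
        parameters {
            string(name: 'GOLANG_IMAGE', defaultValue: 'hub.pingcap.net/jenkins/golang:latest',
                   description: 'Build image; override here instead of editing the pipeline')
            string(name: 'TEST_TIMEOUT_MINUTES', defaultValue: '40',
                   description: 'Per-group test timeout in minutes')
        }
        stages {
            stage('Tests') {
                agent any
                steps {
                    // Downstream steps read params.* instead of literals.
                    echo "image=${params.GOLANG_IMAGE} timeout=${params.TEST_TIMEOUT_MINUTES}"
                }
            }
        }
    }
    ```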


    PR-Agent was enabled for this repository. To continue using it, please link your git user with your CodiumAI identity here.

    PR Code Suggestions ✨

    Best practice
    Ensure that logs are always collected after the tests, even if they are successful

    Add a post block to the stage('Tests') section to ensure that logs are always collected,
    even if the tests are successful.

    pipelines/pingcap/tiflow/release-8.2/pull_cdc_integration_pulsar_test.groovy [123-163]

     stage('Tests') {
         when { expression { !skipRemainingStages} }
         matrix {
             axes {
                 axis {
                     name 'TEST_GROUP'
                     values 'G00', 'G01', 'G02', 'G03', 'G04', 'G05', 'G06',  'G07', 'G08', 'G09',
                         'G10', 'G11', 'G12', 'G13', 'G14', 'G15', 'G16', 'G17'
                 }
             }
             agent{
                 kubernetes {
                     namespace K8S_NAMESPACE
                     yamlFile POD_TEMPLATE_FILE
                     defaultContainer 'golang'
                 }
             } 
             stages {
                 stage("Test") {
                     options { timeout(time: 40, unit: 'MINUTES') }
                     steps {
                         dir('tiflow') {
                             cache(path: "./", includes: '**/*', key: "ws/${BUILD_TAG}/tiflow-cdc") {
                                 sh label: "${TEST_GROUP}", script: """
                                     rm -rf /tmp/tidb_cdc_test && mkdir -p /tmp/tidb_cdc_test
                                     chmod +x ./tests/integration_tests/run_group.sh
                                     ./tests/integration_tests/run_group.sh pulsar ${TEST_GROUP}
                                 """
                             }
                         }
                     }
                     post {
    -                    failure {
    +                    always {
                             sh label: "collect logs", script: """
                                 ls /tmp/tidb_cdc_test/
                                 tar -cvzf log-${TEST_GROUP}.tar.gz \$(find /tmp/tidb_cdc_test/ -type f -name "*.log")    
                                 ls -alh  log-${TEST_GROUP}.tar.gz  
                             """
                             archiveArtifacts artifacts: "log-${TEST_GROUP}.tar.gz", fingerprint: true 
                         }
                     }
                 }
             }
         }        
     }
     
    Suggestion importance[1-10]: 8

    Why: Modifying the post block to collect logs regardless of test success or failure is crucial for debugging and should be implemented to capture all possible outcomes.

    Prevent multiple instances of the job from running simultaneously

    Add a disableConcurrentBuilds property to prevent multiple instances of the job from
    running simultaneously, which can help avoid potential conflicts and resource contention.

    jobs/pingcap/tiflow/release-8.2/pull_cdc_integration_storage_test.groovy [15-36]

    +disableConcurrentBuilds()
     definition {
         cpsScm {
             lightweight(true)
             scriptPath("pipelines/pingcap/tiflow/release-8.2/pull_cdc_integration_storage_test.groovy")
             scm {
                 git{
                     remote {
                         url('https://github.com/PingCAP-QE/ci.git')
                     }
                     branch('main')
                     extensions {
                         cloneOptions {
                             depth(1)
                             shallow(true)
                             timeout(5)
                         }
                     }
                 }
             }
         }
     }
     
    Suggestion importance[1-10]: 8

    Why: Adding disableConcurrentBuilds() is crucial for avoiding conflicts and resource contention, making it a significant improvement in job configuration.

    Add a post block to the Checkout stage to handle failures

    Add a post block to the Checkout stage to handle potential failures and ensure proper
    cleanup or notifications.

    pipelines/pingcap/tiflow/release-8.2/pull_engine_integration_test.groovy [70-83]

     stage('Checkout') {
         when { expression { !skipRemainingStages} }
         options { timeout(time: 10, unit: 'MINUTES') }
         steps {
             dir("tiflow") {
                 cache(path: "./", includes: '**/*', key: prow.getCacheKey('git', REFS), restoreKeys: prow.getRestoreKeys('git', REFS)) {
                     retry(2) {
                         script {
                             prow.checkoutRefs(REFS)
                         }
                     }
                 }
             }
         }
    +    post {
    +        failure {
    +            echo "Checkout stage failed"
    +            // Add any cleanup or notification logic here
    +        }
    +    }
     }
     
    Suggestion importance[1-10]: 7

    Why: Adding a post block for handling failures in the Checkout stage is a good practice for better error handling and cleanup, making the pipeline more robust.

    Add a post block to the prepare stage to handle failures

    Add a post block to the prepare stage to handle potential failures and ensure proper
    cleanup or notifications.

    pipelines/pingcap/tiflow/release-8.2/pull_dm_integration_test.groovy [83-129]

     stage("prepare") {
         when { expression { !skipRemainingStages} }
         options { timeout(time: 20, unit: 'MINUTES') }
         steps {
             dir("third_party_download") {
                 retry(2) {
                     sh label: "download third_party", script: """
                         cd ../tiflow && ./dm/tests/download-integration-test-binaries.sh ${REFS.base_ref} && ls -alh ./bin
                         cd - && mkdir -p bin && mv ../tiflow/bin/* ./bin/
                         ls -alh ./bin
                         ./bin/tidb-server -V
                         ./bin/pd-server -V
                         ./bin/tikv-server -V
                     """
                 }
             }
             dir("tiflow") {
                 cache(path: "./bin", includes: '**/*', key: prow.getCacheKey('binary', REFS, 'dm-integration-test')) {
                     // build dm-master.test for integration test
                    // only build binaries if they don't exist; use the cached binaries if they do
                    // TODO: how to update cached binaries if needed
                     sh label: "prepare", script: """
                         if [[ ! -f "bin/dm-master.test" || ! -f "bin/dm-test-tools/check_master_online" || ! -f "bin/dm-test-tools/check_worker_online" ]]; then
                             echo "Building binaries..."
                             make dm_integration_test_build
                             mkdir -p bin/dm-test-tools && cp -r ./dm/tests/bin/* ./bin/dm-test-tools
                         else
                             echo "Binaries already exist, skipping build..."
                         fi
                         ls -alh ./bin
                         ls -alh ./bin/dm-test-tools
                         which ./bin/dm-master.test
                         which ./bin/dm-syncer.test
                         which ./bin/dm-worker.test
                         which ./bin/dmctl.test
                         which ./bin/dm-test-tools/check_master_online
                         which ./bin/dm-test-tools/check_worker_online
                     """
                 }
                 cache(path: "./", includes: '**/*', key: "ws/${BUILD_TAG}/tiflow-dm") { 
                     sh label: "prepare", script: """
                         cp -r ../third_party_download/bin/* ./bin/
                         ls -alh ./bin
                         ls -alh ./bin/dm-test-tools
                     """
                 }
             }
         }
    +    post {
    +        failure {
    +            echo "Prepare stage failed"
    +            // Add any cleanup or notification logic here
    +        }
    +    }
     }
     
    Suggestion importance[1-10]: 7

    Why: Implementing a post block in the prepare stage for handling failures is a good practice, enhancing the robustness of the pipeline by ensuring proper cleanup and notifications on errors.

    Add a post block to handle cleanup tasks after the pipeline execution

    Consider adding a post block to the pipeline section to handle cleanup tasks, such as
    deleting temporary files or directories, regardless of the build result.

    pipelines/pingcap/tiflow/release-8.2/pull_cdc_integration_pulsar_test.groovy [14-45]

     pipeline {
         agent {
             kubernetes {
                 namespace K8S_NAMESPACE
                 yamlFile POD_TEMPLATE_FILE
                 defaultContainer 'golang'
             }
         }
         environment {
             FILE_SERVER_URL = 'http://fileserver.pingcap.net'
         }
         options {
             timeout(time: 60, unit: 'MINUTES')
             parallelsAlwaysFailFast()
         }
         stages {
             stage('Debug info') {
                 steps {
                     sh label: 'Debug info', script: """
                         printenv
                         echo "-------------------------"
                         go env
                         echo "-------------------------"
                         echo "debug command: kubectl -n ${K8S_NAMESPACE} exec -ti ${NODE_NAME} bash"
                     """
                     container(name: 'net-tool') {
                         sh 'dig github.com'
                         script {
                             prow.setPRDescription(REFS)
                         }
                     }
                 }
             }
    +    }
    +    post {
    +        always {
    +            sh 'rm -rf /tmp/tidb_cdc_test'
    +        }
    +    }
    +}
     
    Suggestion importance[1-10]: 7

    Why: Adding a post block for cleanup is a good practice to ensure no resources are left hanging, which can improve the maintainability of the pipeline.

    Add a post block to ensure cleanup of temporary files after the pipeline execution

    Add a post block to the pipeline section to ensure that temporary files or directories are
    cleaned up after the build, regardless of the result.

    pipelines/pingcap/tiflow/release-8.2/pull_dm_compatibility_test.groovy [14-45]

     pipeline {
         agent {
             kubernetes {
                 namespace K8S_NAMESPACE
                 yamlFile POD_TEMPLATE_FILE
                 defaultContainer 'golang'
             }
         }
         environment {
             FILE_SERVER_URL = 'http://fileserver.pingcap.net'
         }
         options {
             timeout(time: 60, unit: 'MINUTES')
             parallelsAlwaysFailFast()
         }
         stages {
             stage('Debug info') {
                 steps {
                     sh label: 'Debug info', script: """
                         printenv
                         echo "-------------------------"
                         go env
                         echo "-------------------------"
                         echo "debug command: kubectl -n ${K8S_NAMESPACE} exec -ti ${NODE_NAME} -c golang -- bash"
                     """
                     container(name: 'net-tool') {
                         sh 'dig github.com'
                         script {
                             prow.setPRDescription(REFS)
                         }
                     }
                 }
             }
    +    }
    +    post {
    +        always {
    +            sh 'rm -rf /tmp/dm_test'
    +        }
    +    }
    +}
     
    Suggestion importance[1-10]: 7

    Why: Similar to the first suggestion, adding a post block for cleanup in this pipeline script is beneficial for resource management and is a good practice.

    Use a variable for the path to the tiflow directory to avoid hardcoding paths

    To avoid potential issues with hardcoded paths, consider using a variable for the path to
    the tiflow directory. This will make the script more flexible and easier to update.

    pipelines/pingcap/tiflow/release-8.2/pull_cdc_integration_mysql_test.groovy [70-77]

    -dir("tiflow") {
    +def tiflowDir = "tiflow"
    +
    +dir(tiflowDir) {
         cache(path: "./", includes: '**/*', key: prow.getCacheKey('git', REFS), restoreKeys: prow.getRestoreKeys('git', REFS)) {
             retry(2) {
                 script {
                     prow.checkoutRefs(REFS)
                 }
             }
         }
     }
     
    Suggestion importance[1-10]: 6

    Why: Using a variable for directory paths enhances flexibility and maintainability, although this is a minor improvement.

    Possible issue
    Add a catch block to handle exceptions during the prow.checkoutRefs call for better error handling

    To improve error handling, consider adding a catch block to handle exceptions that may
    occur during the prow.checkoutRefs call. This will provide more informative error messages
    and help with debugging.

    pipelines/pingcap/tiflow/release-8.2/pull_cdc_integration_kafka_test.groovy [72-76]

     retry(2) {
         script {
    -        prow.checkoutRefs(REFS)
    +        try {
    +            prow.checkoutRefs(REFS)
    +        } catch (Exception e) {
    +            error "Failed to checkout refs: ${e.message}"
    +        }
         }
     }
     
    Suggestion importance[1-10]: 8

    Why: Adding error handling around critical operations like prow.checkoutRefs is crucial for robustness and easier debugging.

    Maintainability
    Extract repeated cache block into a reusable function to reduce code duplication

    To improve readability and maintainability, consider extracting the repeated cache block
    into a reusable function. This will reduce code duplication and make future updates
    easier.

    pipelines/pingcap/tiflow/release-8.2/pull_cdc_integration_kafka_test.groovy [71-77]

    -cache(path: "./", includes: '**/*', key: prow.getCacheKey('git', REFS), restoreKeys: prow.getRestoreKeys('git', REFS)) {
    -    retry(2) {
    -        script {
    -            prow.checkoutRefs(REFS)
    +def cacheAndCheckout(path, includes, key, restoreKeys, refs) {
    +    cache(path: path, includes: includes, key: key, restoreKeys: restoreKeys) {
    +        retry(2) {
    +            script {
    +                prow.checkoutRefs(refs)
    +            }
             }
         }
     }
     
    +cacheAndCheckout("./", '**/*', prow.getCacheKey('git', REFS), prow.getRestoreKeys('git', REFS), REFS)
    +
    Suggestion importance[1-10]: 7

    Why: The suggestion correctly identifies code duplication in caching logic and proposes a reusable function, improving maintainability.

    Extract the repeated Docker login command into a reusable function

    To improve readability and maintainability, extract the repeated Docker login command into
    a reusable function.

    pipelines/pingcap/tiflow/release-8.2/pull_dm_integration_test.groovy [208]

    -echo "$HARBOR_CRED_PSW" | docker login -u $HARBOR_CRED_USR --password-stdin hub.pingcap.net
    +def dockerLogin() {
    +    sh """
    +        echo "$HARBOR_CRED_PSW" | docker login -u $HARBOR_CRED_USR --password-stdin hub.pingcap.net
    +    """
    +}
    +...
    +dockerLogin()
     
    Suggestion importance[1-10]: 6

    Why: Extracting the Docker login command into a reusable function improves maintainability and reduces code duplication, which is beneficial for long-term code management.

    Enhancement
    Use a loop to define TEST_GROUP values to enhance readability and maintainability

    To enhance the readability and maintainability of the script, consider using a loop to
    define the TEST_GROUP values instead of listing them individually.

    pipelines/pingcap/tiflow/release-8.2/pull_cdc_integration_storage_test.groovy [130-131]

    -values 'G00', 'G01', 'G02', 'G03', 'G04', 'G05', 'G06',  'G07', 'G08', 'G09', 'G10', 'G11', 'G12', 'G13', 'G14', 'G15', 'G16', 'G17'
    +values (0..17).collect { "G${String.format('%02d', it)}" }
     
    Suggestion importance[1-10]: 7

    Why: The suggestion to use a loop for defining TEST_GROUP values simplifies the code and enhances readability, which is a good practice.

    Add a property to discard old builds based on the number of builds

    Consider adding a discardOldBuilds property to the logRotator block to ensure that old
    builds are properly discarded based on the number of builds as well as the number of days.

    jobs/pingcap/tiflow/release-8.2/pull_cdc_integration_storage_test.groovy [3-5]

     logRotator {
         daysToKeep(30)
    +    numToKeep(10)
     }
     
    Suggestion importance[1-10]: 7

    Why: Adding a numToKeep property to the logRotator block is a valid enhancement for better build management, though not critical.

    Add a trigger to periodically poll the SCM for changes

    Add a pollSCM trigger to periodically check for changes in the repository, ensuring the
    job is triggered automatically when there are updates.

    jobs/pingcap/tiflow/release-8.2/pull_cdc_integration_storage_test.groovy [12-14]

     properties {
         githubProjectUrl("https://github.com/pingcap/tiflow")
     }
    +triggers {
    +    pollSCM('H/15 * * * *')
    +}
     
    Suggestion importance[1-10]: 6

    Why: Adding a pollSCM trigger is a useful feature for automation, enhancing the job to automatically check for repository updates.

    Robustness
    Add a property to retry the checkout operation in case of transient errors

    Include a checkoutRetryCount property in the git block to specify the number of retry
    attempts for the checkout operation, improving robustness in case of transient errors.

    jobs/pingcap/tiflow/release-8.2/pull_cdc_integration_storage_test.groovy [21-34]

     git{
         remote {
             url('https://github.com/PingCAP-QE/ci.git')
         }
         branch('main')
    +    checkoutRetryCount(3)
         extensions {
             cloneOptions {
                 depth(1)
                 shallow(true)
                 timeout(5)
             }
         }
     }
     
    Suggestion importance[1-10]: 7

    Why: Including a checkoutRetryCount in the git block improves robustness by handling transient errors during checkout, which is a practical enhancement.



    ti-chi-bot bot commented Jun 25, 2024

    [APPROVALNOTIFIER] This PR is APPROVED

    This pull-request has been approved by: wuhuizuo

    The full list of commands accepted by this bot can be found here.

    The pull request process is described here

    Needs approval from an approver in each of these files:

    Approvers can indicate their approval by writing /approve in a comment
    Approvers can cancel approval by writing /approve cancel in a comment

    @ti-chi-bot added the lgtm label Jun 25, 2024

    ti-chi-bot bot commented Jun 25, 2024

    [LGTM Timeline notifier]

    Timeline:

    • 2024-06-25 08:17:05.178926498 +0000 UTC m=+707551.664415332: ☑️ agreed by wuhuizuo.

    @ti-chi-bot added the approved label Jun 25, 2024
    @ti-chi-bot merged commit 180bd9a into PingCAP-QE:main Jun 25, 2024
    3 checks passed