
unnecessary msg "Error: exit status 1" when a Tanzu CLI command failed #44

Closed
iancoffey opened this issue Jul 6, 2021 · 1 comment · Fixed by #3103

Comments

@iancoffey
Contributor

Bug description

The following tanzu command prints Error: accepts 1 arg(s), received 0 and the help content, but also Error: exit status 1 at the end. There seems to be no need to print Error: exit status 1. I also noticed that tanzu management-cluster create ... prints the help content when its execution fails.

$ tanzu cluster kubeconfig get
Error: accepts 1 arg(s), received 0
Get kubeconfig of a cluster and merge the context into the default kubeconfig file

Usage:
  tanzu cluster kubeconfig get CLUSTER_NAME [flags]

Examples:

    # Get workload cluster kubeconfig
    tanzu cluster kubeconfig get CLUSTER_NAME

    # Get workload cluster admin kubeconfig
    tanzu cluster kubeconfig get CLUSTER_NAME --admin

Flags:
      --admin                Get admin kubeconfig of the workload cluster
      --export-file string   File path to export a standalone kubeconfig for workload cluster
  -h, --help                 help for get
  -n, --namespace string     The namespace where the workload cluster was created. Assumes 'default' if not specified.

Global Flags:
      --log-file string   Log file path
  -v, --verbose int32     Number for the log level verbosity(0-9)

Error: exit status 1
✖  exit status 1

For example, 'kubectl get' doesn't print the exit status.

$ kubectl get 
You must specify the type of resource to get. Use "kubectl api-resources" for a complete list of supported resources.

error: Required resource not specified.
Use "kubectl explain <resource>" for a detailed description of that resource (e.g. kubectl explain pods).
See 'kubectl get -h' for help and examples

Affected product area (please put an X in all that apply)

[ ] APIs
[ ] Addons
[X] CLI
[ ] Docs
[ ] Installation
[ ] Plugin
[ ] Security
[ ] Test and Release
[ ] User Experience

Expected behavior

Steps to reproduce the bug

Version (include the SHA if the version is not obvious)

$ tanzu version
version: v1.3.0-rc.2
buildDate: 2021-02-26
sha: bc3e781-dirty

Environment where the bug was observed (cloud, OS, etc)

Relevant Debug Output (Logs, manifests, etc)

Collated Context

Context from 2021-03-10 07:23:20
User: jessehu
Here is the log from when tanzu management-cluster create ... failed; it prints the usage content after the detailed error message. Printing the usage content is not expected in this command-failure case.

[2021-03-09T16:53:50.149Z] Success waiting on all providers.
[2021-03-09T16:53:50.149Z] Start creating management cluster...
[2021-03-09T16:53:50.149Z] 
[2021-03-09T16:53:50.149Z] 
[2021-03-09T16:53:50.149Z] Failure while deploying management cluster, Here are some steps to investigate the cause:
[2021-03-09T16:53:50.149Z] 
[2021-03-09T16:53:50.149Z] Debug:
[2021-03-09T16:53:50.149Z]     kubectl get po,deploy,cluster,kubeadmcontrolplane,machine,machinedeployment -A --kubeconfig /home/capv/.kube-tkg/tmp/config_L5etLzc8
[2021-03-09T16:53:50.149Z]     kubectl logs deployment.apps/<deployment-name> -n <deployment-namespace> manager --kubeconfig /home/capv/.kube-tkg/tmp/config_L5etLzc8
[2021-03-09T16:53:50.149Z] 
[2021-03-09T16:53:50.149Z] To clean up the resources created by the management cluster:
[2021-03-09T16:53:50.149Z] 	  tanzu management-cluster delete
[2021-03-09T16:53:50.149Z] Error: unable to set up management cluster: unable to wait for cluster and get the cluster kubeconfig: error waiting for cluster to be provisioned (this may take a few minutes): cluster creation failed, reason:'VMProvisionFailed @ Machine/tkgextensions-bee09a7c0b33-control-plane-xmh5k', message:'1 of 2 completed'

[2021-03-09T16:53:50.149Z] Usage:
[2021-03-09T16:53:50.149Z]   tanzu management-cluster create [flags]
[2021-03-09T16:53:50.149Z] 
[2021-03-09T16:53:50.149Z] Examples:
[2021-03-09T16:53:50.149Z]   
[2021-03-09T16:53:50.149Z]     # Create a management cluster on AWS infrastructure, initializing it with
[2021-03-09T16:53:50.149Z]     # components required to create workload clusters through it on the same infrastructure
[2021-03-09T16:53:50.149Z]     # by bootstrapping through a self-provisioned bootstrap cluster.
[2021-03-09T16:53:50.149Z]     tanzu management-cluster create --file ~/clusterconfigs/aws-mc-1.yaml
[2021-03-09T16:53:50.149Z]     # Launch an interactive UI to configure the settings necessary to create a
[2021-03-09T16:53:50.149Z]     # management cluster
[2021-03-09T16:53:50.149Z]     tanzu management-cluster create --ui
[2021-03-09T16:53:50.149Z]     # Create a management cluster on vSphere infrastructure by using an existing
[2021-03-09T16:53:50.149Z]     # bootstrapper cluster. The current kube context should point to that
[2021-03-09T16:53:50.149Z]     # of the existing bootstrap cluster.
[2021-03-09T16:53:50.149Z]     tanzu management-cluster create --use-existing-bootstrap-cluster --file vsphere-mc-1.yaml
[2021-03-09T16:53:50.149Z] 
[2021-03-09T16:53:50.149Z] Flags:
[2021-03-09T16:53:50.149Z]   -b, --bind string                      Specify the IP and port to bind the Kickstart UI against (e.g. 127.0.0.1:8080). (default "127.0.0.1:8080")
[2021-03-09T16:53:50.149Z]       --browser string                   Specify the browser to open the Kickstart UI on. Use 'none' for no browser. Defaults to OS default browser. Supported: ['chrome', 'firefox', 'safari', 'ie', 'edge', 'none']
[2021-03-09T16:53:50.149Z]   -f, --file string                      Configuration file from which to create a management cluster
[2021-03-09T16:53:50.149Z]   -h, --help                             help for create
[2021-03-09T16:53:50.149Z]   -t, --timeout duration                 Time duration to wait for an operation before timeout. Timeout duration in hours(h)/minutes(m)/seconds(s) units or as some combination of them (e.g. 2h, 30m, 2h30m10s) (default 30m0s)
[2021-03-09T16:53:50.149Z]   -u, --ui                               Launch interactive management cluster provisioning UI
[2021-03-09T16:53:50.149Z]   -e, --use-existing-bootstrap-cluster   Use an existing bootstrap cluster to deploy the management cluster
[2021-03-09T16:53:50.149Z]   -y, --yes                              Create management cluster without asking for confirmation
[2021-03-09T16:53:50.149Z] 
[2021-03-09T16:53:50.149Z] Global Flags:
[2021-03-09T16:53:50.149Z]         --log-file string   Log file path
[2021-03-09T16:53:50.149Z]   -v, --verbose int32     Number for the log level verbosity(0-9)
[2021-03-09T16:53:50.150Z] 
[2021-03-09T16:53:50.150Z] Error: exit status 1
[2021-03-09T16:53:50.150Z] ✖  exit status 1 
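
A plausible explanation for the usage dump above (a sketch for illustration, not taken from the issue): by default Cobra prints a command's usage text whenever RunE returns an error, so a runtime deployment failure gets the same treatment as a flag-parsing mistake. A minimal Go example:

    package main

    import (
        "errors"

        "github.com/spf13/cobra"
    )

    func main() {
        cmd := &cobra.Command{
            Use: "create",
            RunE: func(cmd *cobra.Command, args []string) error {
                // A runtime failure, like the deployment error in the log
                // above, is returned here; by default Cobra prints
                // "Error: ..." followed by the full Usage section.
                return errors.New("unable to set up management cluster")
            },
        }
        cmd.Execute() // prints the error, then the usage text
    }

Setting SilenceUsage to true suppresses that dump; a common workaround is to flip it inside RunE once argument parsing has succeeded, so usage still appears for genuine usage errors.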
@marckhouzam
Contributor

I looked into this and the printing of Error: exit status 1 is caused by the handling of plugins in the tanzu cli. The exit status reported is the exit code returned by the plugin that was executed. For example:

$ tanzu cluster kubeconfig get
Error: accepts 1 arg(s), received 0       # This is the error printed by the plugin binary itself
Error: exit status 1                      # This is the error printed by Cobra through the CLI command that calls the plugin

✖  exit status 1                          # This is the same error that Cobra printed, but printed by the tanzu cli itself
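
To make the mechanism concrete, here is a minimal sketch of a Cobra command that shells out to a plugin binary (not the actual tanzu-framework code; the plugin name here is hypothetical). When the plugin exits non-zero, Run returns an *exec.ExitError whose message is literally "exit status 1", and returning it to Cobra produces the second error line:

    package main

    import (
        "os"
        "os/exec"

        "github.com/spf13/cobra"
    )

    func main() {
        root := &cobra.Command{Use: "tanzu", SilenceUsage: true}
        root.AddCommand(&cobra.Command{
            Use:                "cluster",
            DisableFlagParsing: true, // pass everything through to the plugin
            RunE: func(cmd *cobra.Command, args []string) error {
                plugin := exec.Command("tanzu-cluster", args...) // hypothetical plugin binary
                plugin.Stdout = os.Stdout
                plugin.Stderr = os.Stderr // the plugin prints its own "Error: accepts 1 arg(s), received 0"
                // On a non-zero exit, Run returns an *exec.ExitError whose
                // Error() string is "exit status 1"; handing it back to
                // Cobra makes it print "Error: exit status 1" on top of
                // the plugin's own message.
                return plugin.Run()
            },
        })
        if err := root.Execute(); err != nil {
            os.Exit(1)
        }
    }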

We can confirm this by running the plugin directly, where we see only a single error message:

$ ~/.config/tanzu-plugins/distribution/darwin/arm64/cli/cluster/v0.26.0-dev/tanzu-cluster-darwin_arm64 kubeconfig get
Error: accepts 1 arg(s), received 0

We can also compare it with an error from a native command (not a plugin):

$ tanzu plugin delete
Error: must provide plugin name as positional argument

✖  must provide plugin name as positional argument

We notice that there is no exit code printed. However, we do see the same error message printed twice, which is caused by a slightly different problem.
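
The usual Cobra pattern for printing an error exactly once (a sketch of one possible approach, not necessarily what the PR will do) is to silence Cobra's own printing and let the CLI's entry point own the final message:

    package main

    import (
        "fmt"
        "os"

        "github.com/spf13/cobra"
    )

    func main() {
        root := &cobra.Command{
            Use:           "tanzu",
            SilenceErrors: true, // stop Cobra from printing "Error: ..." itself
            SilenceUsage:  true, // stop Cobra from dumping usage on runtime errors
        }
        if err := root.Execute(); err != nil {
            fmt.Fprintf(os.Stderr, "✖  %v\n", err) // single, CLI-owned error line
            os.Exit(1)
        }
    }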

I will be posting a PR to address this.
