
tiup-clean --log add warning #1028

Closed
glkappe opened this issue Dec 30, 2020 · 1 comment · Fixed by #1029
Labels
type/feature-request Categorizes issue as related to a new feature.

Comments

glkappe (Contributor) commented Dec 30, 2020

Feature Request

Is your feature request related to a problem? Please describe:

The current `clean --log` implementation stops the instances and deletes `*.log` files, which can also be dangerous in an online (production) environment.
We need a warning that tells users about these risks.

[tidb@node4107 ~]$ tiup cluster clean qh --log
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.3.0/tiup-cluster clean qh --log
This operation will clean tidb v4.0.9 cluster qh's log.
Nodes will be ignored: []
Roles will be ignored: []
Files to be deleted are:
172.16.4.107:
/home/tidb/qh_new/deploy/grafana-43000/log/*.log
/home/tidb/qh_new/deploy/prometheus-49090/log/*.log
/home/tidb/qh_new/deploy/tiflash-19002/log/*.log
/home/tidb/qh_new/deploy/tidb-24000/log/*.log
/home/tidb/qh_new/deploy/tikv-40164/log/*.log
/home/tidb/qh_new/deploy/pd-12379/log/*.log
172.16.4.237:
/home/tidb/qh_new/deploy/tidb-24000/log/*.log
Do you want to continue? [y/N]: y
Cleanup cluster...

  • [ Serial ] - SSHKeySet: privateKey=/home/tidb/.tiup/storage/cluster/clusters/qh/ssh/id_rsa, publicKey=/home/tidb/.tiup/storage/cluster/clusters/qh/ssh/id_rsa.pub
  • [Parallel] - UserSSH: user=tidb, host=172.16.4.107
  • [Parallel] - UserSSH: user=tidb, host=172.16.4.237
  • [Parallel] - UserSSH: user=tidb, host=172.16.4.107
  • [Parallel] - UserSSH: user=tidb, host=172.16.4.107
  • [Parallel] - UserSSH: user=tidb, host=172.16.4.107
  • [Parallel] - UserSSH: user=tidb, host=172.16.4.107
  • [Parallel] - UserSSH: user=tidb, host=172.16.4.107
  • [ Serial ] - StopCluster
    Stopping component grafana
    Stopping instance 172.16.4.107
    Stop grafana 172.16.4.107:43000 success
    Stopping component prometheus
    Stopping instance 172.16.4.107
    Stop prometheus 172.16.4.107:49090 success
    Stopping component tiflash
    Stopping instance 172.16.4.107
    Stop tiflash 172.16.4.107:19002 success
    Stopping component tidb
    Stopping instance 172.16.4.237
    Stopping instance 172.16.4.107
    Stop tidb 172.16.4.237:24000 success
    Stop tidb 172.16.4.107:24000 success
    Stopping component node_exporter
    Stopping component blackbox_exporter
    Stopping component tikv
    Stopping instance 172.16.4.107
    Stop tikv 172.16.4.107:40164 success
    Stopping component pd
    Stopping instance 172.16.4.107
    Stop pd 172.16.4.107:12379 success
    Stopping component node_exporter
    Stopping component blackbox_exporter
  • [ Serial ] - CleanupCluster
    Cleanup instance 172.16.4.107
    Cleanup 172.16.4.107 success
    Cleanup instance 172.16.4.237
    Cleanup 172.16.4.237 success
    Cleanup cluster qh successfully

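For illustration, here is a minimal Go sketch of the kind of confirmation gate the issue asks for. The `confirmDangerousClean` helper and the type-the-cluster-name flow are assumptions made for this sketch, not necessarily what the actual fix in #1029 implemented:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// confirmDangerousClean is a hypothetical helper showing one way the warning
// could look: spell out that the cluster will be STOPPED and its logs deleted,
// and require the user to type the cluster name rather than just "y".
func confirmDangerousClean(clusterName string) bool {
	fmt.Printf("WARNING: `clean --log` will STOP cluster %s and delete its log files.\n", clusterName)
	fmt.Println("This is dangerous for an online cluster.")
	fmt.Print("Type the cluster name to continue: ")
	reader := bufio.NewReader(os.Stdin)
	input, _ := reader.ReadString('\n')
	return strings.TrimSpace(input) == clusterName
}

func main() {
	if !confirmDangerousClean("qh") {
		fmt.Println("Aborted.")
		os.Exit(1)
	}
	fmt.Println("Proceeding with cleanup...")
}
```

Requiring the cluster name to be typed back (rather than a single keystroke) is a common guard for destructive CLI operations.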
Describe the feature you'd like:

There may be a way to clean up the current and archived logs gracefully (some individual components may not have log rotation), as sketched below.
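As one possible interpretation of "gracefully", here is a hedged Go sketch: truncate the active log in place so a running component keeps a valid file handle, and delete only rotated archives. The `*.log` / `*.log.*` naming split is an assumption; real TiUP components may rotate differently:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cleanLogsGracefully truncates active logs in place (so a running component
// can keep writing through its open handle) and deletes only archived logs.
func cleanLogsGracefully(logDir string) error {
	active, err := filepath.Glob(filepath.Join(logDir, "*.log"))
	if err != nil {
		return err
	}
	for _, p := range active {
		// Truncate instead of unlink: a writer that did not open the file
		// with O_APPEND may leave a sparse gap at its old offset, but no
		// space is leaked to an orphaned, deleted inode.
		if err := os.Truncate(p, 0); err != nil {
			return err
		}
	}
	// Rotated/archived logs are not held open, so plain removal is safe.
	archived, err := filepath.Glob(filepath.Join(logDir, "*.log.*"))
	if err != nil {
		return err
	}
	for _, p := range archived {
		if err := os.Remove(p); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	if err := cleanLogsGracefully("/home/tidb/qh_new/deploy/tidb-24000/log"); err != nil {
		fmt.Fprintln(os.Stderr, "clean failed:", err)
	}
}
```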

Describe alternatives you've considered:

Teachability, Documentation, Adoption, Migration Strategy:

glkappe added the type/feature-request label Dec 30, 2020
lucklove (Member) commented:

Currently, TiUP stops the cluster before cleaning up logs, because the components would keep writing to the logs if they weren't stopped. If a component holds a log file open and we delete it, the component keeps writing to the removed file: disk usage grows, but no useful log is actually recorded.
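This is standard Unix unlink semantics: an open file descriptor keeps the inode alive even after the last directory entry is removed. A small self-contained Go demo of the behavior lucklove describes (the `/tmp/demo.log` path is just for illustration):

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Create a log file and keep a handle open, as a running component would.
	f, err := os.Create("/tmp/demo.log")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Delete the file while it is still open: the directory entry disappears,
	// but the inode stays allocated as long as the handle is open.
	if err := os.Remove("/tmp/demo.log"); err != nil {
		panic(err)
	}

	// Writes still succeed and still consume disk space, yet the data is
	// unreachable: no path refers to it anymore.
	n, err := f.WriteString("this line lands on a deleted inode\n")
	fmt.Printf("wrote %d bytes, err=%v\n", n, err)

	// The file is gone from the namespace.
	if _, err := os.Stat("/tmp/demo.log"); os.IsNotExist(err) {
		fmt.Println("/tmp/demo.log no longer exists in the directory tree")
	}
}
```

The space is only reclaimed when the last open handle closes, which is why TiUP stops the component before deleting its logs.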
