
[autoscaler] Too much information polluting autoscaler logs #38727

@chappidim

Description


What happened + What you expected to happen

While using a nightly build, it was hard to debug autoscaler failures from monitor.out because a recent change logs the redundant new_config dump continuously. The fix is to move this output from print to DEBUG-level logging (see the sketch after the log excerpt below).

new config {'cluster_name': 'chappidim-trn1-32', 'max_workers': 2, 'upscaling_speed': 1.0, 'docker': {}, 'idle_timeout_minutes': 60, 'provider': {'type': 'aws', 'region': 'us-west-2', 'availability_zone': 'us-west-2d', 'use_internal_ips': True, 'cache_stopped_nodes': False}, 'auth': {'ssh_user': 'ubuntu', 'ssh_private_key': '~/ray_bootstrap_key.pem'}, 'available_node_types': {'ray.head.default': {'node_config': {'InstanceType': 'trn
ubuntu@ip-10-0-143-38:~$ ls -lah /tmp/ray/session_latest/logs/monitor.out
-rw-rw-r-- 1 ubuntu ubuntu 122M Aug 22 16:08 /tmp/ray/session_latest/logs/monitor.out
ubuntu@ip-10-0-143-38:~$
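
A minimal sketch of the proposed fix, assuming the config dump currently goes through a bare print on every reconciler iteration; the helper name log_new_config is illustrative, not the actual autoscaler function:

import logging

logger = logging.getLogger(__name__)

def log_new_config(new_config: dict) -> None:
    # Before (roughly): print(f"new config {new_config}"), which floods
    # monitor.out on every loop.
    # After: emit at DEBUG so the dump only appears when debug logging
    # is explicitly enabled.
    logger.debug("new config %s", new_config)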

Versions / Dependencies

Nightly builds

Reproduction script

  • Any generic autoscaler example; for instance, launch an AWS cluster with setup commands (see the sketch below)
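
A hedged repro sketch, assuming Ray's programmatic launcher ray.autoscaler.sdk.create_or_update_cluster; the config values are illustrative and mirror the log excerpt above:

from ray.autoscaler.sdk import create_or_update_cluster

# Illustrative minimal config; any generic AWS autoscaler example works.
cluster_config = {
    "cluster_name": "repro-autoscaler-logs",
    "max_workers": 2,
    "provider": {"type": "aws", "region": "us-west-2"},
    "auth": {"ssh_user": "ubuntu"},
}

create_or_update_cluster(cluster_config, no_restart=False, restart_only=False)

# Then, on the head node, watch monitor.out grow:
#   tail -f /tmp/ray/session_latest/logs/monitor.out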

Issue Severity

Low: It annoys or frustrates me.


Labels

bug: Something that is supposed to be working; but isn't
triage: Needs triage (eg: priority, bug/not-bug, and owning component)
