Closed
Labels: bug (Something that is supposed to be working, but isn't), triage (Needs triage, e.g. priority, bug/not-bug, and owning component)
Description
What happened + What you expected to happen
While using the nightly build, it was very hard to debug autoscaler failures from monitor.out because a recent change logs the redundant new_config continuously. The fix is to move this message from print to DEBUG-level logging.
new config {'cluster_name': 'chappidim-trn1-32', 'max_workers': 2, 'upscaling_speed': 1.0, 'docker': {}, 'idle_timeout_minutes': 60, 'provider': {'type': 'aws', 'region': 'us-west-2', 'availability_zone': 'us-west-2d', 'use_internal_ips': True, 'cache_stopped_nodes': False}, 'auth': {'ssh_user': 'ubuntu', 'ssh_private_key': '~/ray_bootstrap_key.pem'}, 'available_node_types': {'ray.head.default': {'node_config': {'InstanceType': 'trn
ubuntu@ip-10-0-143-38:~$ ls -lah /tmp/ray/session_latest/logs/monitor.out
-rw-rw-r-- 1 ubuntu ubuntu 122M Aug 22 16:08 /tmp/ray/session_latest/logs/monitor.out
ubuntu@ip-10-0-143-38:~$
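The proposed fix can be sketched with Python's standard logging module (a minimal illustration, not the actual Ray autoscaler code; the variable names here are hypothetical). At the default INFO level, a DEBUG-level message is suppressed, so monitor.out no longer grows with every config refresh:

```python
import logging

logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)

# Hypothetical stand-in for the autoscaler's refreshed config dict.
new_config = {"cluster_name": "example", "max_workers": 2}

# Before: printed unconditionally on every update, flooding monitor.out.
# print("new config", new_config)

# After: emitted at DEBUG, so it is dropped at the default INFO level
# but still available when debug logging is enabled.
logger.debug("new config %s", new_config)
```

Note the use of `%s` lazy formatting: the (potentially large) config dict is only stringified if the DEBUG level is actually enabled.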
Versions / Dependencies
Nightly builds
Reproduction script
- Any generic autoscaler example; for instance, launch an AWS cluster with setup commands.
Issue Severity
Low: It annoys or frustrates me.