Set minimum port number a Ray worker can listen on to 11002 #4278
Conversation
LGTM! cc'ing @romilbhardwaj for a check
Thanks @cbrownstein-lambda - looks like this is required after the recent migration of our lambda cloud support to the new provisioner. LGTM
This pull request sets the minimum port number a Ray worker can listen on to 11002. Without this change, when running examples/k8s_cloud_deploy/launch_k8s.sh, a Ray worker can claim port 10010, which k3s/containerd needs; attaching to a container then breaks with:

unable to upgrade connection: error dialing backend: dial tcp 127.0.0.1:10010: connect: no route to host

See: k3s-io/k3s#4984
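For context, Ray exposes this range through the `--min-worker-port` and `--max-worker-port` flags on `ray start`. A minimal Python sketch of why raising the minimum above the conflicting port avoids the clash (the port-selection model, range bounds, and in-use set below are illustrative assumptions, not Ray's actual implementation):

```python
def pick_worker_port(min_port: int, max_port: int, in_use: set[int]) -> int:
    """Return the first port in [min_port, max_port] not already in use.

    Simplified model of a worker picking a listening port from a
    configured range; real Ray also checks OS-level availability.
    """
    for port in range(min_port, max_port + 1):
        if port not in in_use:
            return port
    raise RuntimeError(f"no free port in [{min_port}, {max_port}]")

# k3s/containerd needs 127.0.0.1:10010 free for container attach to work.
CONTAINERD_PORT = 10010

# With a minimum below 10010, a busy node can hand 10010 to a Ray worker:
busy = set(range(10002, 10010))  # ports 10002-10009 already taken
assert pick_worker_port(10002, 19999, busy) == CONTAINERD_PORT

# With the minimum raised to 11002, the worker range never touches 10010:
assert pick_worker_port(11002, 20002, busy) == 11002
assert CONTAINERD_PORT not in range(11002, 20003)
```

The fix works because the conflicting port sits below the new minimum, so no amount of port pressure on the node can push a worker onto it.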