The two nodes created in the cluster are running on the t3.xlarge instance type. Is this expected, or should they be running on a GPU instance type like g4dn?
    $ kubectl describe pod comfyui-6cd777d4f-2hpkl
    Name:             comfyui-6cd777d4f-2hpkl
    Namespace:        default
    Priority:         0
    Service Account:  default
    Node:
    Labels:           app=comfyui
                      pod-template-hash=6cd777d4f
    Annotations:
    Status:           Pending
    IP:
    IPs:
    Controlled By:    ReplicaSet/comfyui-6cd777d4f
    Containers:
      comfyui:
        Image:      397016254480.dkr.ecr.us-west-2.amazonaws.com/comfyui-images:latest
        Port:       8848/TCP
        Host Port:  0/TCP
        Limits:
          nvidia.com/gpu:  1
        Requests:
          nvidia.com/gpu:  1
        Environment:
        Mounts:
          /app/ComfyUI/models from stable-diffusion-models (rw)
          /app/ComfyUI/output from comfyui-outputs (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t5mtf (ro)
    Conditions:
      Type           Status
      PodScheduled   False
    Volumes:
      stable-diffusion-models:
        Type:          HostPath (bare host directory volume)
        Path:          /comfyui-models
        HostPathType:
      comfyui-outputs:
        Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
        ClaimName:  comfyui-outputs-pvc
        ReadOnly:   false
      kube-api-access-t5mtf:
        Type:                    Projected (a volume that contains injected data from multiple sources)
        TokenExpirationSeconds:  3607
        ConfigMapName:           kube-root-ca.crt
        ConfigMapOptional:
        DownwardAPI:             true
    QoS Class:        BestEffort
    Node-Selectors:
    Tolerations:      node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                      node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
                      nvidia.com/gpu:NoSchedule op=Exists
    Events:
      Type     Reason            Age                  From               Message
      Normal   Nominated         51m                  karpenter          Pod should schedule on: nodeclaim/karpenter-nodepool-ztcrj
      Normal   Nominated         35m                  karpenter          Pod should schedule on: nodeclaim/karpenter-nodepool-7bqxl
      Normal   Nominated         19m                  karpenter          Pod should schedule on: nodeclaim/karpenter-nodepool-dzsgk
      Normal   Nominated         3m54s                karpenter          Pod should schedule on: nodeclaim/karpenter-nodepool-2vbdk
      Warning  FailedScheduling  77s (x244 over 20h)  default-scheduler  0/2 nodes are available: 2 Insufficient nvidia.com/gpu. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
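The `FailedScheduling` event shows that the only two nodes the scheduler can see advertise no `nvidia.com/gpu` capacity, while Karpenter keeps nominating nodeclaims that never materialize as GPU nodes. A common cause is a Karpenter NodePool whose requirements permit non-GPU instance families. Below is a minimal sketch of a NodePool pinned to g4dn instances; the NodePool name, the EC2NodeClass name `default`, and the taint are assumptions for illustration, not values taken from this cluster:

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: karpenter-nodepool        # assumed name, matching the nodeclaim prefix above
spec:
  template:
    spec:
      requirements:
        # Restrict provisioning to a GPU instance family so Karpenter
        # cannot satisfy the pod with a t3-class node.
        - key: karpenter.k8s.aws/instance-family
          operator: In
          values: ["g4dn"]
      taints:
        # Matches the nvidia.com/gpu:NoSchedule toleration already on the pod.
        - key: nvidia.com/gpu
          effect: NoSchedule
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default               # assumed EC2NodeClass name
```

Note that even on a g4dn node, `nvidia.com/gpu` only becomes allocatable once the NVIDIA device plugin is running there; `kubectl describe node <node>` should list it under Allocatable before the pod can schedule.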