Error fetching ports #305
hey @whoissteven, can you give me more details? here's what you can do:
with this info, i'll be able to figure out what's going on and help you out. thx! |
got it, thx for the info, did this work in previous versions of kftray? either way, if you can send me the logs or any extra info, i can try to replicate it on my end and dig deeper into the issue |
Hello, I have the same issue; it's still happening on Windows 11. |
@lcasavola does this issue occur in the latest app version or in all versions? I couldn't reproduce the issue on my Windows setup. Please test with version v0.14.0. |
yes, it's the latest. |
I would like to achieve the same result as this working kubectl command: I tried to skip the interactive setup and import a json file like this: after importing, if I edit the configuration, I see the value "default-service" in the Pod Label attribute |
got it! i was able to simulate the issue. the problem occurs when a custom kubeconfig is selected. i'm already working on a fix and will update the thread once it's ready. just one important point: the "target" field in the json is actually the label of the pod, not the pod name itself. it should follow the kubernetes label format. if you want, you can test it with an example json like this:

```json
[
  {
    "target": "app.kubernetes.io/component=server",
    "namespace": "argocd",
    "local_port": 8585,
    "remote_port": 8080,
    "context": "kind-1",
    "workload_type": "pod",
    "protocol": "tcp",
    "kubeconfig": "/users/henrique/.kube/config.bkp",
    "alias": "argocd"
  }
]
```

the reason for using the label instead of the pod name is to ensure that if the pod dies, kftray keeps the port forward up and always forwards the request to a healthy pod. regarding the … anyway, i'm working on a fix for the behavior in the ui! thanks for the report. |
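The point above about labels vs. pod names can be illustrated with a small sketch (an editor's illustration in Python, not kftray's actual code; kftray itself is written in Rust):

```python
# Illustrative sketch: a "key=value" label selector keeps matching after a
# pod is recreated under a new name, while a hard-coded pod name would not.

def parse_selector(target):
    """Split a 'key=value' label selector into its two parts."""
    key, _, value = target.partition("=")
    return key, value

def pick_pod(pods, target):
    """Return the name of the first pod whose labels match the selector."""
    key, value = parse_selector(target)
    for pod in pods:
        if pod.get("labels", {}).get(key) == value:
            return pod["name"]
    return None

pods = [{"name": "argocd-server-7f9c-abcde",
         "labels": {"app.kubernetes.io/component": "server"}}]
print(pick_pod(pods, "app.kubernetes.io/component=server"))
# → argocd-server-7f9c-abcde

# the pod dies and is recreated with a new random suffix; the same
# selector still resolves to the new, healthy pod
pods[0]["name"] = "argocd-server-7f9c-zzzzz"
print(pick_pod(pods, "app.kubernetes.io/component=server"))
# → argocd-server-7f9c-zzzzz
```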
Great, I'll wait for your fix. |
on the screen to add a new config, if you don't select a kubeconfig, the app will always try to look in the path specified by the KUBECONFIG environment variable. if the variable doesn't exist, it will try the default kubeconfig path, which is $HOME/.kube/config. the option in the ui to select a kubeconfig is just in case you want to use a kubeconfig that isn't in the variable or the default path. in the config json, you can import without the kubeconfig field/value, and the app will assume the default. example:

```json
[
  {
    "target": "app.kubernetes.io/component=server",
    "namespace": "argocd",
    "local_port": 8585,
    "remote_port": 8080,
    "context": "kind-1",
    "workload_type": "pod",
    "protocol": "tcp",
    "alias": "argocd"
  }
]
```

example with custom kubeconfig:
example with default kubeconfig (in this case, i have the kubeconfig in the default path):
this is the function with this logic: |
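The lookup order described above can be sketched like this (an editor's illustration in Python; the real logic in kftray is a Rust function, so the name here is hypothetical):

```python
import os
from pathlib import Path

def resolve_kubeconfig(custom_path=None):
    """Resolve a kubeconfig path in the order described above:
    explicit path, then the KUBECONFIG env var, then the default path."""
    if custom_path:                      # kubeconfig selected in the UI / JSON
        return custom_path
    env_path = os.environ.get("KUBECONFIG")
    if env_path:                         # KUBECONFIG environment variable
        return env_path
    return str(Path.home() / ".kube" / "config")  # default path
```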
Argh! |
oh, but kftray doesn't depend on kubectl! the default kubeconfig path is just there to make things easier, so you don't have to specify a custom kubeconfig path if you don't want to. everything should work fine even if kubectl isn't installed on the machine, as long as a kubeconfig path is available... i have a bunch of homelab vms with various os's that don't have kubectl installed, and kftray works fine. about the SaaS: that sounds cool. you can reach out to me on the kftray slack :) here's the link to join the workspace: |
released the version v0.14.3 with fix: https://github.com/hcavarsan/kftray/releases/tag/v0.14.3 @lcasavola @whoissteven could you check if it's working now? |
This resolved my issue, thank you. |
Great, now it works! :-) |
@lcasavola cool! i'm from Brazil, so we'll talk sometime on slack :) regarding your points:
i think your feedback makes total sense, thanks for it. i've already made these changes and created a PR with these updates, and i'll let you know as soon as a new version is released... in this new version, the alias and local_port fields are no longer required, and these are the new behaviors:
For example, for a JSON like this:

```json
[
  {
    "context": "kind-1",
    "kubeconfig": "/Users/henrique/.kube/config.bkp",
    "namespace": "argocd",
    "protocol": "tcp",
    "remote_port": 8080,
    "service": "argocd-server",
    "workload_type": "service"
  }
]
```
PS: this same behavior applies if you add it via the UI too! and just to note, in kftray v0.14.0 i released a feature to auto-import configs from the kubernetes cluster based on annotations. this might help if you don't want to pass anything on the client side and just configure the annotations on the kubernetes side; in the app, you just click the auto import button to fetch all those configs automatically :) (this specific feature needs the kubeconfig in the default local path to work; i'm still developing custom kubeconfig support for it.) this is the release, and it has a video showing the auto import workflow: so that's it, i'll let you know here when the new version is out with default values when adding or importing configs via the UI |
@lcasavola this is the new release with optional |
cool!
or |
okay, i got it and i'll think about it. but could you open a new issue for this? it might be a bit long, so it'd be good to have a separate issue just for that. how's that sound? |
yes sure I'll copy my last post and put in a new issue, no problem. |
I understand, but it wouldn't be possible with kftray because I perform some checks for available pods before establishing the port forward tunnel. also, kftray has some more complex sanitization routines that still depend on other permissions at the namespace level. I think it would be impossible for kftray to rely solely on the port forward verb today; it would require a simpler application focused on port forward management without many additional features. I'll close this issue, but I really appreciate your report! It helped me a lot, and the changes you suggested made sense. Thank you! If you need anything else or encounter any more bugs, feel free to open an issue, and I'm open to discussing it. :) |
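For reference, a minimal ClusterRole along these lines might look like this (an editor's sketch, not from the kftray docs; the exact resources and verbs kftray needs are assumptions, so verify against your own cluster policy):

```yaml
# Illustrative only: grants namespace listing plus the permissions that
# port forwarding typically needs (pods/portforward "create" is the
# subresource used by kubectl port-forward as well).
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kftray-user
rules:
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["get", "list"]
  - apiGroups: [""]
    resources: ["pods", "services"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["pods/portforward"]
    verbs: ["create"]
```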
@hcavarsan: no problem. I added a ClusterRole to let the user see namespaces, and it all works as expected now. |
excuse me for coming back to this issue. with the ClusterRole, the user is now allowed to see all the namespaces, but it is annoying. |
Hello, I am unable to fetch any target ports.
To Reproduce
Steps to reproduce the behavior:
This is a bit of an odd one..
I uninstalled and reinstalled kftray, added a new kubeconfig, and the issue persists.
Expected behavior
To see the available target ports associated with my service.
Screenshots
Desktop (please complete the following information):
Additional context
I have no issues port forwarding with kubectl