ocp4 admin credentials #62
Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: akostadinov. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
LGTM
@pruan-rht , now you can copy content of …
cli: SharedLocalCliExecutor
admin_creds: MasterOsAdminCredentials
api_port: 6443
version: 4.0.0.0 # version not supported on cluster (yet)
Hardcode version: 4.0.0.0
LG currently for a quiet period before 4.1 comes :)
I suggest we implement getting the version via oc get clusterversion, as https://bugzilla.redhat.com/show_bug.cgi?id=1658957#c3 hints :)
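A minimal sketch of what that could look like, assuming a plain oc invocation; the helper name and JSON handling are illustrative, not this repo's actual API:

```ruby
require 'json'
require 'open3'

# Illustrative only: read the cluster version from the ClusterVersion
# resource instead of hardcoding it in the environment config.
def cluster_version(kubeconfig)
  out, status = Open3.capture2(
    "oc", "--kubeconfig", kubeconfig,
    "get", "clusterversion", "version", "-o", "json"
  )
  raise "`oc get clusterversion` failed" unless status.success?
  # status.desired.version carries the cluster's semantic version
  JSON.parse(out).dig("status", "desired", "version")
end
```

Note this requires API access, which is exactly the concern raised below about anonymous version lookup.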
@xingxingxia , not having anonymous access to the version will require some changes. I'd like to make sure we really cannot get what we need before implementing something that may not be as nice as the present implementation.
res = master_host.exec_admin("cat /root/.kube/config", quiet: true)
if !res[:success] && res[:response].include?("No such file or directory")
  # try to find kubeconfig in other locations
  locations = [["kubeconfig", "/etc/kubernetes/static-pod-resources"]]
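For context, one way the fallback could continue is sketched below; the find invocation and loop are illustrative, not the merged code:

```ruby
# Illustrative continuation: search each fallback location for the named
# file and read the first match found.
locations.each do |name, dir|
  found = master_host.exec_admin("find #{dir} -name #{name} | head -n 1",
                                 quiet: true)
  path = found[:response].to_s.strip
  next if path.empty?
  res = master_host.exec_admin("cat #{path}", quiet: true)
  break if res[:success]
end
```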
Heard from @jianlinliu that Dev has work in flight to make the cluster not SSH-able. So I think we still need URL support for kubeconfig.
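URL support could look roughly like this; load_kubeconfig is a hypothetical helper and URI.open needs Ruby >= 2.5, so treat it as a sketch only:

```ruby
require 'open-uri'
require 'yaml'

# Hypothetical helper: if the configured kubeconfig source looks like a URL,
# download it over HTTP(S); otherwise read it as a local file.
def load_kubeconfig(source)
  if source =~ %r{\Ahttps?://}
    YAML.safe_load(URI.open(source).read)
  else
    YAML.safe_load(File.read(source))
  end
end
```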
This is unbelievable; how is a customer supposed to access the cluster if it is not SSH-able?
I don't read the linked issue as saying SSH will be disabled. It makes no sense IMO to disable SSH.
@@ -84,6 +84,20 @@ environments:
    admin_creds: MasterOsAdminCredentials
    api_port: 8443
    # version: 3.6.1.44.16
  ocp4:
@openshift/team-qe fyi, using ocp4 as BUSHSLICER_DEFAULT_ENVIRONMENT now means we no longer need to add api_port and version for the nextgen auto test
@@ -19,7 +19,21 @@ def initialize(env, **opts)

  # @return [APIAccessor]
  def get
-   res = master_host.exec_as(:admin, "cat /root/.kube/config", quiet: true)
+   res = master_host.exec_admin("cat /root/.kube/config", quiet: true)
Hi @akostadinov , tried ocp4 in jobs 37492/consoleFull and 37493/console; both failed with this error:
When I run the :annotate admin command with: # features/step_definitions/cli.rb:31
...
Inappropriate ioctl for device (Errno::ENOTTY)
/home/jenkins/workspace/Runner-v3/lib/ssh.rb:244:in `initialize'
/home/jenkins/workspace/Runner-v3/lib/host.rb:744:in `new'
...
/home/jenkins/workspace/Runner-v3/lib/admin_credentials.rb:22:in `get'
The env is good, because I then went back to my former workaround below and that still passes (37495/console):
[root@ip-10-0-11-47 ~]# mkdir -p ~/.ssh; chmod 700 ~/.ssh; cat /home/core/.ssh/authorized_keys >> ~/.ssh/authorized_keys; mkdir ~/.kube; mv `find /etc/kubernetes/static-pod-resources/ -name "kubeconfig" | head -n 1` ~/.kube/
In the console log I see core@ec2-18-191-132-235.us-east-2.compute.amazonaws.com's password: # Update with cluster-admin (*admin* command). This looks like SSH login to the machine failed, which is unrelated to this PR. Are you sure the installer created the machine with the correct key?
Sure it's the correct key; compare below:
[tester@fedora28 ~]$ rm -rf ~/.ssh
[tester@fedora28 ~]$ ssh core@ec2-...-235.us-east-2.compute.amazonaws.com
Are you sure you want to continue connecting (yes/no)? yes
core@ec2-...-235.us-east-2.compute.amazonaws.com's password:
[tester@fedora28 ~]$ ssh -i verification-tests/features/tierN/private/config/keys/....pem core@ec2-...-235.us-east-2.compute.amazonaws.com
Last login: Tue Jan 8 08:25:03 2019 from ...
...
[core@ip-10-0-11-47 ~]$
I asked another colleague, @yanpzhan, to try their subteam env and they hit the same error: 37496/console
In my cluster it has been working fine. I never did your workaround - neither the root part nor the kubeconfig location. Tomorrow I'll create a new cluster and see if something changed.
I managed to reproduce.
Thanks @wjiangjay. Adding these fixes the problem, but I didn't have those settings for the 3.x auto tests and they work. Why do we need to add them in 4.0?
I guess we changed some logic that handles the stderr and stdout.
@akostadinov
Thank you all. I added ocp4 explicitly in the Runner-v3 param description for ..._DEFAULT_ENVIRONMENT.
@wjiangjay , we can change conf = YAML.load(res[:response]) to conf = YAML.load(res[:stdout]) in lib/admin_credentials.rb:46 so that error messages are ignored.
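The suggested change in context, assuming the result-hash keys (:response vs :stdout) work as described in this thread; not verified against the repo:

```ruby
res = master_host.exec_admin("cat /root/.kube/config", quiet: true)

# res[:response] interleaves stdout and stderr, so any warning printed to
# stderr corrupts the YAML; res[:stdout] holds only the file content.
conf = YAML.load(res[:stdout])
```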
Makes sense!
@@ -19,7 +19,21 @@ def initialize(env, **opts)

  # @return [APIAccessor]
  def get
-   res = master_host.exec_as(:admin, "cat /root/.kube/config", quiet: true)
+   res = master_host.exec_admin("cat /root/.kube/config", quiet: true)
+   if !res[:success] && res[:response].include?("No such file or directory")
@akostadinov Hmm, today I encountered another failure due to non-robust code here:
undefined method `unpack' for nil:NilClass
NoMethodError
/opt/rh/rh-ruby24/root/usr/share/ruby/base64.rb:59:in `decode64'
/home/jenkins/workspace/Runner-v3/lib/admin_credentials.rb:54:in `get'
/home/jenkins/workspace/Runner-v3/lib/environment.rb:89:in `admin'
/home/jenkins/workspace/Runner-v3/lib/environment.rb:333:in `nodes'
/home/jenkins/workspace/Runner-v3/features/step_definitions/node.rb:11:in `/^I select a random node's host$/'
features/tierN/cli/service.feature:125:in `And I select a random node's host'
I debugged on the master: http://pastebin.test.redhat.com/701353 . Its /root/.kube/config is just the one shown in the failure log's output, and it does not include the admin cert/key. The cause is that somebody did oc login on root@master as a normal OCP user, which creates a new /root/.kube/config with no admin credentials.
Could you help fix it to work more robustly even when somebody has done a normal-user oc login on the master under the root user?
BTW, I'll also suggest the team try not to do normal-user oc login on root@master. And I removed that /root/.kube/config to let the auto test continue.
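A minimal sketch of the kind of guard being asked for, using standard kubeconfig field names; this is not the repo's actual fix:

```ruby
require 'base64'
require 'yaml'

conf = YAML.load(res[:stdout])
user = conf["users"]&.first&.dig("user") || {}
cert_b64 = user["client-certificate-data"]
key_b64  = user["client-key-data"]

# Fail with a clear message instead of crashing inside Base64.decode64
# when a normal-user `oc login` has overwritten the admin kubeconfig.
unless cert_b64 && key_b64
  raise "kubeconfig on master has no admin cert/key; " \
        "was it overwritten by a normal-user oc login?"
end

cert = Base64.decode64(cert_b64)
key  = Base64.decode64(key_b64)
```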
We can make it work with the token from the file, but the token is temporary and the next day it will not work. Not sure it is a good idea.
https://projects.engineering.redhat.com/browse/OPENSHIFTQ-12575