
ocp4 admin credentials #62

Merged
merged 1 commit into from
Jan 4, 2019

Conversation

akostadinov (Contributor)

@openshift-ci-robot

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: akostadinov

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci-robot openshift-ci-robot added size/M Denotes a PR that changes 30-99 lines, ignoring generated files. approved Indicates a PR has been approved by an approver from all required OWNERS files. labels Jan 4, 2019
@pruan-rht (Member)

LGTM

@akostadinov (Contributor, Author)

@pruan-rht, you can now copy the content of the host.spec artifact from Jenkins and it should work OOB for a 4.0 cluster. You can also use the ocp4 env. SSH to nodes still isn't working because they don't have public IPs; I'll add SSH bastion support for that next week.
After that comes some more mundane work, like supporting the new router and registry object types, etc.

@akostadinov akostadinov merged commit 1f28917 into openshift:master Jan 4, 2019
cli: SharedLocalCliExecutor
admin_creds: MasterOsAdminCredentials
api_port: 6443
version: 4.0.0.0 # version not supported on cluster (yet)
@xingxingxia (Contributor) Jan 7, 2019

Hardcoding version: 4.0.0.0 LGTM for the quiet period before 4.1 arrives :)
I suggest we implement getting the version via oc get clusterversion, as https://bugzilla.redhat.com/show_bug.cgi?id=1658957#c3 hints :)
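If the oc get clusterversion route is taken, the parsing side might look like the sketch below. cluster_version is a hypothetical helper, and the status.desired.version field path is an assumption about the ClusterVersion API that may differ on early 4.0 builds:

```ruby
require 'json'

# Hypothetical sketch: read the cluster version from the output of
# `oc get clusterversion -o json` instead of hardcoding 4.0.0.0.
# The status.desired.version path is an assumption, not verified
# against an actual 4.0 cluster.
def cluster_version(oc_json)
  data = JSON.parse(oc_json)
  item = data.fetch("items", []).first
  raise "no ClusterVersion object found" unless item
  version = item.dig("status", "desired", "version")
  raise "cluster version not reported yet" unless version
  version
end

# Canned payload (shape assumed, not captured from a real cluster):
sample = '{"items": [{"status": {"desired": {"version": "4.0.0-0.1"}}}]}'
puts cluster_version(sample)
```

The helper raises instead of returning nil so callers fail loudly when the cluster has not reported a version yet.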

@akostadinov (Contributor, Author)

@xingxingxia, not having anonymous access to the version will require some changes. I'd like to confirm we really can't get what we need before implementing something that may not be as nice as the present implementation.

res = master_host.exec_admin("cat /root/.kube/config", quiet: true)
if !res[:success] && res[:response].include?("No such file or directory")
  # try to find kubeconfig in other locations
  locations = [["kubeconfig", "/etc/kubernetes/static-pod-resources"]]
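As a local illustration of that fallback idea (a sketch only: find_kubeconfig is a hypothetical helper, and the real code would run the search over SSH on the master rather than on the local filesystem):

```ruby
require 'fileutils'
require 'tmpdir'

# Sketch of the fallback above: when /root/.kube/config is missing,
# search known directories for a file named "kubeconfig" and take the
# first hit. find_kubeconfig is hypothetical, not actual BushSlicer code.
def find_kubeconfig(locations)
  locations.each do |name, dir|
    hit = Dir.glob(File.join(dir, "**", name)).first
    return hit if hit
  end
  nil
end

# Local demonstration; a temp dir stands in for
# /etc/kubernetes/static-pod-resources on the master.
Dir.mktmpdir do |dir|
  FileUtils.mkdir_p(File.join(dir, "kube-apiserver-pod-1"))
  File.write(File.join(dir, "kube-apiserver-pod-1", "kubeconfig"), "apiVersion: v1\n")
  puts find_kubeconfig([["kubeconfig", dir]])
end
```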
Contributor

Heard from @jianlinliu that Dev has work in flight to make the cluster not SSH-able. So I think we still need URL support for kubeconfig.

@akostadinov (Contributor, Author)

This is unbelievable. How is a customer supposed to access the cluster if it is not SSH-able?

@akostadinov (Contributor, Author)

I don't read the linked issue as saying SSH will be disabled. It makes no sense IMO to disable SSH.

@@ -84,6 +84,20 @@ environments:
admin_creds: MasterOsAdminCredentials
api_port: 8443
# version: 3.6.1.44.16
ocp4:
Contributor

@openshift/team-qe FYI, using ocp4 as BUSHSLICER_DEFAULT_ENVIRONMENT now means we need not add api_port and version for the nextgen auto tests

@@ -19,7 +19,21 @@ def initialize(env, **opts)

   # @return [APIAccessor]
   def get
-    res = master_host.exec_as(:admin, "cat /root/.kube/config", quiet: true)
+    res = master_host.exec_admin("cat /root/.kube/config", quiet: true)
Contributor

Hi @akostadinov, I tried ocp4 in jobs 37492/consoleFull and 37493/console; both failed with this error:

    When I run the :annotate admin command with:        # features/step_definitions/cli.rb:31
...
      Inappropriate ioctl for device (Errno::ENOTTY)
      /home/jenkins/workspace/Runner-v3/lib/ssh.rb:244:in `initialize'
      /home/jenkins/workspace/Runner-v3/lib/host.rb:744:in `new'
...
      /home/jenkins/workspace/Runner-v3/lib/admin_credentials.rb:22:in `get'

The env is good, because I then went back to my former workaround below and it still passes (37495/console):

[root@ip-10-0-11-47 ~]# mkdir -p ~/.ssh; chmod 700 ~/.ssh
[root@ip-10-0-11-47 ~]# cat /home/core/.ssh/authorized_keys >> ~/.ssh/authorized_keys
[root@ip-10-0-11-47 ~]# mkdir ~/.kube
[root@ip-10-0-11-47 ~]# mv `find /etc/kubernetes/static-pod-resources/ -name "kubeconfig" | head -n 1` ~/.kube/

@akostadinov (Contributor, Author)

In the console log I see core@ec2-18-191-132-235.us-east-2.compute.amazonaws.com's password: # Update with cluster-admin (*admin* command). This looks like SSH login to the machine failing, which is unrelated to this PR.

Are you sure the installer created the machine with the correct key?

@xingxingxia (Contributor) Jan 8, 2019

I'm sure it's the correct key; compare below:

[tester@fedora28 ~]$ rm -rf ~/.ssh
[tester@fedora28 ~]$ ssh core@ec2-...-235.us-east-2.compute.amazonaws.com
Are you sure you want to continue connecting (yes/no)? yes
core@ec2-...-235.us-east-2.compute.amazonaws.com's password:

[tester@fedora28 ~]$ ssh -i verification-tests/features/tierN/private/config/keys/....pem core@ec2-...-235.us-east-2.compute.amazonaws.com
Last login: Tue Jan  8 08:25:03 2019 from ...
...
[core@ip-10-0-11-47 ~]$

I asked another colleague, @yanpzhan, to try their subteam's env, and they met the same error (37496/console)

@akostadinov (Contributor, Author)

It has been working fine in my cluster. I never did your workaround, neither the root part nor the kubeconfig relocation. Tomorrow I'll create a new cluster and see if something has changed.

@akostadinov (Contributor, Author)

I managed to reproduce.

@yapei (Contributor) Jan 10, 2019

Thanks @wjiangjay. Adding these fixes the problem, but I didn't have those settings for the 3.x auto tests and they work. Why do we need to add them for 4.0?

@ghost Jan 10, 2019

I guess we changed some logic which handles the stderr and stdout.
@akostadinov

Contributor

Thank you all. I added ocp4 explicitly in Runner-v3 param description for ..._DEFAULT_ENVIRONMENT.

@akostadinov (Contributor, Author)

@wjiangjay, we can change conf = YAML.load(res[:response]) to conf = YAML.load(res[:stdout]) in lib/admin_credentials.rb:46 so that error messages are ignored.
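A quick local illustration of why that change helps. The warning line below is illustrative, not a captured log; the point is that a combined stdout+stderr response can interleave noise with the kubeconfig YAML:

```ruby
require 'yaml'

# Why res[:stdout] parses where res[:response] may not: the combined
# response can interleave stderr noise with the kubeconfig YAML.
stderr_noise = "Pseudo-terminal will not be allocated because stdin is not a terminal."
kubeconfig   = "apiVersion: v1\nkind: Config\n"
combined     = "#{stderr_noise}\n#{kubeconfig}"  # roughly what res[:response] could hold

begin
  YAML.safe_load(combined)
rescue Psych::SyntaxError => e
  # the stray scalar line before "apiVersion:" makes the document invalid
  puts "combined output is not valid YAML: #{e.class}"
end

conf = YAML.safe_load(kubeconfig)  # res[:stdout] alone parses cleanly
puts conf["kind"]
```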


Makes sense!

@@ -19,7 +19,21 @@ def initialize(env, **opts)

   # @return [APIAccessor]
   def get
-    res = master_host.exec_as(:admin, "cat /root/.kube/config", quiet: true)
+    res = master_host.exec_admin("cat /root/.kube/config", quiet: true)
     if !res[:success] && res[:response].include?("No such file or directory")
@xingxingxia (Contributor) Jan 24, 2019

@akostadinov Hmm, today I encountered another failure due to non-robust code here:

undefined method `unpack' for nil:NilClass
NoMethodError
/opt/rh/rh-ruby24/root/usr/share/ruby/base64.rb:59:in `decode64'
/home/jenkins/workspace/Runner-v3/lib/admin_credentials.rb:54:in `get'
/home/jenkins/workspace/Runner-v3/lib/environment.rb:89:in `admin'
/home/jenkins/workspace/Runner-v3/lib/environment.rb:333:in `nodes'
/home/jenkins/workspace/Runner-v3/features/step_definitions/node.rb:11:in `/^I select a random node's host$/'
features/tierN/cli/service.feature:125:in `And I select a random node's host'

I debugged on the master: http://pastebin.test.redhat.com/701353 . Its /root/.kube/config is just the one shown in the failure log's output, which does not include the admin cert/key. The cause is that somebody did oc login on root@master with a normal OCP user, which creates a new /root/.kube/config without admin credentials.
Could you help make this more robust, so it works even when somebody has done a normal-user oc login on the master as root?
BTW, I'll also suggest the team not do normal-user oc login on root@master. I removed that /root/.kube/config to let the auto tests continue.
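One possible robustness improvement for this failure would be to verify the parsed kubeconfig actually carries client-certificate-data before Base64-decoding it. The sketch below illustrates the idea only; admin_cert_data is a hypothetical helper, not the actual lib/admin_credentials.rb code (field names follow the standard kubeconfig format):

```ruby
require 'yaml'
require 'base64'

# Fail with a clear message instead of `undefined method 'unpack' for nil`
# when the kubeconfig lacks an admin client cert. Hypothetical helper.
def admin_cert_data(kubeconfig_yaml)
  conf = YAML.safe_load(kubeconfig_yaml)
  user = conf.fetch("users", []).first
  data = user && user.dig("user", "client-certificate-data")
  unless data
    raise "kubeconfig has no admin client cert " \
          "(overwritten by a normal-user oc login?)"
  end
  Base64.decode64(data)
end

# A token-only kubeconfig, like the one a normal-user `oc login` leaves behind:
token_only = <<~YAML
  users:
  - name: tester
    user:
      token: some-temporary-token
YAML

begin
  admin_cert_data(token_only)
rescue RuntimeError => e
  puts e.message
end
```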

@akostadinov (Contributor, Author)

We could make it work with the token from the file, but the token is temporary and will not work the next day. Not sure it is a good idea.
