Inject user OSO token in runtime machine #541

Closed
l0rd opened this issue Feb 8, 2018 · 20 comments

Comments

@l0rd
Contributor

l0rd commented Feb 8, 2018

In an OSIO Che workspace, if a user wants access to the OSO OpenShift API, they need to run oc login. That should be automated: we should inject the current user's OSO token as soon as the workspace container is created, and set the current project to the user's namespace.

For instance, with the vert.x quickstart, if a user runs mvn clean fabric8:deploy -Popenshift from the Che terminal, it should build successfully and deploy the quickstart application in the user's namespace (not in the *-che namespace).

Note that the token is stored in ~/.kube/config.
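
For reference, these are roughly the manual steps that would be automated; a minimal sketch where <CLUSTER_URL>, <OSO_TOKEN>, and <USER_NAMESPACE> are placeholders for the values Che would inject:

```sh
# Manual flow this issue asks Che to run automatically at container start.
# <CLUSTER_URL>, <OSO_TOKEN>, and <USER_NAMESPACE> are placeholders, not real values.
oc login <CLUSTER_URL> --token=<OSO_TOKEN>   # writes the credentials to ~/.kube/config
oc project <USER_NAMESPACE>                  # make the user namespace the current project

# After that the quickstart deploys straight from the Che terminal:
mvn clean fabric8:deploy -Popenshift
```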

@bartoszmajsak

bartoszmajsak commented Feb 14, 2018

We are working on improving the Cube experience around booster tests in the issue linked above. We definitely need this for a smooth user experience: not only the user token but also the target namespace. Injecting the current username would help with that as well.

If ~/.kube/config is available with the proper username and token, that is all we need for the tests to pick it up and run seamlessly, both from the IDE and from mvn.

@l0rd
Contributor Author

l0rd commented Feb 14, 2018

@bartoszmajsak we have not been able to add that issue to the current sprint; we don't have enough capacity. Anyway, I've changed the priority to P1 to make sure we take it into consideration during the next planning.

@bartoszmajsak

bartoszmajsak commented Feb 15, 2018

Thanks @l0rd for making it highest prio, much appreciated!

After discussions with @ibuziuk and @lordofthejars we have some ideas about how to solve our use case, but let me start with the motivation first.

Why we need that

Most of the boosters (if not all) ship with integration tests based on Arquillian Cube. This tool is easy to set up and hides all the infrastructure code, so the tests focus only on the relevant scenario/business logic/acceptance criteria.

In the Che environment, we are in a namespace (and using a service account) where we don't have sufficient rights, so we cannot easily run our tests. For the moment we have the following workaround in place, but I think we can agree that we can provide a much smoother user experience.

Potential solutions

Cube comes with two options when it comes to deploying to the cluster:

  • (option 1) We can use the current project/namespace
  • (option 2) Leveraging a cluster URL and token, we can point to any cluster of choice to deploy our services and run the tests against them

We have the following ideas for addressing this:

Option 1:

(as described in the issue itself)

Option 2:
  • Have the username, OSO token, and cluster information for Cube to deploy injected into Che (briefly discussed with @ibuziuk; it seems not to be the preferred solution)

Option 3:
  • Have the OSIO user token injected into the Che workspace so we can have a thin adapter for Cube doing the following (sketched after this list):
    • call GET /api/user/services -> to obtain the cluster URL and other user data (including the username)
    • call GET auth.openshift.io/api/token?for={CLUSTER_URL} -> to get the OSO token
    • log in to the target cluster and use the namespace under the hood in Cube, thanks to the information obtained above
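
For illustration, the option 3 adapter flow could look roughly like this. This is only a sketch: the OSIO token is assumed to be available as $OSIO_TOKEN, and the api.openshift.io host, the JSON field names, and the use of jq are assumptions; only the two endpoint paths come from the list above.

```sh
# Sketch of the option 3 flow; $OSIO_TOKEN, the api.openshift.io host, jq,
# and the JSON field names are assumptions for illustration only.
CLUSTER_URL=$(curl -s -H "Authorization: Bearer $OSIO_TOKEN" \
  https://api.openshift.io/api/user/services | jq -r '.data.attributes.cluster')
OSO_TOKEN=$(curl -s -H "Authorization: Bearer $OSIO_TOKEN" \
  "https://auth.openshift.io/api/token?for=$CLUSTER_URL" | jq -r '.access_token')
oc login "$CLUSTER_URL" --token="$OSO_TOKEN"   # Cube then uses the current project
```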

Let me know what you think about these options.

@l0rd
Contributor Author

l0rd commented Feb 16, 2018

I think we should go for option 2, with the username and OSO token injected into Che. @ibuziuk why wouldn't this be ideal?

@bartoszmajsak

@l0rd I split option 2 into two, as they are a bit different and I think the previous description was slightly confusing. Personally, I think the third one might be the most feasible.

@ibuziuk
Member

ibuziuk commented Feb 20, 2018

Well, actually I never said that option 2 is not preferred; it actually looks more robust than option 3, since the injected OSIO token might have expired by the time the Cube adapter calls /api/user/services. @bartoszmajsak could you please clarify why 3 is more appealing than 2 from your perspective?

@bartoszmajsak

bartoszmajsak commented Feb 20, 2018

Well, actually I never said that option 2 is not preferred ...

@ibuziuk Then I guess I misunderstood our discussion about che-server and cluster information not being populated. Apologies.

... it actually looks more robust than option 3, since the injected OSIO token might have expired by the time the Cube adapter calls /api/user/services. @bartoszmajsak could you please clarify why 3 is more appealing than 2 from your perspective?

I assumed that the OSIO token lifespan is the same as the OSO one. If the case you are describing holds true, then the 2nd option is in fact better. My thinking was that with the 3rd option we don't need any knowledge about the cluster we are running in, as this information comes from calling /api/user/services.

@ibuziuk
Member

ibuziuk commented Feb 20, 2018

I assumed that the OSIO token lifespan is the same as the OSO one

This is not the case AFAIK. Also, since all the OSO communication is going to happen via the OSO proxy, I'm wondering whether it is planned to have an endpoint for obtaining the OSO token at all? cc: @alexeykazakov
(all direct OSO token manipulations were removed from rh-che as part of openshiftio/openshift.io#1683)

My thinking was that with the 3rd option we don't need any knowledge about the cluster we are running in, as this information comes from calling /api/user/services

This is a fair point, but there is still the problem of OSIO token expiration.

@alexeykazakov
Member

  1. I don't think OSIO token expiration is an issue here. The OSIO token lives much longer than a Che workspace (30 days for now), and the OSO token lives even longer.

  2. The /api/token endpoint(s) are there and we do not plan to remove them. BTW, we added some new functionality to these endpoints (see "Add alias for user's OSO url to /api/token?for=<alias>" fabric8-services/fabric8-auth#334).
    You can now just use /api/token?for=openshift to get the OSO token for the cluster used by the user; the response will contain the actual cluster URL.
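
From a workspace terminal that would roughly amount to the call below; a sketch only, where $OSIO_TOKEN and the exact response field names are assumptions, while the auth service host and the ?for=openshift alias come from the comments above.

```sh
# Sketch: fetch the OSO token for the user's cluster via the ?for=openshift alias.
# $OSIO_TOKEN and the shape of the JSON response are assumptions.
curl -s -H "Authorization: Bearer $OSIO_TOKEN" \
  "https://auth.openshift.io/api/token?for=openshift"
# The JSON response is expected to contain the OSO token and the actual cluster URL.
```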

@garagatyi garagatyi self-assigned this Feb 23, 2018
@bartoszmajsak

@garagatyi which route did you choose?

@garagatyi

I'm going to execute oc login at the start of a workspace. It will include the cluster, project, and token.
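
Once that is in place, a quick sanity check from the workspace terminal could be the following (illustrative commands only, not part of the implementation):

```sh
# Verify the injected login from inside the workspace.
oc whoami                  # should print the OSIO user, not a *-che service account
oc project -q              # should print the user namespace, not the *-che namespace
oc whoami --show-server    # should print the OSO cluster URL
```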

garagatyi pushed commits that referenced this issue between Feb 28 and Mar 5, 2018 ("Login to user project using oc CLI in workspace containers", Signed-off-by: Oleksandr Garagatyi <ogaragat@redhat.com>)
@bartoszmajsak

bartoszmajsak commented Mar 6, 2018

Thanks @garagatyi for this important improvement. Can you give me a rough date when we can see it on OSIO prod-preview? /cc @l0rd

@garagatyi

@bartoszmajsak I don't know when Che6 will be deployed on prod-preview with my changes.
@l0rd @davidfestal maybe you guys can help?
BTW here is a demo https://drive.google.com/file/d/1zxcZ0lDftJnodY_3l5QxP79gkybXpYZw/view?usp=sharing

@ibuziuk
Member

ibuziuk commented Mar 6, 2018

@garagatyi @bartoszmajsak it is already on prod-preview ;-)

@garagatyi

@ibuziuk I mean including the changes I made today.

@ibuziuk
Member

ibuziuk commented Mar 6, 2018

Your changes should already be available on https://rhche.prod-preview.openshift.io/

@dipak-pawar

@ibuziuk the changes for this issue are available on https://rhche.prod-preview.openshift.io/, but I cannot see them on https://che.prod-preview.openshift.io.

screenshot from 2018-03-06 19-04-53

Any idea when these changes (already available on https://rhche.prod-preview.openshift.io) will be available on the OSIO prod-preview (https://che.prod-preview.openshift.io)?

@ibuziuk
Member

ibuziuk commented Mar 6, 2018

@dipak-pawar that is expected. This change is only in the brand new Che 6, AFAIK. We are not planning to port it to Che 5, since we will migrate to Che 6 shortly (this week or next).

@ibuziuk
Member

ibuziuk commented Mar 6, 2018

BTW, https://rhche.prod-preview.openshift.io is the Che 6 OSIO prod-preview.

@dipak-pawar

@ibuziuk Thank you for confirming.
