
OpenShift spi initial implementation #5716

Merged: 6 commits into spi on Jul 20, 2017
Conversation

@sleshchenko (Member) commented Jul 18, 2017

What does this PR do?

Minor fixes:

  • Fix location of agent binaries
  • Add missing server configs to InstallerDto
  • Fix construction of output channel

Changes related to reworking the spi to make it reusable by other implementations:

  • Move installer and output endpoints to api-workspace module
  • Rework Bootstrapping to be able to reuse it for different spi implementations

Add initial state for openshift implementation of spi

What issues does this PR fix or reference?

#5686

@sleshchenko added the target/branch label (indicates that a PR will be merged into a branch other than master) on Jul 18, 2017
@sleshchenko self-assigned this on Jul 18, 2017
AGENT_BINARIES_URI=$(echo ${AGENT_BINARIES_URI} | sed 's/https/http/g')

# use wget
WGET_SPIDER="wget --spider"
Contributor

why is wget removed?

Contributor

note: wget support was added because on Alpine curl is not provided by default, while wget is there

Member Author

Thanks for the comment. I copied/pasted it from the openshift-connector branch. I'll revise all the changes in the installer scripts and notify you.

Contributor

@sleshchenko note: we're now using openshift-connector-wip branch of che

Member Author

These changes were redundant. I reverted all changes in the installer scripts. Thanks for the catch.


test "$(id -u)" = 0 || SUDO="sudo -E"
command -v curl >/dev/null 2>&1 || { PACKAGES=${PACKAGES}" curl"; }
test "$(id -u)" = 0 || test -f ${HOME}/is_arbitrary_user || SUDO="sudo -E"
Contributor

why do we have a test on is_arbitrary_user?

}
}
},
"recipe": {
@benoitf (Contributor) commented Jul 18, 2017

AFAIK right now in the OpenShift Connector there is a deployment generator of workspace definitions. Here you expect that users have to provide OSIO-specific data themselves?

Contributor

I don't think it's a bad idea to introduce a new recipe type openshift.

But yes, the approach we had with the current OpenShift connector was to be able to reuse the current "docker image" recipes. It makes sense to run a centos_jdk8 Docker image on OpenShift, so we didn't feel the need to support the OpenShift deployment descriptions (yaml or json).

The only drawback with your approach is that you will need to duplicate all the stacks in stacks.json if you want them to run on openshift.

Contributor

To make things clear: the format is not "dedicated", it is "native" for K8s and OpenShift, which I consider a huge advantage of the Che 6 SPI.
Showing the ability to support it is one of the main goals of the initial implementation.
The problem you are talking about can be solved in more than one way:

  • defining another environment, i.e. one for Docker, another for OSIO (we can imagine some more, for example for a local computer with no docker and no containers), so one or another will work depending on the available infrastructure
  • adding one more infrastructure type which works on OpenShift but supports some other, simplified format (docker image, dockerfile, etc.)
  • extending this infrastructure implementation so that it supports more than one recipe type (like the Docker impl does); this is possible by design, but I personally would not prefer this way

And, of course, they can be combined.

@benoitf As it is an initial implementation, it doesn't have to cover all the use cases. Implementing k8s support here seems more interesting, since it was quite impossible in master.
And points 2 and 3 (GA, yes, it's by design) from Gennady's comment would allow using env types from master in the OpenShift infra.

Contributor

I'm just saying that Che is a 'portable workspace', so I was expecting to be able to use my workspace on all platforms, whatever their native format is. It's like Docker: when I use it I don't care whether the native platform is Windows, Linux, or macOS.
If I need to rewrite the data of my workspace, then it's no longer 'portable'.

}
}
}
} catch (RuntimeException ignored) {
Contributor

silent catch?
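For illustration, one way to answer the "silent catch?" concern while keeping the cleanup best-effort is to record the failure instead of discarding it. This is a minimal sketch; `SafeCleanup` and its method names are hypothetical, not from the PR:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: instead of `catch (RuntimeException ignored) {}`,
// keep the operation non-fatal but make the failure visible.
public class SafeCleanup {
    static final List<String> warnings = new ArrayList<>();

    static void bestEffort(Runnable action, String resourceName) {
        try {
            action.run();
        } catch (RuntimeException e) {
            // Still swallowed, but now recorded for diagnostics.
            warnings.add("Cleanup of " + resourceName + " failed: " + e.getMessage());
        }
    }
}
```

In real code the `warnings` list would of course be a logger call; the point is only that the exception leaves a trace.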

.get();

if (actualPod == null) {
throw new InternalInfrastructureException("Can't find creted pod");
Contributor

FYI typo on exception

throw new InternalInfrastructureException("Can't find creted pod");
}
String status = actualPod.getStatus().getPhase();
LOG.info("POD " + actualPod.getMetadata().getName() + " has status " + status);
Contributor

no + usage in logger
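The remark refers to parameterized logging: with SLF4J (which Che uses) the call would be `LOG.info("POD {} has status {}", name, status)`, so the message string is only built when the level is enabled. As a self-contained stand-in, this sketch shows the same idea with the JDK logger's indexed placeholders; the class name is illustrative:

```java
import java.text.MessageFormat;
import java.util.logging.Level;
import java.util.logging.Logger;

// Sketch of parameterized logging instead of `+` concatenation.
// SLF4J equivalent (what the review asks for):
//   LOG.info("POD {} has status {}", podName, status);
public class PodStatusLogger {
    private static final Logger LOG = Logger.getLogger(PodStatusLogger.class.getName());

    // Formatting helper; MessageFormat is what java.util.logging uses
    // internally for {0}-style placeholders.
    static String message(String podName, String status) {
        return MessageFormat.format("POD {0} has status {1}", podName, status);
    }

    static void logStatus(String podName, String status) {
        // Deferred formatting: arguments are only substituted if INFO is enabled.
        LOG.log(Level.INFO, "POD {0} has status {1}", new Object[] {podName, status});
    }
}
```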

@benoitf (Contributor) left a comment

I may be wrong, but it looks odd to me that the recipe format is a dedicated format. So if I write a simple dockerfile from Eclipse Che, it won't work on OSIO?

I don't see usage of persistent volumes or env variables, but as this seems to be an "initial" implementation, there are a lot of TODOs.
It also looks like some code is coming from the current OpenShift connector implementation in master, but the copyright header/author is changed.

@codenvy-ci

Build # 3089 - FAILED

Please check console output at https://ci.codenvycorp.com/job/che-pullrequests-build/3089/ to view the results.

@sleshchenko changed the title from "OpenShift spi initial implementation" to "[WIP] OpenShift spi initial implementation" on Jul 18, 2017
@garagatyi left a comment

Good job! Since it is an initial step I believe it is a good start!

return environmentRecipe.getContent();
} else {
try {
return recipeDownloader.getRecipe(environmentRecipe.getLocation());

I know that it is similar to the Docker infra impl, but it should use the recipe retrieved by RuntimeContext.

Member Author

@garagatyi Do you mean that parsing and provisioning of the environment should be done by RuntimeContext instead of RuntimeInfrastructure?

I mean that downloading of the environment recipe is done in RuntimeContext.

Member Author

I just don't understand how to implement it. As the Docker infra does it, the context should already have a parsed environment, so we need to download the recipe for parsing before context creation.
What do you think we should do in this case? Should we move recipe parsing and provisioning to the context? Or leave it as it is for now and rework it later along with the Docker infra?

Docker infra is a separate implementation; yours doesn't have to do things the same way. Downloading in the Docker infra was implemented a long time ago. Later we realized that it should use the recipe downloaded in the context, but it hasn't been changed.

Member Author

It doesn't have to, but for me the split of responsibility between infrastructure and context is not clear enough, so I implemented it the same way the Docker infrastructure does. I'd leave it as it is and rework it later, when the runtime infrastructure no longer does estimation (which requires recipe downloading) and it is defined when validation and estimation should be done.
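To illustrate the responsibility split being debated, here is a hypothetical sketch in which the context owns the downloaded recipe, so neither the infrastructure nor a concrete implementation fetches it twice. None of these names are Che's actual API; they only mirror the roles discussed above:

```java
import java.util.function.Supplier;

// Hypothetical sketch of "the context owns the recipe": the infrastructure
// hands the context a way to obtain the recipe, and the context caches it.
public class RecipeOwnership {

    static class RuntimeContext {
        private final Supplier<String> downloader;
        private String recipe;

        RuntimeContext(Supplier<String> downloader) {
            this.downloader = downloader;
        }

        // Single download, reused by validation, estimation, and start.
        String getRecipe() {
            if (recipe == null) {
                recipe = downloader.get();
            }
            return recipe;
        }
    }
}
```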

@codenvy-ci

Build # 3095 - FAILED

Please check console output at https://ci.codenvycorp.com/job/che-pullrequests-build/3095/ to view the results.

@codenvy-ci

Build # 3098 - FAILED

Please check console output at https://ci.codenvycorp.com/job/che-pullrequests-build/3098/ to view the results.

return UriBuilder.fromUri(apiURI)
.scheme("https".equals(apiURI.getScheme()) ? "wss" : "ws")
.replacePath(apiURI.getPath().replace("/api", ""))
.path(OUTPUT_WEBSOCKET_ENDPOINT_BASE)
Contributor

Let's avoid URI transformation by providing the ws:// URI directly instead (in the general case it can be different from the http:// one).

Member Author

Introduced a new property che.websocket.endpoint.base and used it when constructing the output channel.
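For illustration, the two approaches in this thread can be sketched side by side. `deriveFromApi` mirrors the quoted UriBuilder transformation using plain `java.net.URI`, while `fromConfiguredBase` trusts a dedicated value such as the che.websocket.endpoint.base property mentioned above; the class and method names are hypothetical:

```java
import java.net.URI;

// Sketch of the two options discussed in this thread. Names are illustrative.
public class OutputChannelUri {

    // Original approach: derive a ws(s):// URI from the http(s):// API URI.
    static URI deriveFromApi(URI apiUri, String endpointBase) {
        String scheme = "https".equals(apiUri.getScheme()) ? "wss" : "ws";
        String path = apiUri.getPath().replace("/api", "") + "/" + endpointBase;
        return URI.create(scheme + "://" + apiUri.getAuthority() + path);
    }

    // Reviewer's suggestion: read a dedicated configured base, since in the
    // general case the websocket endpoint can differ from the http one.
    static URI fromConfiguredBase(String websocketBase, String channel) {
        return URI.create(websocketBase + "/" + channel);
    }
}
```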

URL wsmasterEndpoint = new URL(workspaceMasterEndpoint);
this.downloadBootstrapperLink = wsmasterEndpoint.getProtocol() + "://" + wsmasterEndpoint.getHost() + ":" +
wsmasterEndpoint.getPort() +
"/agent-binaries/linux_amd64/bootstrapper/bootstrapper";
Contributor

Let's avoid modifying the URL. It is temporary, isn't it?

Member Author

Added a dedicated property che.infra.openshift.bootstrapper_binary_url.

Maybe later we will inject the bootstrapper binary in another way, but for now I have no idea how we can implement it without executing a download command like curl or wget.
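A sketch of why a dedicated property is safer than rebuilding the link by hand, as the quoted code does: `java.net.URL.getPort()` returns -1 when the endpoint has no explicit port, which would corrupt a naively concatenated URL. The class name here is illustrative, not from the PR:

```java
import java.net.MalformedURLException;
import java.net.URL;

// Hypothetical helper showing the port pitfall when deriving the
// bootstrapper link from the master endpoint by string concatenation.
public class BootstrapperLink {
    static String build(String masterEndpoint) {
        try {
            URL u = new URL(masterEndpoint);
            // getPort() is -1 for "http://host/api"; fall back to the
            // scheme's default port instead of emitting "host:-1".
            int port = u.getPort() == -1 ? u.getDefaultPort() : u.getPort();
            return u.getProtocol() + "://" + u.getHost() + ":" + port
                    + "/agent-binaries/linux_amd64/bootstrapper/bootstrapper";
        } catch (MalformedURLException e) {
            throw new IllegalArgumentException(e);
        }
    }
}
```

A configured che.infra.openshift.bootstrapper_binary_url sidesteps this entirely, which is the point of the comment above.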

@sleshchenko force-pushed the spi-openshift branch 2 times, most recently from 41e93d1 to 193f18a (July 19, 2017 10:08)
@codenvy-ci

Build # 3107 - FAILED

Please check console output at https://ci.codenvycorp.com/job/che-pullrequests-build/3107/ to view the results.

@sleshchenko force-pushed the spi-openshift branch 3 times, most recently from ecdbe88 to ae116b1 (July 19, 2017 15:28)
@sleshchenko (Member Author) commented Jul 19, 2017

@benoitf

Like if I write a simple dockerfile from eclipse che it won't work on OSIO ?

You're right. It won't work. Maybe later we will support dockerfile/composefile, but not in this initial implementation of the openshift spi. See #5716 (comment)

I don't see usage of persistent volume, env variables

There are a few [hard coded env variables](https://github.com/eclipse/che/pull/5716/files#diff-dc6c75e34aa896716a466e0b3bf1cf98R40); this will be fixed in #5690.
For using a persistent volume there is another issue, #5691.

It looks like that some code is coming from the current openshift connector implementation in master but copyright header/author is changed.

Is this comment still relevant after the refactoring? If yes, please notify me.

And please take a look at my PR again after the refactoring.

@codenvy-ci

Build # 3115 - FAILED

Please check console output at https://ci.codenvycorp.com/job/che-pullrequests-build/3115/ to view the results.

@l0rd (Contributor) commented Jul 19, 2017

I think that supporting recipes of type openshift is definitely a good thing. I really like the work you are doing; it is something we haven't implemented in the OpenShift connector, and it's cool that you have thought about it. But I think that before merging this to master you should add the capability to run different kinds of recipes on OpenShift too: not only openshift recipes, but at least also already-built docker image recipes. I'm sure this is something you have already thought about, but let me note a couple of reasons why it is important:

  • Portable stacks: Until now a user was able to use the same stack definition without worrying about what the underlying Che infrastructure was (docker, swarm, openshift, k8s, etc.). And that is good, because I want to be able to use the same runtime Dockerfile on my laptop, for the CI, and in production. If we have different stacks for different environments, we lose all the benefits of using Docker containers.

  • Don't break OSIO: OSIO Che will be broken if you don't support running already-built docker image recipes. Even if you can map every Docker stack to an OpenShift stack, there are some details that need to be deduced at runtime and cannot be hardcoded in stacks.json, for example workspace routes and the paths of workspace persistent volume claims.

@sleshchenko changed the title from "[WIP] OpenShift spi initial implementation" to "OpenShift spi initial implementation" on Jul 20, 2017
@skabashnyuk (Contributor) left a comment

@l0rd I agree with all your points. As we mentioned, the main purpose of this PR is to prove that the Che 6 SPI is capable of providing native capabilities for OpenShift. I understand your concerns about backward compatibility with the state we have in the master branch, and as @gazarenkov says in #5716 (comment), we have a couple of different options to solve this.

@codenvy-ci

Build # 3124 - FAILED

Please check console output at https://ci.codenvycorp.com/job/che-pullrequests-build/3124/ to view the results.

@sleshchenko merged commit 1fa5248 into spi on Jul 20, 2017
@sleshchenko deleted the spi-openshift branch on July 20, 2017 13:58
@skabashnyuk added this to the 6.0.0-M1 milestone on Oct 19, 2017