
[maintainance] Setup the repository #8

Closed · gaocegege opened this issue Apr 4, 2018 · 6 comments

@gaocegege (Member)
We should set up CI and other things according to https://github.com/kubeflow/community/blob/master/repository-setup.md

@YujiOshima (Contributor)

@gaocegege @jlewi I have read the tests of other repos and tried to use the ci-bot and gcr.io/kubeflow-ci, but I think it is difficult to test with only those.

In my understanding, the run_e2e_workflow script in gcr.io/kubeflow-ci deploys the pods defined in the test/workflows/components directory (the path is configured in prow_config.yaml) with ksonnet.
So we should rewrite the katib manifests in ksonnet format and store them in test/workflows/components, right?
Also, is the e2e test code in test/e2e/main.go?
I can't tell when it is built and who runs it.
Moreover, Katib cannot be tested with kubectl alone; it needs gRPC calls (e.g. CreateStudy, GetStudy).
Is it possible to run such a test?
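A minimal sketch of how such a gRPC-driven test could look. Note the request/reply types and the ManagerClient interface below are hypothetical stand-ins for katib's generated proto code, and the in-memory fake replaces a live service so the sketch is self-contained; a real e2e pod would dial the deployed manager instead.

```go
// Sketch of an e2e-style smoke test for a CreateStudy/GetStudy API.
// All type and method shapes here are illustrative assumptions, not
// katib's actual generated client.
package main

import (
	"errors"
	"fmt"
)

// Hypothetical stand-ins for generated proto types.
type StudyConfig struct{ Name string }
type CreateStudyRequest struct{ StudyConfig *StudyConfig }
type CreateStudyReply struct{ StudyId string }
type GetStudyRequest struct{ StudyId string }
type GetStudyReply struct{ StudyConfig *StudyConfig }

// ManagerClient mirrors the subset of the client the test would use.
type ManagerClient interface {
	CreateStudy(req *CreateStudyRequest) (*CreateStudyReply, error)
	GetStudy(req *GetStudyRequest) (*GetStudyReply, error)
}

// fakeManager is an in-memory implementation used only to keep this
// sketch runnable without a cluster.
type fakeManager struct {
	studies map[string]*StudyConfig
	nextID  int
}

func (f *fakeManager) CreateStudy(req *CreateStudyRequest) (*CreateStudyReply, error) {
	f.nextID++
	id := fmt.Sprintf("study-%d", f.nextID)
	f.studies[id] = req.StudyConfig
	return &CreateStudyReply{StudyId: id}, nil
}

func (f *fakeManager) GetStudy(req *GetStudyRequest) (*GetStudyReply, error) {
	sc, ok := f.studies[req.StudyId]
	if !ok {
		return nil, errors.New("study not found: " + req.StudyId)
	}
	return &GetStudyReply{StudyConfig: sc}, nil
}

// runSmokeTest is what an e2e pod would run against a real client:
// create a study, read it back, and compare.
func runSmokeTest(c ManagerClient) error {
	reply, err := c.CreateStudy(&CreateStudyRequest{StudyConfig: &StudyConfig{Name: "cifar10"}})
	if err != nil {
		return err
	}
	got, err := c.GetStudy(&GetStudyRequest{StudyId: reply.StudyId})
	if err != nil {
		return err
	}
	if got.StudyConfig.Name != "cifar10" {
		return fmt.Errorf("got study %q, want %q", got.StudyConfig.Name, "cifar10")
	}
	fmt.Println("smoke test passed for", reply.StudyId)
	return nil
}

func main() {
	if err := runSmokeTest(&fakeManager{studies: map[string]*StudyConfig{}}); err != nil {
		panic(err)
	}
}
```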

If an e2e test is not feasible, we should at least test each component.
I added some test code: https://github.com/YujiOshima/hp-tuning/blob/ci-setup/manager/main_test.go
You can run it with:
docker run -e REPO_OWNER=YujiOshima -e REPO_NAME=hp-tuning -e PULL_BASE_SHA=ci-setup katib/test-image
I think this would work on the k8s test infra.

WDYT? Could you give me advice?

@jlewi (Contributor) commented Apr 6, 2018

Here's how the tests work:

  • You define an Argo workflow inside the repo.

  • The Argo workflow specifies a bunch of pods to run.

  • It's entirely up to you what docker images and what commands to run in each step of the Argo workflow.

  • So if you want to run your unit tests, you would just have a pod that runs "go test ...". We need to do a little extra work to report the failures in the way Prow expects; for Python unit tests we have py_checks, but we don't have anything similar for Go unit tests.

  • For E2E tests you might do something like the following:

    1. The first step of the workflow would deploy katib.
    2. The next step would run some tests, e.g. make gRPC requests.

You can use whatever docker image you like with the pods. Alternatively, if you're missing some dependency that should be in the common worker image, you can add it here:

https://github.com/kubeflow/testing/tree/master/images
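The two-step E2E layout described above could be sketched as an Argo workflow roughly like this. Everything here is illustrative: the step names, images (gcr.io/kubeflow-ci/test-worker, katib/test-image), and commands are placeholders, not the actual kubeflow test-infra configuration.

```yaml
# Illustrative sketch only; names, images, and commands are assumptions.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: katib-e2e-
spec:
  entrypoint: e2e
  templates:
  - name: e2e
    steps:
    - - name: deploy-katib        # step 1: stand up katib
        template: deploy
    - - name: run-tests           # step 2: exercise the gRPC API
        template: grpc-tests
  - name: deploy
    container:
      image: gcr.io/kubeflow-ci/test-worker   # hypothetical common worker image
      command: ["ks", "apply", "default", "-c", "katib"]
  - name: grpc-tests
    container:
      image: katib/test-image                 # any image you like
      command: ["go", "test", "./test/e2e/..."]
```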

@gaocegege (Member, Author)

@YujiOshima There is no need to deploy katib with ksonnet. We use ksonnet and argo to construct the DAG. But you could use docker containers to do whatever you want.

You can refer to https://github.com/kubeflow/pytorch-operator . It uses ksonnet to construct the build graph while it does not use it to install the operator.

@gaocegege gaocegege changed the title [Maintainance] Setup the repository [maintainance] Setup the repository Apr 7, 2018
@jlewi (Contributor) commented Apr 9, 2018

Please try to use ksonnet. We should try to be consistent about how we package things. We want to have a single registry for all our packages and deploy them in a consistent fashion.

@gaocegege (Member, Author) commented Apr 10, 2018

OK, I will open an issue to keep track of the progress. As you know, we are not ksonnet experts, so we may need some help from the community.

@gaocegege (Member, Author)

I am going to close the issue. There is one thing left to do: #32
