
v1alpha2: Add implementation #526

Merged
merged 11 commits into kubeflow:master from gaocegege:implementation on Apr 20, 2018
Conversation

@gaocegege gaocegege (Member) commented Apr 11, 2018

/assign @ddysher @DjangoPeng @jlewi @mitake @ScorpioCPH

This PR is the implementation of v1alpha2. It was mainly contributed by @ScorpioCPH and @gaocegege.

Signed-off-by: Ce Gao gaoce@caicloud.io



@coveralls commented Apr 11, 2018

Coverage Status

Coverage increased (+1.6%) to 49.92% when pulling 99f6a9d on gaocegege:implementation into 6e706d6 on kubeflow:master.

@gaocegege (Member Author)

The test coverage of v1alpha2 should be 70%+. PTAL

)

// ServerOption is the main context object for the controller manager.
type ServerOption struct {
Contributor

How about adding the flag enableGangScheduling (https://github.com/kubeflow/tf-operator/blob/master/cmd/tf-operator/app/options/options.go#L29)? Probably I'm the most suitable person to do this, so please let me know if you want me to add the field for gang scheduling after merging this PR :)
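A minimal sketch of what such a flag could look like, mirroring the v1alpha1 options file linked above; the EnableGangScheduling field name and flag wording here are assumptions, not code from this PR:

type ServerOption struct {
	Kubeconfig           string
	EnableGangScheduling bool // hypothetical field for the gang-scheduling toggle
}

// AddFlags registers the controller flags, including the suggested toggle.
func (s *ServerOption) AddFlags(fs *flag.FlagSet) {
	fs.BoolVar(&s.EnableGangScheduling, "enable-gang-scheduling", false,
		"Set true to enable gang scheduling through kube-arbitrator.")
}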

@gaocegege gaocegege (Member Author) Apr 11, 2018

We have an issue (#490) to keep track of the progress of syncing commits with v1alpha1. We would really appreciate it if you could help us implement it in v1alpha2. And as the kube-arbitrator maintainers said, maybe we do not need to import PDB now.

}
go run(stopCh)

var key string
Contributor

I'm not fully sure, but wouldn't this style of communication be flagged by the race detector? How about adding a mutex or passing the key via a channel?
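For illustration, a minimal runnable sketch of the channel-based alternative (the key value is made up); the channel gives the hand-off a happens-before edge that a bare shared variable lacks:

package main

import "fmt"

func main() {
	keyCh := make(chan string, 1)

	// The producer goroutine sends the key instead of assigning to a
	// shared variable, so the race detector sees a synchronized hand-off.
	go func() {
		keyCh <- "default/example-tfjob" // hypothetical work-queue key
	}()

	// The consumer blocks until the key arrives.
	key := <-keyCh
	fmt.Println("got key:", key)
}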

Member Author

Yeah, I think so. I will open an issue to keep track of it. I think it is better not to introduce more enhancements in this PR, since it is copied from the upstream/v1alpha2 branch; we could record the potential issues and fix them after it is merged.

WDYT?

Contributor

Yes, opening an issue and tracking would be ok.

@jlewi (Contributor) commented Apr 11, 2018

Would you mind submitting a separate PR for the signals package just to cut down the size of this PR a bit?

@gaocegege (Member Author) commented Apr 11, 2018

@jlewi Sure, I will do it.

@gaocegege (Member Author)

@jlewi But the test will fail because the code cannot be built.

@ddysher ddysher (Member) left a comment


Thanks for the PR!

I only briefly scanned it; I will walk through the code again later :)

)

var (
Version = "0.3.0+git"
Version = "v0.1.0-alpha"
Member

is this still the right version?

Member Author

We released version v0.1.0, but I am not sure if we should set the version to it, since that release is behind master.

// To allow injection of syncTFJob for testing.
syncHandler func(tfJobKey string) (bool, error)

updateStatusHandler func(tfjob *tfv1alpha2.TFJob) error
Member

why is this saved as a handler?

Member Author

Because we need to mock it for testing.
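For illustration, a simplified sketch of the pattern (names here are not the PR's exact code): the sync function is stored as a struct field so production code wires in the real method while a test swaps in a stub.

type controller struct {
	// syncHandler is a field rather than a direct method call so that
	// tests can replace it with a fake.
	syncHandler func(tfJobKey string) (bool, error)
}

func newController() *controller {
	c := &controller{}
	c.syncHandler = c.syncTFJob // production wiring
	return c
}

func (c *controller) syncTFJob(tfJobKey string) (bool, error) {
	// The real reconciliation logic would live here.
	return true, nil
}

A test can then assign c.syncHandler = func(key string) (bool, error) { ... } to record which keys were processed without touching a real cluster.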

Member

Can you add a comment?

Member Author

🆗


// AddFlags adds flags for a specific CMServer to the specified FlagSet.
func (s *ServerOption) AddFlags(fs *flag.FlagSet) {
fs.StringVar(&s.Kubeconfig, "kubeconfig", "~/.kube/config",
Member

per discussion in #522, should this default flag be removed?

Member Author

Yeah, I will update it.

// getPodsForTFJob returns the set of pods that this tfjob should manage.
// It also reconciles ControllerRef by adopting/orphaning.
// Note that the returned Pods are pointers into the cache.
func (tc *TFJobController) getPodsForTFJob(tfjob *tfv1alpha2.TFJob) ([]*v1.Pod, error) {
Member

Correct me if I'm wrong, but getPodsForTFJob doesn't seem to capture the intent of this method: claiming (i.e. patching ownership of) pods is already done inside it, and returning a list of pods is just a side effect of calling it.

Member Author

Yeah, I think we should change the name to getAndClaimPodsForTFJob. And I think it is better to remove the claiming logic, since there is no need to claim the pods every time. I can open an issue for it.

WDYT @ScorpioCPH

Member Author
SGTM

Ports: []v1.ServicePort{
{
Name: genGeneralName(tfjobKey, rt, index),
Port: 2222,
Member

why is the port hard coded here?

Member Author

I will update it.

Member Author

We removed the field TFPort, thus we should reuse the ports in the PodTemplateSpec. I will open an issue to track this; currently we use a hard-coded port and do not support customization.

I do not intend to implement it in this PR since it is copied from upstream/v1alpha2. I will file a new PR for this feature.

WDYT?
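A hypothetical sketch of that direction, reading the port from the container spec in the PodTemplateSpec instead of hard-coding it (the port name and fallback are assumptions; types are from k8s.io/api/core/v1):

// getPortFromPodTemplate falls back to the current default when the
// template does not declare a matching port.
func getPortFromPodTemplate(spec *v1.PodTemplateSpec) int32 {
	const defaultPort int32 = 2222 // the value that is hard-coded today
	for _, container := range spec.Spec.Containers {
		for _, port := range container.Ports {
			if port.Name == "tfjob-port" { // hypothetical well-known port name
				return port.ContainerPort
			}
		}
	}
	return defaultPort
}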


limitations under the License.
*/

// Note(CPH): this file is copied from k8s.io/kubernetes/pkg/controller
Member

how hard would it be to vendor this?

@ScorpioCPH

@gaocegege gaocegege (Member Author) Apr 12, 2018

We do not vendor k8s.io/kubernetes/, and CPH thinks there is no need to pull in such a large vendor dependency just for this struct.

WDYT @ddysher

Member

Yes, k8s.io/kubernetes is too large for us.

@gaocegege (Member Author)

The test will fail since there is no signals package.

Signed-off-by: Ce Gao <gaoce@caicloud.io>
Signed-off-by: Ce Gao <gaoce@caicloud.io>
Signed-off-by: Ce Gao <gaoce@caicloud.io>
Signed-off-by: Ce Gao <gaoce@caicloud.io>
const RecommendedKubeConfigPathEnv = "KUBECONFIG"

func Run(opt *options.ServerOption) error {

Member

Please remove blank line.

Member Author

OK

Signed-off-by: Ce Gao <gaoce@caicloud.io>
Signed-off-by: Ce Gao <gaoce@caicloud.io>
@gaocegege (Member Author)

Rebased upstream/master and addressed some comments. PTAL

@ddysher ddysher mentioned this pull request Apr 12, 2018
Signed-off-by: Ce Gao <gaoce@caicloud.io>
@jlewi (Contributor) commented Apr 13, 2018

Review status: 0 of 15 files reviewed at latest revision, 8 unresolved discussions.


cmd/tf-operator.v2/app/options/options.go, line 38 at r2 (raw file):

Previously, gaocegege (Ce Gao) wrote…

Yeah, I will update it.

Don't we get the path of the kubeconfig from an environment variable? So is the plan to get rid of this flag altogether?
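For illustration, a minimal sketch of resolving the kubeconfig purely from the environment (clientcmd comes from k8s.io/client-go/tools/clientcmd; with both arguments empty it falls back to the in-cluster config):

// Resolve the kubeconfig path from KUBECONFIG rather than a flag.
kubeconfigPath := os.Getenv("KUBECONFIG")
cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
if err != nil {
	return err
}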


pkg/controller.v2/controller.go, line 152 at r7 (raw file):

expectations

Can you explain this to me? Why do we track expectations in the form of aggregated statistics?

Why wouldn't we keep track of expectations for each replica? e.g. have an expectation for
service-0 create
service-1 create

If we just look at aggregated statistics, won't we have problems? E.g. suppose we expect 2 workers, so in the current implementation we expect 2 create requests. But if we get

service-0 create
service-0 create

then we haven't actually created both services even though we observed 2 create events.


pkg/controller.v2/controller_pod.go, line 54 at r7 (raw file):

	// Expect to have `replicas - succeeded` pods alive.
	expected := *spec.Replicas - succeeded

Why are you only checking the number of pods? Why aren't you checking that each individual replica has a pod in the expected state?

Not only will checking each replica be more robust, but it positions us to do a better job debugging problems because we will know what state each replica is in.


pkg/controller.v2/controller_pod.go, line 61 at r7 (raw file):

		err := tc.updateTFJobConditions(tfjob, tfv1alpha2.TFJobSucceeded, tfJobSucceededReason, msg)
		if err != nil {
			log.Infof("Append tfjob condition error: %v", err)

Here and everywhere else please use a context logger like
https://github.com/kubeflow/tf-operator/blob/master/pkg/trainer/training.go#L89
and log fields for:

job namespace + name
uuid
replica type (if applicable)
replica index (if applicable)

This will ensure these fields are stored as metadata in the JSON log entry, which makes it easier to filter log messages for a particular job to figure out what happened.
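For instance, a sketch of what the suggested context logger could look like with logrus (the field names are illustrative, not the PR's final choice):

// log is github.com/sirupsen/logrus; WithFields attaches the metadata to
// every entry written through this logger.
logger := log.WithFields(log.Fields{
	"job":           tfjob.Namespace + "/" + tfjob.Name,
	"uid":           string(tfjob.UID),
	"replica-type":  rt,
	"replica-index": index,
})
logger.Infof("Append tfjob condition error: %v", err)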


pkg/controller.v2/controller_pod.go, line 94 at r7 (raw file):

	diff := len(activePods) - int(expected)

	if diff < 0 {

Per the comments above: can we check that every replica is in the expected state?


pkg/controller.v2/controller_pod.go, line 109 at r7 (raw file):

		for _, index := range diffIndexes {
			log.Infof("need to create new pod: %s-%s", rt, index)

Please use logrus to log this with rt and index as metadata fields so we can easily filter log messages by replica.

This will make it really easy to filter logs to find out what happened during a replica's life.


pkg/controller.v2/controller_pod.go, line 148 at r7 (raw file):

			// Set restart policy
			if spec.RestartPolicy != "ExitCode" {

Nit: define a named constant.
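E.g. a minimal sketch of the suggested constant (the name is an assumption):

// restartPolicyExitCode names the sentinel value instead of a bare string.
const restartPolicyExitCode = "ExitCode"

if spec.RestartPolicy != restartPolicyExitCode {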


pkg/controller.v2/controller_pod.go, line 203 at r7 (raw file):

		if _, ok := desiredIndexes[index]; ok {
			desiredIndexes[index] = hit
		}

Should we log a warning if index isn't found in desiredIndexes? That would seem to indicate a bug of some sort, since it shouldn't happen.
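A minimal sketch of the suggested warning branch (the logger call is illustrative):

if _, ok := desiredIndexes[index]; ok {
	desiredIndexes[index] = hit
} else {
	// An index outside desiredIndexes indicates a bug, so surface it.
	logger.Warningf("unexpected pod index %v not in desiredIndexes", index)
}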


pkg/controller.v2/controller_pod.go, line 218 at r7 (raw file):

// getPodsForTFJob returns the set of pods that this tfjob should manage.
// It also reconciles ControllerRef by adopting/orphaning.
// Note that the returned Pods are pointers into the cache.

What does adopting/orphaning pods mean? I assume it means changing the owner ref on the pod to point to this TFJob controller.

Do we actually expect this to happen? Why would the labels or selector get changed on the pod?


pkg/controller.v2/controller_pod.go, line 295 at r7 (raw file):

		tfjob := tc.resolveControllerRef(pod.Namespace, controllerRef)
		if tfjob == nil {
			log.Info("This pod's tfjob does not exists")

Let's add appropriate metadata fields to the log entry, e.g. tfjob name and uuid, and maybe replica type and index?



@gaocegege (Member Author)

cmd/tf-operator.v2/app/options/options.go, line 38 at r2 (raw file):

Previously, jlewi (Jeremy Lewi) wrote…

Why do we need two ways of specifying the path to kubeconfig? Even if users are running locally they should be able to set the environment variable KUBECONFIG.

I think it's highly undesirable to have both an environment variable and a command line flag. This inevitably leads to confusion about which should take precedence.

Makes sense. I could remove the flag later. Personally, I think it is better not to introduce more enhancements into this PR, since it is a sync with upstream/v1alpha2. We are using it to implement some systems, and I hope we could use this commit instead of the v1alpha2 branch, so keeping them consistent matters.



@gaocegege (Member Author)

pkg/controller.v2/controller_pod.go, line 54 at r7 (raw file):

Previously, jlewi (Jeremy Lewi) wrote…

I couldn't understand from #538 what the bug is. What's the plan for this PR?

Please see above; could we merge it and enhance the implementation iteratively?



@gaocegege (Member Author)

pkg/controller.v2/controller_pod.go, line 94 at r7 (raw file):

Previously, jlewi (Jeremy Lewi) wrote…

Do you plan to fix this in this PR?

No, I think it is a big change and I am not sure a map is the best way. Thus we could merge it and enhance the implementation iteratively.



@gaocegege (Member Author)

pkg/controller.v2/controller_pod.go, line 203 at r7 (raw file):

Previously, jlewi (Jeremy Lewi) wrote…

Why not just fix it in this PR?

Because we need to import the logger first and then do it. I think we could do it after the PR is merged, or I could file a new PR based on this one to fix it.



@jlewi (Contributor) commented Apr 18, 2018

Review status: 3 of 15 files reviewed at latest revision, 12 unresolved discussions.


cmd/tf-operator.v2/app/options/options.go, line 38 at r2 (raw file):

Previously, gaocegege (Ce Gao) wrote…

Makes sense. I could remove the flag later. Personally, I think it is better not to introduce more enhancements into this PR, since it is a sync with upstream/v1alpha2. We are using it to implement some systems, and I hope we could use this commit instead of the v1alpha2 branch, so keeping them consistent matters.

I don't understand your point about doing more changes in this PR. The point of this PR is to start checking code into master.

Why would we treat the work going on in the v1alpha2 branch any differently from work going on in a local branch?

I think we should make all the changes in this PR such that the resulting code that's checked into master is in a good state and the source of truth.

So I think we should make this and other straightforward changes in this PR. I don't think we should check in code only to remove it in a follow-on PR.


pkg/controller.v2/controller_pod.go, line 94 at r7 (raw file):

Previously, gaocegege (Ce Gao) wrote…

No, I think it is a big change and I am not sure a map is the best way. Thus we could merge it and enhance the implementation iteratively.

If we're uncertain about how to implement it then why check it in?

Why not remove the code that we are uncertain about (replace it with a TODO).

Alternatively, is there a simpler, less efficient but correct implementation we could check in? I don't like the idea of checking in code that we don't think works. Could we just iterate over all the replicas and check for each one if we need to do any work?


pkg/controller.v2/controller_pod.go, line 203 at r7 (raw file):

Previously, gaocegege (Ce Gao) wrote…

Because we need to import the logger first and then do it. I think we could do it after the PR is merged, or I could file a new PR based on this one to fix it.

Isn't controller_pod.go already importing log?



@ScorpioCPH (Member)

Review status: 3 of 15 files reviewed at latest revision, 13 unresolved discussions.


pkg/controller.v2/controller_pod.go, line 94 at r7 (raw file):

Previously, jlewi (Jeremy Lewi) wrote…

If we're uncertain about how to implement it then why check it in?

Why not remove the code that we are uncertain about (replace it with a TODO).

Alternatively, is there a simpler, less efficient but correct implementation we could check in? I don't like the idea of checking in code that we don't think works. Could we just iterate over all the replicas and check for each one if we need to do any work?

Hi, getDiffPodIndexes() will check which replica should be handled.


pkg/controller.v2/controller_pod.go, line 218 at r7 (raw file):

Previously, gaocegege (Ce Gao) wrote…

Yeah, the code refers to the job controller. I think we do not expect that to happen, but it might. Thus I think we should claim them and deal with them later.

We can't assume that only tf-operator will change the pods' status, so we should claim these pods.



Signed-off-by: Ce Gao <gaoce@caicloud.io>
@gaocegege (Member Author)

cmd/tf-operator.v2/app/options/options.go, line 38 at r2 (raw file):

Previously, jlewi (Jeremy Lewi) wrote…

I don't understand your point about doing more changes in this PR. The point of this PR is to start checking code into master.

Why would we treat the work going on in the v1alpha2 branch any differently from work going on in a local branch?

I think we should make all the changes in this PR such that the resulting code that's checked into master is in a good state and the source of truth.

So I think we should make this and other straightforward changes in this PR. I don't think we should check in code only to remove it in a follow-on PR.

Done



@gaocegege (Member Author)

pkg/controller.v2/controller_pod.go, line 94 at r7 (raw file):

Previously, ScorpioCPH (Penghao Cen) wrote…

Hi, getDiffPodIndexes() will check which replica should be handled.

@ScorpioCPH I think Jeremy means that we use an integer instead of a map to represent the succeeded/failed pods.



@jlewi (Contributor) commented Apr 18, 2018

Review status: 3 of 15 files reviewed at latest revision, 12 unresolved discussions.


pkg/controller.v2/controller_pod.go, line 94 at r7 (raw file):

Previously, gaocegege (Ce Gao) wrote…

@ScorpioCPH I think Jeremy means that we use an integer instead of a map to represent the succeeded/failed pods.

I think the problem is that we check if diff < 0 to decide whether there is any work to do.

As noted in my earlier comment, I think that check is insufficient under certain conditions, e.g. the case where you have multiple pods for a given replica.

You already get a list of all pods for this job. So I think you want to group these pods by replica. You can then correctly check:

  1. If there is a replica missing some pods.
  2. If there are multiple pods for a replica, you can figure out what the correct state and action is.

pkg/controller.v2/controller_pod.go, line 218 at r7 (raw file):

Previously, ScorpioCPH (Penghao Cen) wrote…

We can't assume that only tf-operator will change the pods' status, so we should claim these pods.

Thanks.



@u2takey (Contributor) commented Apr 18, 2018

@jlewi @gaocegege Is v1alpha2 not just an API change? Are we doing a total refactoring? Can we still keep compatibility with v1alpha1?

gaocegege referenced this pull request Apr 18, 2018
Signed-off-by: Ce Gao <gaoce@caicloud.io>
@gaocegege (Member Author)

@u2takey We cannot, since the API changes. We refactored the operator because of #300.

Signed-off-by: Ce Gao <gaoce@caicloud.io>
@gaocegege (Member Author)

pkg/controller.v2/controller_pod.go, line 203 at r7 (raw file):

Previously, jlewi (Jeremy Lewi) wrote…

Isn't controller_pod.go already importing log?

Done.



@gaocegege (Member Author) commented Apr 18, 2018

pkg/controller.v2/controller_pod.go, line 94 at r7 (raw file):

Previously, jlewi (Jeremy Lewi) wrote…

I think the problem is that we check if diff < 0 to decide whether there is any work to do.

As noted in my earlier comment, I think that check is insufficient under certain conditions, e.g. the case where you have multiple pods for a given replica.

You already get a list of all pods for this job. So I think you want to group these pods by replica. You can then correctly check:

  1. If there is a replica missing some pods.
  2. If there are multiple pods for a replica, you can figure out what the correct state and action is.

I am not sure if I understand you. Do you mean code similar to this:

func (tc *TFJobController) syncPods(pods []*v1.Pod, replicas int, logger *log.Entry) {
	podSlices := getPodSlices(pods, replicas, logger)
	for index, podSlice := range podSlices {
		if len(podSlice) > 1 {
			logger.Warningf("We have too many pods for worker %d", index)
			// Kill some.
		}
		if len(podSlice) == 0 {
			// Create one.
			continue
		}
		pod := podSlice[0]
		// We already have one; check whether it succeeded or failed.
		_ = pod
	}
}

func getPodSlices(pods []*v1.Pod, replicas int, logger *log.Entry) [][]*v1.Pod {
	podSlices := make([][]*v1.Pod, replicas)
	for _, pod := range pods {
		if _, ok := pod.Labels[tfReplicaIndexLabel]; !ok {
			logger.Warning("The pod does not have the index label.")
			continue
		}
		index, err := strconv.Atoi(pod.Labels[tfReplicaIndexLabel])
		if err != nil {
			logger.Warningf("Error when strconv.Atoi: %v", err)
			continue
		}
		if index < 0 || index >= replicas {
			logger.Warningf("The label index is not expected: %d", index)
			continue
		}

		podSlices[index] = append(podSlices[index], pod)
	}
	return podSlices
}


@jlewi (Contributor) commented Apr 19, 2018

Review status: 3 of 17 files reviewed at latest revision, 11 unresolved discussions.


cmd/tf-operator.v2/app/options/options.go, line 38 at r2 (raw file):

Previously, gaocegege (Ce Gao) wrote…

Done

Thanks.


pkg/controller.v2/controller_pod.go, line 94 at r7 (raw file):

Previously, gaocegege (Ce Gao) wrote…

I am not sure if I understand you. Do you mean code similar to this:

func (tc *TFJobController) syncPods(pods []*v1.Pod, replicas int, logger *log.Entry) {
	podSlices := getPodSlices(pods, replicas, logger)
	for index, podSlice := range podSlices {
		if len(podSlice) > 1 {
			logger.Warningf("We have too many pods for worker %d", index)
			// Kill some.
		}
		if len(podSlice) == 0 {
			// Create one.
			continue
		}
		pod := podSlice[0]
		// We already have one; check whether it succeeded or failed.
		_ = pod
	}
}

func getPodSlices(pods []*v1.Pod, replicas int, logger *log.Entry) [][]*v1.Pod {
	podSlices := make([][]*v1.Pod, replicas)
	for _, pod := range pods {
		if _, ok := pod.Labels[tfReplicaIndexLabel]; !ok {
			logger.Warning("The pod does not have the index label.")
			continue
		}
		index, err := strconv.Atoi(pod.Labels[tfReplicaIndexLabel])
		if err != nil {
			logger.Warningf("Error when strconv.Atoi: %v", err)
			continue
		}
		if index < 0 || index >= replicas {
			logger.Warningf("The label index is not expected: %d", index)
			continue
		}

		podSlices[index] = append(podSlices[index], pod)
	}
	return podSlices
}

Exactly. I'd be fine merging that snippet with TODOs to flesh out the implementation.



Signed-off-by: Ce Gao <gaoce@caicloud.io>
@gaocegege (Member Author)

pkg/controller.v2/controller_pod.go, line 94 at r7 (raw file):

Previously, jlewi (Jeremy Lewi) wrote…

Exactly. I'd be fine merging that snippet with TODOs to flesh out the implementation.

I added the snippet to the code and added a TODO in reconcilePods. PTAL



@ScorpioCPH (Member)

@gaocegege Great job! Looking forward to seeing this PR merged.

@jlewi (Contributor) commented Apr 20, 2018

Thanks!
/lgtm
/approve

@k8s-ci-robot

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: jlewi

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot merged commit 9e07f6d into kubeflow:master Apr 20, 2018
@gaocegege gaocegege deleted the implementation branch April 20, 2018 07:13
yph152 pushed a commit to yph152/tf-operator that referenced this pull request Jun 18, 2018
* *: Add implementation for v1alpha2

Signed-off-by: Ce Gao <gaoce@caicloud.io>

* linter: Fix

Signed-off-by: Ce Gao <gaoce@caicloud.io>

* options: Remove default config for kubeconfig

Signed-off-by: Ce Gao <gaoce@caicloud.io>

* service: Add const

Signed-off-by: Ce Gao <gaoce@caicloud.io>

* server.go: Remove new lines

Signed-off-by: Ce Gao <gaoce@caicloud.io>

* *: Fix import styles

Signed-off-by: Ce Gao <gaoce@caicloud.io>

* controller.go: Add comment for handler

Signed-off-by: Ce Gao <gaoce@caicloud.io>

* controller_pod: Fix name

Signed-off-by: Ce Gao <gaoce@caicloud.io>

* options: Remove kubeconfig

Signed-off-by: Ce Gao <gaoce@caicloud.io>

* *: Update

Signed-off-by: Ce Gao <gaoce@caicloud.io>

* controller_pod: Add TODO

Signed-off-by: Ce Gao <gaoce@caicloud.io>
jetmuffin pushed a commit to jetmuffin/tf-operator that referenced this pull request Jul 9, 2018