
ADD pod resource & routes to get member-cluster namespace/deployment/pod #150

Merged: 1 commit into karmada-io:main on Dec 6, 2024

Conversation

Heylosky
Contributor

What type of PR is this?
/kind feature

What this PR does / why we need it:
Add pod resources. Add routes for obtaining member-cluster pods (list and detail), namespaces, and deployments.
Set up a router group under /api/v1 with the prefix /member/:clustername to get member-cluster resources.
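Roughly, the route layout looks like the sketch below; handler names and exact sub-paths are illustrative assumptions, not the exact code in this PR:

package router

import (
	"net/http"

	"github.com/gin-gonic/gin"
)

// Hypothetical sketch of the new route group: everything under
// /api/v1/member/:clustername is resolved against the named member cluster.
func registerMemberRoutes(v1 *gin.RouterGroup) {
	member := v1.Group("/member/:clustername")
	member.GET("/namespace", listMemberNamespaces)
	member.GET("/deployment", listMemberDeployments)
	member.GET("/pod", listMemberPods)
	member.GET("/pod/:namespace/:pod", getMemberPodDetail)
}

// Minimal stub handlers so the sketch compiles; the real handlers call into
// the namespace/deployment/pod resource packages with a member-cluster client.
func listMemberNamespaces(c *gin.Context)  { c.JSON(http.StatusOK, gin.H{"cluster": c.Param("clustername")}) }
func listMemberDeployments(c *gin.Context) { c.JSON(http.StatusOK, gin.H{"cluster": c.Param("clustername")}) }
func listMemberPods(c *gin.Context)        { c.JSON(http.StatusOK, gin.H{"cluster": c.Param("clustername")}) }
func getMemberPodDetail(c *gin.Context)    { c.JSON(http.StatusOK, gin.H{"namespace": c.Param("namespace"), "pod": c.Param("pod")}) }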

Which issue(s) this PR fixes:
Fixes #

Special notes for your reviewer:

Does this PR introduce a user-facing change?:

NONE

@karmada-bot karmada-bot added the kind/feature Categorizes issue or PR as related to a new feature. label Nov 27, 2024
@karmada-bot karmada-bot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Nov 27, 2024
@warjiang
Contributor

/assign

@warjiang
Contributor

Thanks, I'll check it today~

if err != nil {
common.Fail(c, err)
return
}
Contributor

This duplicates the check for the existence of the member cluster. For this kind of check, we can use a gin middleware, like:

// middleware.go
func EnsureMemberClusterMiddleware() gin.HandlerFunc {
	return func(c *gin.Context) {
		karmadaClient := client.InClusterKarmadaClient()
		_, err := karmadaClient.ClusterV1alpha1().Clusters().Get(context.TODO(), c.Param("clustername"), metav1.GetOptions{})
		if err != nil {
			c.AbortWithStatusJSON(http.StatusOK, common.BaseResponse{
				Code: 500,
				Msg:  err.Error(),
			})
			return
		}
		c.Next()
	}
}

// usage in setup.go
member := v1.Group("/member/:clustername")
member.Use(EnsureMemberClusterMiddleware())

By the time the code executes in the handler, the existence of the member cluster is already ensured; otherwise the request is aborted in the middleware.

@warjiang
Contributor

I've checked the code locally; the only problem is the duplicate member-cluster check. Apart from that, one question I want to discuss: for all kinds of resources in the member cluster, do we need to make another copy of the handlers, just like the ones we implemented for the karmada apiserver? It seems the answer is yes, but is there a better way to implement that? Just a question, I have no idea so far.

@Heylosky
Contributor Author

Heylosky commented Dec 2, 2024

I've checked the code locally; the only problem is the duplicate member-cluster check. Apart from that, one question I want to discuss: for all kinds of resources in the member cluster, do we need to make another copy of the handlers, just like the ones we implemented for the karmada apiserver? It seems the answer is yes, but is there a better way to implement that? Just a question, I have no idea so far.

Perhaps we could add a middleware for the karmada apiserver as well, put the client initialization in the middleware, and then pass the client around using c.Set and c.Get? This way we could use the same implementation for all resources in the member cluster and on the karmada control plane. For example:

func EnsureMemberClusterMiddleware() gin.HandlerFunc {
	return func(c *gin.Context) {
		karmadaClient := client.InClusterKarmadaClient()
		_, err := karmadaClient.ClusterV1alpha1().Clusters().Get(context.TODO(), c.Param("clustername"), metav1.GetOptions{})
		if err != nil {
			c.AbortWithStatusJSON(http.StatusOK, common.BaseResponse{
				Code: 500,
				Msg:  err.Error(),
			})
			return
		}
		// Init client for member cluster
		memberClient := client.InClusterClientForMemberCluster(c.Param("clustername"))
		c.Set("client", memberClient)

		c.Next()
	}
}

func V1Middleware() gin.HandlerFunc {
	return func(c *gin.Context) {
		// Init client for controller cluster
		k8sClient := client.InClusterClientForKarmadaApiServer()
		c.Set("client", k8sClient)

		c.Next()
	}
}

For example, to get deployments:

func handleGetDeployments(c *gin.Context) {
	if value, exists := c.Get("client"); exists {
		if memberClient, ok := value.(kubeclient.Interface); ok {
			namespace := common.ParseNamespacePathParameter(c)
			dataSelect := common.ParseDataSelectPathParameter(c)
			result, err := deployment.GetDeploymentList(memberClient, namespace, dataSelect)
			if err != nil {
				common.Fail(c, err)
				return
			}
			common.Success(c, result)
		} else {
			c.JSON(500, gin.H{"error": "Failed to assert memberClient to kubernetes.Interface"})
		}
	} else {
		c.JSON(500, gin.H{"error": "memberClient not found"})
	}
}

In this case, maybe we can avoid some of the duplication?

@warjiang
Contributor

warjiang commented Dec 3, 2024

Perhaps we could add a middleware for the karmada apiserver as well, put the client initialization in the middleware, and then pass the client around using c.Set and c.Get? [...] In this case, maybe we can avoid some of the duplication?

+1, I think it's a good practice. One more suggestion: maybe we can use different client names to distinguish the client for k8s from the client for karmada.

@Heylosky
Contributor Author

Heylosky commented Dec 3, 2024

+1, I think it's a good practice. One more suggestion: maybe we can use different client names to distinguish the client for k8s from the client for karmada.

You mean the client name in c.Get("client")? But if the key in the context is different, the handler function has to be different as well, I think. (So it cannot be reused by both member and karmada?)

@warjiang
Contributor

warjiang commented Dec 3, 2024

You mean the client name in c.Get("client")? But if the key in the context is different, the handler function has to be different as well, I think. (So it cannot be reused by both member and karmada?)

I mean: for the k8s client we can name the key kubernetes-client, for the karmada client karmada-client, and for the different member-cluster clients member-client.

@Heylosky
Contributor Author

Heylosky commented Dec 3, 2024

I mean: for the k8s client we can name the key kubernetes-client, for the karmada client karmada-client, and for the different member-cluster clients member-client.

But in this case, how do we know which key to retrieve? For example, when getting deployments:

func handleGetDeployments(c *gin.Context) {
	// c.Get("kubernetes-client")? or c.Get("karmada-client")? or c.Get("member-client")?
}

I think the routes might look like this:

r := router.V1()
r.GET("/deployment", handleGetDeployments)
r = router.MemberV1()
r.GET("/deployment", handleGetDeployments)

@warjiang
Contributor

warjiang commented Dec 4, 2024

But in this case, how do we know which key to retrieve? [...] I think the routes might look like this: [...]

Yeah, taking both problems into account at the same time is a little hard. We can init the clients (for kubernetes, for karmada, and for the member cluster) in the middleware functions and use them on demand. If we cannot reuse the resource handlers for now, we can just put this problem on hold; it's only some more duplicate code, and maybe we can reduce that kind of code in the future~
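For reference, a minimal sketch of the "init the clients in the middleware, use them on demand" idea, reusing only the helpers already shown in the snippets above; the key names follow the earlier suggestion, and the rest is an assumption for illustration, not the code merged in this PR:

// Assumes the same imports as the earlier snippets in this thread.
// The middleware initializes every client it can and stores each under its own key.
func ClientsMiddleware() gin.HandlerFunc {
	return func(c *gin.Context) {
		c.Set("karmada-client", client.InClusterKarmadaClient())
		if name := c.Param("clustername"); name != "" {
			// Only routes under /member/:clustername get a member-cluster client.
			c.Set("member-client", client.InClusterClientForMemberCluster(name))
		}
		c.Next()
	}
}

// A handler then retrieves whichever client it needs by key.
func handleGetMemberDeployments(c *gin.Context) {
	value, exists := c.Get("member-client")
	if !exists {
		c.JSON(500, gin.H{"error": "member-client not found"})
		return
	}
	memberClient, ok := value.(kubeclient.Interface)
	if !ok {
		c.JSON(500, gin.H{"error": "unexpected client type for member-client"})
		return
	}
	namespace := common.ParseNamespacePathParameter(c)
	dataSelect := common.ParseDataSelectPathParameter(c)
	result, err := deployment.GetDeploymentList(memberClient, namespace, dataSelect)
	if err != nil {
		common.Fail(c, err)
		return
	}
	common.Success(c, result)
}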

@karmada-bot karmada-bot added the do-not-merge/contains-merge-commits Indicates a PR which contains merge commits. label Dec 6, 2024
@warjiang
Contributor

warjiang commented Dec 6, 2024

@Heylosky can you rebase your code and push again?

@karmada-bot karmada-bot removed the do-not-merge/contains-merge-commits Indicates a PR which contains merge commits. label Dec 6, 2024
@Heylosky
Contributor Author

Heylosky commented Dec 6, 2024

@Heylosky can you rebase your code and push again?

Sorry, I just pushed again. I have added the middleware and removed the duplicate code for the member-cluster check. I didn't add the client initialization to the middleware functions in this PR; I think it can be added later, together with the future code optimization of the resource handlers.

@warjiang
Contributor

warjiang commented Dec 6, 2024

/lgtm
/approve
thanks~ @Heylosky

@karmada-bot karmada-bot added the lgtm Indicates that a PR is ready to be merged. label Dec 6, 2024
@karmada-bot
Collaborator

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: warjiang

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files.

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@karmada-bot karmada-bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Dec 6, 2024
@karmada-bot karmada-bot merged commit 3afef47 into karmada-io:main Dec 6, 2024
6 checks passed