ADD pod resource & routes to get member-cluster namespace/deployment/pod #150
Conversation
/assign
Thanks, I'll check it today~
```go
if err != nil {
	common.Fail(c, err)
	return
}
```
This is a duplicate check for the existence of the member cluster. For this kind of check, we can use middleware in Gin, like:
```go
// middleware.go
func EnsureMemberClusterMiddleware() gin.HandlerFunc {
	return func(c *gin.Context) {
		karmadaClient := client.InClusterKarmadaClient()
		_, err := karmadaClient.ClusterV1alpha1().Clusters().Get(context.TODO(), c.Param("clustername"), metav1.GetOptions{})
		if err != nil {
			c.AbortWithStatusJSON(http.StatusOK, common.BaseResponse{
				Code: 500,
				Msg:  err.Error(),
			})
			return
		}
		c.Next()
	}
}
```

```go
// usage in setup.go
member = v1.Group("/member/:clustername")
member.Use(EnsureMemberClusterMiddleware())
```
When the code executes in the handler, we can be sure the member cluster exists; otherwise the request is aborted in the middleware.
I've checked the code locally; the only problem is the duplicate check of the member cluster. Apart from that, one question I want to discuss: for every kind of resource in the member cluster, do we need to make another copy of the handlers, just like the ones we implemented for the Karmada apiserver? It seems the answer is yes, but is there a better way to implement it? Just a question, I have no idea so far.
Perhaps add a middleware for the karmada apiserver as well, put the client initialization process in the middleware, and then pass the client using the
For example, to get deployments:
In this case, maybe we can avoid some copying?
+1, I think it's good practice. One more suggestion: maybe we can use different client names to distinguish the client for k8s from the client for Karmada.
You mean
I mean for the k8s client we can name the key as
But in this case, how can we know which key we should retrieve? For example, getting deployments
I think the route may look like this
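One possible answer to "which key should we retrieve" is to derive the key from the route itself. This helper is purely illustrative (the key names and the `/member/:clustername` prefix convention follow the routes in this PR, but nothing here is committed code):

```go
package main

import (
	"fmt"
	"strings"
)

// clientKeyFor maps a request path to the name under which the
// matching client would be stored: "karmada" for control-plane
// routes, "member-<clustername>" for member-cluster routes.
func clientKeyFor(path string) string {
	const prefix = "/member/"
	if strings.HasPrefix(path, prefix) {
		cluster := strings.SplitN(strings.TrimPrefix(path, prefix), "/", 2)[0]
		return "member-" + cluster
	}
	return "karmada"
}

func main() {
	fmt.Println(clientKeyFor("/deployments"))                // control-plane route
	fmt.Println(clientKeyFor("/member/member1/deployments")) // member-cluster route
}
```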
Yeah, taking these two problems into consideration at the same time is a bit hard. We can init the clients (for Kubernetes, for Karmada, and for member clusters) in the middleware functions and use them on demand. If we cannot reuse the resource handlers currently, we can just hold off on this problem; it's only some more duplicate code, and maybe we can reduce that kind of code in the future~
@Heylosky can you rebase your code and push again?
Signed-off-by: Heylosky <2547226479@qq.com>
Force-pushed from 3201af6 to e16cf8f
Sorry, I just pushed again. I have added the middleware and removed the duplicate code for the member-cluster check. I didn't add the
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: warjiang. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
What type of PR is this?
/kind feature
What this PR does / why we need it:
Add pod resources. Add routes for obtaining member cluster pods (list and detail), namespaces, and deployments. Set a router group under /api/v1 with prefix /member/:clustername to get member cluster resources.

Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
Does this PR introduce a user-facing change?: