
Add unit tests for common.go in pkg/util #5331

Merged

Conversation

NishantBansal2003
Contributor

What type of PR is this?
/kind failing-test

What this PR does / why we need it:
The test case coverage for common.go has been increased to 100%.
Which issue(s) this PR fixes:
Ref #5235

Special notes for your reviewer:
To verify the changes in the pkg/util directory, run the following commands:

go test ./... -coverprofile=coverage.out
go tool cover -html=coverage.out -o coverage.html
open coverage.html
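
If you prefer to run from the repository root, the same check can be scoped to pkg/util (just a convenience, not part of this change):

go test ./pkg/util/... -coverprofile=coverage.out
go tool cover -func=coverage.out
go tool cover -html=coverage.out -o coverage.html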

Does this PR introduce a user-facing change?:

NONE

@karmada-bot karmada-bot added the kind/failing-test Categorizes issue or PR as related to a consistently or frequently failing test. label Aug 8, 2024
@karmada-bot karmada-bot added the size/M Denotes a PR that changes 30-99 lines, ignoring generated files. label Aug 8, 2024
@codecov-commenter commented Aug 8, 2024

⚠️ Please install the Codecov GitHub app to ensure uploads and comments are reliably processed by Codecov.

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 28.48%. Comparing base (5a10d75) to head (9ca6274).

❗ Your organization needs to install the Codecov GitHub app to enable full functionality.

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #5331      +/-   ##
==========================================
+ Coverage   28.45%   28.48%   +0.02%     
==========================================
  Files         632      632              
  Lines       43812    43812              
==========================================
+ Hits        12466    12479      +13     
+ Misses      30445    30437       -8     
+ Partials      901      896       -5     
Flag Coverage Δ
unittests 28.48% <ø> (+0.02%) ⬆️

Flags with carried forward coverage won't be shown.


@NishantBansal2003 NishantBansal2003 force-pushed the unit-tests-util-common branch 2 times, most recently from 6805621 to 9ca6274 Compare August 8, 2024 17:28
@karmada-bot karmada-bot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. and removed size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Aug 8, 2024
@NishantBansal2003
Contributor Author

Hey @XiShanYongYe-Chang, I've added 100% test coverage for pkg/util/common.go. PTAL...

@XiShanYongYe-Chang
Member

Thanks @NishantBansal2003
/assign

@NishantBansal2003
Contributor Author

Hey @XiShanYongYe-Chang, I have some doubts that I hope you can resolve:

  1. Suppose there is a helper function used to test the functionality of another function, but not all aspects of the helper function are utilized (and therefore remain untested). Should we create separate tests specifically for this helper function, or is it unnecessary to test it?
  2. I've read some of your reviews on PRs where you mentioned, “Although the current test content improves the file coverage, the test cases have no real meaning. Additionally, the maintenance cost is increased.” Could you explain this further so I can avoid making similar mistakes?

@XiShanYongYe-Chang
Member

Suppose there is a helper function used to test the functionality of another function, but not all aspects of the helper function are utilized (and therefore remain untested). Should we create separate tests specifically for this helper function, or is it unnecessary to test it?

I'm afraid I might answer this wrong. Do you have any specific examples?

I've read some of your reviews on PRs where you mentioned, “Although the current test content improves the file coverage, the test cases have no real meaning. Additionally, the maintenance cost is increased.” Could you explain this further so I can avoid making similar mistakes?

The functions to be tested should have clear purposes, and their logic should be testable. For example, different test paths can form different test cases. If a test case requires the same maintenance cost as the function under test, then I don't think the test is needed.
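
As a rough illustration (not code from this PR; diffKeys below is just a self-contained stand-in for the function under test), a table-driven test where each entry exercises a different path might look like this:

package util

import (
	"reflect"
	"sort"
	"testing"
)

// diffKeys is only a stand-in for the real function under test: it returns
// the keys present in current but missing from previous.
func diffKeys(previous, current map[string]struct{}) []string {
	var added []string
	for k := range current {
		if _, ok := previous[k]; !ok {
			added = append(added, k)
		}
	}
	sort.Strings(added) // map iteration order is random, so sort for a stable comparison
	return added
}

func TestDiffKeys(t *testing.T) {
	tests := []struct {
		name     string
		previous map[string]struct{}
		current  map[string]struct{}
		want     []string
	}{
		{
			name:     "key added",
			previous: map[string]struct{}{"a": {}},
			current:  map[string]struct{}{"a": {}, "b": {}},
			want:     []string{"b"},
		},
		{
			name:     "no change",
			previous: map[string]struct{}{"a": {}},
			current:  map[string]struct{}{"a": {}},
			want:     nil,
		},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			if got := diffKeys(tt.previous, tt.current); !reflect.DeepEqual(got, tt.want) {
				t.Errorf("diffKeys() = %v, want %v", got, tt.want)
			}
		})
	}
}

Each table entry covers a different path through the function, which is what makes the case worth maintaining.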

Comment on lines 116 to 122
added, removed := DiffKey(tt.previous, tt.current)
isAddedExpected := reflect.DeepEqual(added, tt.expectedAdded)
isRemovedExpected := reflect.DeepEqual(removed, tt.expectedRemoved)
if !isAddedExpected && !isRemovedExpected {
	t.Errorf("added = %v want %v, removed = %v want %v", added, tt.expectedAdded, removed, tt.expectedRemoved)
}
if !isAddedExpected {
	t.Errorf("added = %v, want %v", added, tt.expectedAdded)
}
if !isRemovedExpected {
	t.Errorf("removed = %v want %v", removed, tt.expectedRemoved)
}
Member
How about update like this:

                        added, removed := DiffKey(tt.previous, tt.current)
-                       isAddedExpected := reflect.DeepEqual(added, tt.expectedAdded)
-                       isRemovedExpected := reflect.DeepEqual(removed, tt.expectedRemoved)
-                       if !isAddedExpected && !isRemovedExpected {
-                               t.Errorf("added = %v want %v, removed = %v want %v", added, tt.expectedAdded, removed, tt.expectedRemoved)
-                       }
-                       if !isAddedExpected {
+                       if !reflect.DeepEqual(added, tt.expectedAdded) {
                                t.Errorf("added = %v, want %v", added, tt.expectedAdded)
                        }
-                       if !isRemovedExpected {
+                       if !reflect.DeepEqual(removed, tt.expectedRemoved) {
                                t.Errorf("removed = %v want %v", removed, tt.expectedRemoved)
                        }

Comment on lines 142 to 163
t.Run("keys exist", func(t *testing.T) {
mcs := map[string]sigMultiCluster{
kubefed.Name: kubefed,
karmada.Name: karmada,
}
got := Keys(mcs)
expect := []string{kubefed.Name, karmada.Name}
sort.Strings(got)
sort.Strings(expect)
if !reflect.DeepEqual(got, expect) {
t.Errorf("got = %v, want %v", got, expect)
}
})
t.Run("empty keys", func(t *testing.T) {
var mcs map[string]sigMultiCluster
got := Keys(mcs)
var expect []string
sort.Strings(got)
if !reflect.DeepEqual(got, expect) {
t.Errorf("got = %v, want %v", got, expect)
}
})
Member
How about rewriting this test as table-driven as well? Just like your rewrite above.

Contributor Author
I thought the same. Now I'm doing it...
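
Roughly, the table-driven shape I have in mind (just a sketch reusing the kubefed/karmada fixtures and the sigMultiCluster type from the snippet above; the version I actually push may differ):

tests := []struct {
	name   string
	mcs    map[string]sigMultiCluster
	expect []string
}{
	{
		name: "keys exist",
		mcs: map[string]sigMultiCluster{
			kubefed.Name: kubefed,
			karmada.Name: karmada,
		},
		expect: []string{kubefed.Name, karmada.Name},
	},
	{
		name:   "empty keys",
		mcs:    nil,
		expect: nil,
	},
}
for _, tt := range tests {
	t.Run(tt.name, func(t *testing.T) {
		got := Keys(tt.mcs)
		// Sort both slices because map iteration order is not deterministic.
		sort.Strings(got)
		sort.Strings(tt.expect)
		if !reflect.DeepEqual(got, tt.expect) {
			t.Errorf("got = %v, want %v", got, tt.expect)
		}
	})
}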

Signed-off-by: Nishant Bansal <nishant.bansal.mec21@iitbhu.ac.in>
@NishantBansal2003
Contributor Author

I'm afraid I might answer this wrong. Do you have any specific examples?

Yeah, like this:

func NewResponder(response *httptest.ResponseRecorder) *MockResponder {
	return &MockResponder{
		resp: response,
	}
}

It is used to test TestConnectCluster in https://github.com/karmada-io/karmada/blob/master/pkg/util/proxy/proxy_test.go, but other functions, like
func (f *MockResponder) Object(statusCode int, obj runtime.Object) {
	f.resp.Code = statusCode
	if obj != nil {
		err := json.NewEncoder(f.resp).Encode(obj)
		if err != nil {
			f.Error(err)
		}
	}
}

have not been used and hence remain untested. So should I create separate tests specifically for this (untested) function, or is it unnecessary to test it?

@NishantBansal2003
Contributor Author

PTAL...

@XiShanYongYe-Chang
Member

Thanks~
/lgtm
/approve

@karmada-bot karmada-bot added the lgtm Indicates that a PR is ready to be merged. label Aug 9, 2024
@karmada-bot
Copy link
Collaborator

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: XiShanYongYe-Chang

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@karmada-bot karmada-bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Aug 9, 2024
@karmada-bot karmada-bot merged commit 37488bd into karmada-io:master Aug 9, 2024
12 checks passed
@XiShanYongYe-Chang
Member

have not been used and hence remain untested. So should I create separate tests specifically for this (untested) function, or is it unnecessary to test it?

I understand that these are part of the test framework, so there's no need to test them.

@RainbowMango RainbowMango added this to the v1.11 milestone Aug 31, 2024