Fix bug with null replication metrics when row is all null #706
Conversation
just one comment left around test values... otherwise, LGTM
data.loc[max_null_index] = {}

# Fill in missing rows with NaN and convert types to numeric
data = data.reindex(range(data.index.max() + 1), fill_value=np.nan).apply(
this could be really expensive for large data. Is there another way around this?
What specific error is requiring this change / where?
+1 good catch @JGSweets
If a row is all null, the dataframe will be missing a row, which causes an error on the line `sum_null = data.iloc[null_indices, data.columns != col_id].sum().to_numpy()`. `reindex` fills in these missing rows with NaN.
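To make the failure mode concrete, here is a minimal sketch (with a toy frame, not the profiler's actual data) of how a missing all-null row breaks positional indexing, and how `reindex` repairs it:

```python
import numpy as np
import pandas as pd

# Toy frame standing in for `data`: row 2 was entirely null, so it is
# absent from the cleaned samples that built the frame.
data = pd.DataFrame({"a": [1.0, 2.0], "b": [3.0, 4.0]}, index=[0, 1])
null_indices = [1, 2]  # index 2 refers to the fully-null row

# Without reindexing, positional lookup past the end of the frame raises.
try:
    data.iloc[null_indices].sum()
except IndexError:
    print("IndexError: positional index 2 is out of bounds")

# Reindexing restores the missing row as NaN, so the aggregation succeeds
# (NaN is skipped by sum by default).
filled = data.reindex(range(3), fill_value=np.nan)
print(filled.loc[null_indices].sum())
```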
The `.reindex()`, though, when done on an entire dataset could be very costly in terms of runtime; that is the concern around adding this operation to the process.
`null_indices` + non-complete null rows?
or even better potentially would be `null_indices` - complete null rows (theoretically there should be fewer completely null rows)
Another way would be to reindex a single column in `clean_samples` and then combine all of them into `data`, which should also fix the bug.
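A minimal sketch of that alternative, assuming a hypothetical `clean_samples` dict of per-column Series (each missing the indices where that column was null) and a known total row count:

```python
import pandas as pd

# Hypothetical per-column cleaned samples; names and shapes are
# illustrative, not the profiler's actual internals.
clean_samples = {
    "a": pd.Series([1.0, 2.0], index=[0, 1]),
    "b": pd.Series([3.0], index=[0]),
}
n_rows = 3  # total rows in the original data, including the all-null row 2

# Reindex each column to the full row range, then assemble the frame;
# the all-null row now appears as a row of NaN instead of being absent.
data = pd.DataFrame(
    {col: s.reindex(range(n_rows)) for col, s in clean_samples.items()}
)
```

This trades one whole-frame `reindex` for one small `reindex` per column, which is why it addresses the same bug.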
That would work too
Any way we can do it w/o re-indexing? I think that would be the ideal state.
# If the last row is all null, then add rows to the data DataFrame
max_null_index = max(
    [max(i) for i in getattr(self._profile[0], "null_types_index").values()],
`self._profile[0]`: right now this is getting the first profile every time. Is this correct? Why do we need to add rows to the df?
If the last row(s) is all null, then the maximum null index of any profile should be the index of the last row.
In this test case, `len(self._profile)` is 3 and only the zero-th index has values:

getattr(self._profile[0], "null_types_index").values()  # dict_values([{2, 4}])
getattr(self._profile[1], "null_types_index").values()  # dict_values([])
getattr(self._profile[2], "null_types_index").values()  # dict_values([])
@tonywu315 is this always the case though? Are we sure we can always just pull the zero-th profile in the `self._profile` list?
Head branch was pushed to by a user without write access
sum_null = (
    data.loc[data.index.intersection(null_indices), data.columns != col_id]
    .sum()
    .to_numpy()
)
This code does not error anymore if the entire DataFrame is null.
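A minimal sketch of why the `intersection` guard holds even in the extreme case (toy data; column names are illustrative): intersecting the frame's index with `null_indices` keeps only labels that actually exist, so the `.loc` lookup can never go out of bounds, and summing the resulting (possibly empty) selection yields zeros rather than raising.

```python
import pandas as pd

# Worst case: every row was null, so the cleaned frame is empty.
data = pd.DataFrame(
    {"col_id": pd.Series(dtype=float), "x": pd.Series(dtype=float)}
)
null_indices = [0, 1, 2]
col_id = "col_id"

# Intersection drops the labels that are not in the (empty) index,
# so .loc selects nothing instead of raising an error.
sum_null = (
    data.loc[data.index.intersection(null_indices), data.columns != col_id]
    .sum()
    .to_numpy()
)
```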
this is much cleaner!
beautiful!
LGTM -- great job!
When `null_replication_metrics` is enabled and any row is all null, this errors: