Bug fixes #298
Open
gAldeia wants to merge 15 commits into cavalab:master from gAldeia:bug_fixes
Conversation
Previously, I made a change to calculate the complexity before logging, because the complexity column was filled with zeros if we don't use the NSGA2 survival. Still, I was updating the stats after sending them to the log file. Now I'm doing everything in the right order, and the complexity column in the log file no longer gets filled with zeros.
We also make sure complexity is calculated before returning it.
FEAT always works with minimization problems, so the previous accuracy calculation was actually correct. I added a comment explaining this, and changed how we calculate it.
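Since every objective is minimized, an accuracy-style score has to be reported as a loss. A toy illustration of that convention (not FEAT's actual code):

```python
# Toy illustration of the general convention (not FEAT's code):
# a maximization metric such as accuracy is turned into a loss
# so that "lower is better" holds for every objective.
def accuracy_to_loss(y_true, y_pred):
    correct = sum(int(t == p) for t, p in zip(y_true, y_pred))
    return 1.0 - correct / len(y_true)

print(accuracy_to_loss([0, 1, 1, 0], [0, 1, 0, 0]))  # accuracy 0.75 -> loss 0.25
```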
FEAT is designed to work with contiguous class encodings, and does not perform the transformation itself. Instead of changing the behavior of FEAT, I think the error message is already clear enough, and the user can easily work around this problem if the data is not in the form FEAT expects. For the record, this is the new test function:

```python
def check_estimators_dtypes(name, estimator_orig):
    rnd = np.random.RandomState(0)
    X_train_32 = 3 * rnd.uniform(size=(20, 5)).astype(np.float32)
    X_train_32 = _enforce_estimator_tags_X(estimator_orig, X_train_32)
    X_train_64 = X_train_32.astype(np.float64)
    X_train_int_64 = X_train_32.astype(np.int64)
    X_train_int_32 = X_train_32.astype(np.int32)
    y = np.array([1, 2] * 10, dtype=np.int64)
    y = _enforce_estimator_tags_y(estimator_orig, y)
```

and FEAT won't work due to:

```python
self.classes_ = [int(i) for i in np.unique(np.asarray(y))]

if any([i != j for i, j in
        zip(self.classes_, np.arange(np.max(self.classes_)))]):
    raise ValueError('y must be a contiguous set of labels from ',
                     '0 to n_classes. y contains the values {}'.format(
                         self.classes_))
```
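For users who hit this error, one possible workaround is to remap the labels to a contiguous 0..n_classes-1 range before fitting, e.g. with scikit-learn's LabelEncoder. This is a sketch, and the commented-out Feat usage at the end is only an assumption about the wrapper's API:

```python
import numpy as np
from sklearn.preprocessing import LabelEncoder

# y with labels {1, 2} -- not a contiguous range starting at 0,
# so FEAT's check above would raise a ValueError.
y = np.array([1, 2] * 10, dtype=np.int64)

# Remap to {0, 1} before fitting; keep the encoder around to
# translate predictions back to the original labels if needed.
le = LabelEncoder()
y_encoded = le.fit_transform(y)   # array of 0s and 1s

# est = Feat(classification=True)  # hypothetical usage of the FEAT wrapper
# est.fit(X, y_encoded)
# y_pred = le.inverse_transform(est.predict(X))
```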
Fixes stats being reported before updating the objectives
Previously, I made a change to force the calculation of the complexity:
https://github.com/cavalab/feat/blob/459e021d4fbccf1ed4dfa6bc37d15aca71bfb49b/src/feat.cc#L719C2-L724
Still, I was updating the stats after sending them to the log file (line 714).
Now I'm doing everything in the right order, and the log file doesn't get filled with zeros in complexity.
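The gist of the reordering, as a purely illustrative Python sketch (the actual change lives in src/feat.cc, and none of these names are FEAT's real API):

```python
# Purely illustrative pseudocode of the ordering fix.

class Stats:
    def __init__(self):
        self.med_complexity = 0

    def update(self, pop):
        # recompute objectives, including complexity, for the population
        complexities = sorted(ind["complexity"] for ind in pop)
        self.med_complexity = complexities[len(complexities) // 2]

def log_row(stats):
    print("median complexity:", stats.med_complexity)

pop = [{"complexity": 5}, {"complexity": 9}, {"complexity": 7}]
stats = Stats()

# Before the fix: the row was logged first, so the file showed 0.
# After the fix: update the stats first, then log.
stats.update(pop)
log_row(stats)   # prints 7, not 0
```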
Implements get_complexity for python wrapper
Since I'm changing the code to report the complexity in the logfile (if complexity is specified as one of the objectives), I also thought it would be great to be able to individually assess the complexity of a final model. I added this feature, so you can call get_complexity after fitting the individual.
I also make sure complexity is calculated before returning it.
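A sketch of how the new method could be called from the Python wrapper. The import path and constructor shown are assumptions about the wrapper's API; only get_complexity itself is what this PR adds:

```python
import numpy as np
from feat import Feat  # assumed import path for FEAT's Python wrapper

X = np.random.rand(100, 5)
y = X[:, 0] + 2 * X[:, 1]

est = Feat()   # constructor arguments omitted; defaults assumed
est.fit(X, y)

# New in this PR: complexity is recalculated (if needed) before being
# returned, so it can be queried directly on the fitted model.
print(est.get_complexity())
```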