Planned functionality #1
We should use the mode (i.e., most common value) of the column for categorical variables, and the median for continuous variables. Since there’s no easy way to detect continuous vs. categorical variables in pandas, we use a heuristic: If >20% of the values in a column are unique, then it is probably a continuous variable. Otherwise, it is probably a categorical variable. (Related to #1)
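The heuristic above can be sketched roughly as follows (the function name and the threshold parameter are illustrative, not datacleaner's actual API):

```python
import pandas as pd

def impute_missing(df, unique_frac_threshold=0.2):
    """Fill NaNs: median for columns that look continuous
    (numeric and >20% unique values), mode otherwise."""
    df = df.copy()
    for col in df.columns:
        frac_unique = df[col].nunique() / len(df)
        if pd.api.types.is_numeric_dtype(df[col]) and frac_unique > unique_frac_threshold:
            df[col] = df[col].fillna(df[col].median())
        else:
            # mode() can return several values; take the first
            df[col] = df[col].fillna(df[col].mode().iloc[0])
    return df
```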
In my experience it is worth identifying ordinal variables (e.g., numerical grades) and handling them separately. In many cases these can be treated as continuous variables, but sometimes it is necessary to treat them as discrete ones. One example is missing value imputation: if you treat them as continuous, you may end up injecting fake values that can then mislead the downstream analysis. Thanks for the project! I tested it on some of my biomedical datasets and compared the PCA before/after the cleaning. The only case where there were differences was a dataset with discrete variables (exome sequencing), specifically in the columns where some of the values were '0'. There was the following error message:
Indeed, which is why I'm trying to discover how to identify ordinal vs. continuous variables. I posted this question on StackOverflow to brainstorm.
In our software we went with a much simpler approach: letting the user specify a list of attributes to be treated as ordinal. Of course, an automatic solution is far more elegant :)
"Convenience function: Detect if there are non-numerical features and encode them as numerical features" EpistasisLab/tpot#61 |
Do I have to do get_dummies() all by myself? ... get_dummies() accepts a number of kwargs
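A minimal sketch of restricting get_dummies() to specific columns via its kwargs (the column and prefix names here are made up for illustration):

```python
import pandas as pd

df = pd.DataFrame({'Exterior1st': ['ImStucc', 'WdSdng', 'ImStucc'],
                   'LotArea': [8450, 9600, 11250]})

# columns= limits encoding to the listed columns; other columns pass
# through unchanged. prefix= controls the generated column names.
encoded = pd.get_dummies(df, columns=['Exterior1st'], prefix='ext')
```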
I think it is illogical to, e.g., average Exterior1st in the Kaggle House Prices dataset: the average of ImStucc and Wd Sdng seems nonsensical?
CSVW as JSON-LD may be a good way to specify a dataset header with the relevant metadata for such operations? pandas-dev/pandas#3402
You should be able to use the sklearn OneHotEncoder: http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html
Is there a way to specify that I only need certain columns to be expanded into multiple columns with OneHotEncoder?
See the docs you linked and the […]
Do I need to write a FunctionTransformer to stack multiple preprocessing modules?
i.e. for different columns. Or just run […]
Running […]
https://github.com/paulgb/sklearn-pandas
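For reference, scikit-learn's newer ColumnTransformer covers the per-column use case discussed above (the one that sklearn-pandas' DataFrameMapper addressed); a minimal sketch with made-up column names:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder

df = pd.DataFrame({'color': ['red', 'blue', 'red'],
                   'size': [1.0, 2.0, 3.0]})

# Only 'color' is one-hot encoded; 'size' passes through untouched.
# sparse_threshold=0 forces a dense output array.
ct = ColumnTransformer(
    [('onehot', OneHotEncoder(), ['color'])],
    remainder='passthrough',
    sparse_threshold=0,
)
X = ct.fit_transform(df)  # columns: blue, red, size
```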
It may be worth noting that pandas Categoricals have an […]
Does specifying the Categoricals have a different effect than inferring the ordinals from the happenstance sequence of strings in a given dataset?
Any plans to impute NAs rather than replacing continuous variables with the median value?
@adrose, do you mean via model-based imputation?
@rhiever Sorry, I should have been a lot more specific, but yes, something similar to what the Amelia command does in that R package, i.e., bootstrapped linear regression. Happy to expand on it more, or would be excited to hear your thoughts on this function if you think it may be applicable.
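A sketch of model-based imputation with scikit-learn's (experimental) IterativeImputer, which is similar in spirit to, though not the same algorithm as, Amelia's bootstrapped regression:

```python
import numpy as np
# IterativeImputer is experimental and must be enabled explicitly.
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Toy data: second column is roughly twice the first, one value missing.
X = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, np.nan],
              [4.0, 8.0]])

# Each feature with missing values is modeled as a regression on the
# other features, iterating until the estimates stabilize.
imputer = IterativeImputer(random_state=0)
X_filled = imputer.fit_transform(X)
```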
In the immediate future, datacleaner will:
See this tweet chain for more ideas.
If anyone has more ideas, please add them here.