
autoviml / featurewiz


Use advanced feature engineering strategies and select best features from your data set with a single line of code. Created by Ram Seshadri. Collaborators welcome.

License: Apache License 2.0

best-encoders categorical-variables feature-engg feature-engineering feature-extraction feature-selection featuretools rfe rfecv xgboost

featurewiz's Introduction

Join our elite team of contributors!

👋 Welcome to the AutoViML Fan Club Page!
We just hit 3300 stars collectively for all AutoViML libraries on GitHub!!

AutoViML creates innovative Open Source libraries to make data scientists' and machine learning engineers' lives easier and more productive!


Our innovative libraries so far:

  • 🤝 AutoViz Automatically Visualizes any dataset, any size with a single line of code. Now with Bokeh and Holoviews it can make your charts and dashboards interactive!
  • 🤝 Auto_ViML Automatically builds multiple ML models with a single line of code. Uses scikit-learn, XGBoost and CatBoost.
  • 🤝 Auto_TS Automatically builds ARIMA, SARIMAX, VAR, FB Prophet and XGBoost Models on Time Series data sets with a Single Line of Code. Now updated with DASK to handle millions of rows.
  • 🤝 Featurewiz Uses advanced feature engineering strategies and selects the best features from your data set fast with a single line of code. Now updated with DASK to handle millions of rows.
  • 🤝 Deep_AutoViML Builds tensorflow keras models and pipelines for any data set, any size with text, image and tabular data, with a single line of code.
  • 🤝 lazytransform Automatically transforms all categorical, date-time, and NLP variables to numeric in a single line of code, for any data set, any size.
  • 🤝 pandas_dq Automatically finds and fixes data quality issues in your dataset with a single line of code, for pandas.

Feb-2024: Added "Auto Encoders" for automatic feature extraction to the featurewiz library #feature-extraction

On Feb 8, 2024, we released a major update to our popular "featurewiz" library that will transform your input into a latent space with a dimension of latent_dim. This lower dimension (similar to PCA) will enable you to extract the best patterns in your data for the toughest imbalanced and multi-class problems. Try it and let us know!
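The exact featurewiz flags for this feature aren't reproduced here, so below is a generic sketch of the underlying idea in Keras (all names in it, such as latent_dim, n_features and X, are placeholders): train an autoencoder to reconstruct the inputs, then keep the encoder's latent output as the new, lower-dimensional features.

import numpy as np
from tensorflow import keras

latent_dim = 8                     # size of the compressed representation
n_features = 64
X = np.random.rand(1000, n_features).astype("float32")   # stand-in for your data

inputs = keras.Input(shape=(n_features,))
hidden = keras.layers.Dense(32, activation="relu")(inputs)
latent = keras.layers.Dense(latent_dim, activation="relu")(hidden)
decoded = keras.layers.Dense(32, activation="relu")(latent)
outputs = keras.layers.Dense(n_features, activation="linear")(decoded)

autoencoder = keras.Model(inputs, outputs)   # trained to reconstruct X
encoder = keras.Model(inputs, latent)        # reused for feature extraction
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=5, batch_size=64, verbose=0)

X_latent = encoder.predict(X)   # lower-dimensional features, similar in spirit to PCA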

April-2023: Released a major new python library "pandas_dq" #data_quality #dataengineering

On April 2, 2023, we released a major new Python library called "pandas_dq" that will automatically find and fix data quality issues in your train and test dataframes in a single line of code, for any data set, any size.
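A minimal sketch of the intended workflow, assuming the dq_report and Fix_DQ entry points described in the pandas_dq docs (the file name and target column are placeholders):

import pandas as pd
from pandas_dq import dq_report, Fix_DQ

df = pd.read_csv("train.csv")      # placeholder file
dq_report(df, target="target")     # prints a data-quality report for each column

fdq = Fix_DQ()                     # scikit-learn style transformer with default settings
X_fixed = fdq.fit_transform(df.drop(columns=["target"]))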

April-2022: Released a major new python library "lazytransform" #featureengineering #featureselection

On April 3, 2022, we released a major new Python library called "lazytransform" that will automatically transform all categorical, date-time, and NLP variables to numeric in a single line of code, for any data set, any size.
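A minimal sketch, assuming the LazyTransformer class described in the lazytransform docs (X_train, y_train and X_test are placeholders; fit_transform is documented as returning both the transformed features and target):

from lazytransform import LazyTransformer

lazy = LazyTransformer()                                      # defaults: auto-detect and encode
X_train_t, y_train_t = lazy.fit_transform(X_train, y_train)   # fit on train data
X_test_t = lazy.transform(X_test)                             # apply the same transformations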

Jan-2022: Major upgrade to featurewiz: you can now perform feature selection thru fit and transform #MLOps #featureselection

As of version 0.0.90, featurewiz has a scikit-learn compatible feature selection transformer called FeatureWiz. You can use it to perform fit and transform as follows. You will get a scikit-learn Transformer object that you can add to other data pipelines in MLOps to select the top variables from your dataset.
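For example (this mirrors the snippet that recurs in the issues below; X_train, y_train and X_test are your own dataframes):

from featurewiz import FeatureWiz

fwiz = FeatureWiz(corr_limit=0.70, feature_engg='', category_encoders='',
                  dask_xgboost_flag=False, nrows=None, verbose=2)
X_train_selected = fwiz.fit_transform(X_train, y_train)   # fit on train data
X_test_selected = fwiz.transform(X_test)                  # filter test data to the same features
fwiz.features  ### provides the list of selected features ###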

Dec-23-2021 Update: AutoViz now does Wordclouds! #autoviz #wordcloud

AutoViz can now create Wordclouds automatically for the NLP variables in your data. It detects NLP variables automatically and creates wordclouds for them.

Dec 21, 2021: AutoViz now runs on Docker containers as part of MLOps pipelines. Check out Orchest.io

We are excited to announce that AutoViz and Deep_AutoViML are now available as containerized applications on Docker. This means that you can build data pipelines using a fantastic tool like orchest.io to build MLOps pipelines visually. Here are two sample pipelines we have created:

AutoViz pipeline: https://lnkd.in/g5uC-z66 Deep_AutoViML pipeline: https://lnkd.in/gdnWTqCG

You can find more examples and a wonderful video on orchest's web site.

Dec-17-2021 AutoViz now uses HoloViews to display dashboards with Bokeh and save them as Dynamic HTML for web serving #HTML #Bokeh #Holoviews

Now you can use AutoViz to create Interactive Bokeh charts and dashboards (see below) either in Jupyter Notebooks or in the browser. Use chart_format as follows:

  • chart_format='bokeh': interactive Bokeh dashboards are plotted in Jupyter Notebooks.
  • chart_format='server': dashboards will pop up for each kind of chart in your web browser.
  • chart_format='html': interactive Bokeh charts will be silently saved as Dynamic HTML files under the AutoViz_Plots directory.
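For reference, a minimal sketch of calling AutoViz with one of these chart formats, assuming the AutoViz_Class entry point from the AutoViz repo (train_df and "target" are placeholders):

from autoviz.AutoViz_Class import AutoViz_Class

AV = AutoViz_Class()
dft = AV.AutoViz(
    filename="",             # pass "" when supplying a dataframe directly
    dfte=train_df,           # placeholder pandas DataFrame
    depVar="target",         # placeholder target column
    chart_format="html",     # or 'bokeh' / 'server', as described above
    verbose=2,
)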

Languages and Tools:

docker, git, python, scikit-learn

Connect with us on LinkedIn:

Ram Seshadri

featurewiz's People

Contributors

a-schot, aaadityag, ai-ahmed, asiaticum, autoviml, boneyag, chinmay7016, davisy, eromoe, gfggithubleet, guglielmocerri, himanshumahto, mishrasamiksha, thefznkhan, you-now-who


featurewiz's Issues

TypeError: gen_cat_encodet_features() got an unexpected keyword argument 'fitted'

Hi,
I love featurewiz!! I got it to work using:

#outputs = featurewiz(df99, target='FSXRNE', corr_limit=0.70, verbose=2,
#header=0, test_data='',feature_engg='interactions')

and it worked really well! However, when I use:

outputs = featurewiz(df99, target='FSXRNE', corr_limit=0.70, verbose=2,header=0, category_encoders='OneHotEncoder')

I get:

TypeError: gen_cat_encodet_features() got an unexpected keyword argument 'fitted'

Any ideas on what is going wrong? Thank you!

Sincerely,

tom

Not able to replicate results - seed not set for random

This issue was raised previously and was said to have been addressed but I am still getting inconsistent results.

I checked the source code. Seeds are provided for numpy's and other packages' random number generators, but not for the built-in random module.

Can this please be fixed ASAP? Thank you so much.
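For reference, the requested fix is small; a sketch of seeding both generators at the start of a run:

import random
import numpy as np

SEED = 42
random.seed(SEED)       # the built-in random module (reportedly not seeded today)
np.random.seed(SEED)    # numpy's generator (already seeded by featurewiz)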

Verbose

Fantastic and beautiful package.

Is it possible to run this package on a list of dataframes without having to close graphs, even when I set verbose to 0?

"left_subtract" not defined in SULOV

Hello,

When I execute
outputs = featurewiz.featurewiz(features_train.join(y_train), "label", corr_limit=0.7, verbose=1)

I receive the error message

#######################################################################################
#####  Searching for Uncorrelated List Of Variables (SULOV) in 446 features ############
#######################################################################################
    there are no null values in dataset...
    SULOV Method crashing due to name 'left_subtract' is not defined
    SULOV method is erroring. Continuing ...
Time taken for SULOV method = 2 seconds
    Adding 0 categorical variables to reduced numeric variables  of 446
Final list of selected vars after SULOV = 446

Also, when I import the left_subtract function, it still doesn't work.
from featurewiz.featurewiz import left_subtract

What is the issue here?

Help with feature_engineering and feature selection

As @AutoViML said, featurewiz is built to solve two problems:

  1. Feature Engineering
  2. Feature Selection
    As per the instructions given in my last issue, @AutoViML guided me to build features using the code snippet described below:

trainm, testm = FW.featurewiz(dataname=train, target=target, corr_limit=0.70, verbose=2, sep=',', header=0, test_data=test, feature_engg='', category_encoders='',dask_xgboost_flag=False, nrows=None)

This snippet seems to work, but it is not producing any feature-engineered features (new features from existing features) from the parameters given to "feature_engg"; it is just performing feature selection, returning two data frames trainm & testm with only the existing features. Can @AutoViML help me with my doubts by giving straightforward snippets for feature selection and for feature engineering (developing new features from existing ones)?

I am thanking you in Advance!

[FEATURE REQUEST] - Limit the use of feature selection until SULOV part

Hello,

I was wondering if there is a way to stop the automated feature selection right after the SULOV step?

I'm trying to get only the output from removing low-variance features and correlated features. Is this possible? I don't want to run the recursive XGBoost feature selection part.

Would it be acceptable to change the code in my conda environment and add a flag that returns the final_list variable after running SULOV? (A stopgap sketch follows below.)
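For anyone wanting a stopgap, a plain pairwise-correlation filter is easy to write by hand. This is a simplified sketch, not featurewiz's SULOV (which also weighs mutual information scores when deciding which of two correlated features to keep):

import numpy as np
import pandas as pd

def drop_correlated(df: pd.DataFrame, limit: float = 0.70) -> pd.DataFrame:
    """Drop one feature from every pair whose absolute correlation >= limit."""
    corr = df.corr().abs()
    # keep only the upper triangle so each pair is examined once
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    to_drop = [col for col in upper.columns if (upper[col] >= limit).any()]
    return df.drop(columns=to_drop)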

cannot convert float NaN to integer

I checked for nulls:
df_select.isnull().values.any()
False
I also tried
df_select = df_select.dropna()
and then:
target = 'target'
features, train = featurewiz(X, target, corr_limit=0.7, verbose=2, sep=",", header=1, test_data="", feature_engg="", category_encoders="")

ValueError Traceback (most recent call last)
in
2 target = 'target'
3
----> 4 features, train = featurewiz(X, target, corr_limit=0.7, verbose=2, sep=",", header=0, test_data="", feature_engg="", category_encoders="")

~\Anaconda3\lib\site-packages\featurewiz\featurewiz.py in featurewiz(dataname, target, corr_limit, verbose, sep, header, test_data, feature_engg, category_encoders, **kwargs)
1269 if len(numvars) > 1:
1270 final_list = FE_remove_variables_using_SULOV_method(train,numvars,settings.modeltype,target,
-> 1271 corr_limit,verbose)
1272 else:
1273 final_list = copy.deepcopy(numvars)

~\Anaconda3\lib\site-packages\featurewiz\featurewiz.py in FE_remove_variables_using_SULOV_method(df, numvars, modeltype, target, corr_limit, verbose)
605 corr_values = correlation_dataframe.values
606 col_index = correlation_dataframe.columns.tolist()
--> 607 index_triupper = list(zip(np.triu_indices_from(corr_values,k=1)[0],np.triu_indices_from(
608 corr_values,k=1)[1]))
609 high_corr_index_list = [x for x in np.argwhere(abs(corr_values[np.triu_indices(len(corr_values), k = 1)])>=corr_limit)]

<__array_function__ internals> in triu_indices_from(*args, **kwargs)

~\Anaconda3\lib\site-packages\dask\array\core.py in __array_function__(self, func, types, args, kwargs)
1530 if da_func is func:
1531 return handle_nonmatching_names(func, args, kwargs)
-> 1532 return da_func(*args, **kwargs)
1533
1534 @property

~\Anaconda3\lib\site-packages\dask\array\routines.py in triu_indices_from(arr, k)
1741 if arr.ndim != 2:
1742 raise ValueError("input array must be 2-d")
-> 1743 return triu_indices(arr.shape[-2], k=k, m=arr.shape[-1], chunks=arr.chunks)

~\Anaconda3\lib\site-packages\dask\array\routines.py in triu_indices(n, k, m, chunks)
1734 @derived_from(np)
1735 def triu_indices(n, k=0, m=None, chunks="auto"):
-> 1736 return nonzero(~tri(n, m, k=k - 1, dtype=bool, chunks=chunks))
1737
1738

~\Anaconda3\lib\site-packages\dask\array\creation.py in tri(N, M, k, dtype, chunks)
687
688 m = greater_equal.outer(
--> 689 arange(N, chunks=chunks[0][0], dtype=_min_int(0, N)),
690 arange(-k, M - k, chunks=chunks[1][0], dtype=_min_int(-k, M - k)),
691 )

~\Anaconda3\lib\site-packages\dask\array\creation.py in arange(*args, **kwargs)
377 chunks = kwargs.pop("chunks", "auto")
378
--> 379 num = int(max(np.ceil((stop - start) / step), 0))
380
381 dtype = kwargs.pop("dtype", None)

ValueError: cannot convert float NaN to integer

Sample weight support for regression problems

Hello - I just saw this library written up on Medium and it looks very interesting. I wanted to ask about the possibility of adding sample weight support? XGBoost already supports it via the weight parameter in the .fit() call, so I'm not sure what would be needed other than updating the API to allow a user to pass sample weights.

Thanks!

Featurewiz key error during fit and transform

I am working on an ML project to select important features.

So, I am using the featurewiz package from the documentation here.

I tried the code below from GitHub on my data:

from featurewiz import FeatureWiz
features = FeatureWiz(corr_limit=0.70, feature_engg='', category_encoders='', dask_xgboost_flag=False, nrows=None, verbose=2)
X_train_selected = features.fit_transform(ord_train_t, y_train)
X_test_selected = features.transform(ord_test_t) # error is encountered here
features.features  ### provides the list of selected features ###

Both ord_train_t and ord_test_t contain the same columns.

But I get a key error message when I try to use the transform function after fit.

KeyError: "['Feat1', 'Feat2', 'Feat3', 'Feat5', 'Feat6', 'Feat7'] not in index"

But these columns are present in my ord_test_t data.

Is there anything wrong with the package or documentation?

or am I using the fit and transform functions incorrectly?

Find the full error below:

    C:\Users\abcd\AppData\Local\Temp/ipykernel_11076/432759899.py in <module>
          2 features = FeatureWiz(corr_limit=0.70, feature_engg='', category_encoders='', dask_xgboost_flag=False, nrows=None, verbose=2)
          3 X_train_selected = features.fit(ord_train_t, y_train)
    ----> 4 X_test_selected = features.transform(ord_test_t)
          5 features.features  ### provides the list of selected features ###
    
    ~\Anaconda3\lib\site-packages\featurewiz\featurewiz.py in transform(self, X)
       3562 
       3563     def transform(self, X):
    -> 3564         return X[self.features]
       3565 ###################################################################################################
       3566 import copy
    
    ~\Anaconda3\lib\site-packages\pandas\core\frame.py in __getitem__(self, key)
       3462             if is_iterator(key):
       3463                 key = list(key)
    -> 3464             indexer = self.loc._get_listlike_indexer(key, axis=1)[1]
       3465 
       3466         # take() does not accept boolean indexers
    
    ~\Anaconda3\lib\site-packages\pandas\core\indexing.py in _get_listlike_indexer(self, key, axis)
       1312             keyarr, indexer, new_indexer = ax._reindex_non_unique(keyarr)
       1313 
    -> 1314         self._validate_read_indexer(keyarr, indexer, axis)
       1315 
       1316         if needs_i8_conversion(ax.dtype) or isinstance(
    
    ~\Anaconda3\lib\site-packages\pandas\core\indexing.py in _validate_read_indexer(self, key, indexer, axis)
       1375 
       1376             not_found = list(ensure_index(key)[missing_mask.nonzero()[0]].unique())
    -> 1377             raise KeyError(f"{not_found} not in index")
       1378 
       1379 
    
        KeyError: "['Feat1', 'Feat2', 'Feat3', 'Feat5', 'Feat6', 'Feat7'] not in index"

UnboundLocalError: local variable 'params' referenced before assignment

/opt/conda/lib/python3.7/site-packages/featurewiz/featurewiz.py in featurewiz(dataname, target, corr_limit, verbose, sep, header, test_data, feature_engg, category_encoders, dask_xgboost_flag, nrows, **kwargs)
1130 param['nthread'] = -1
1131 param['tree_method'] = 'gpu_hist'
-> 1132 params['eta'] = 0.01
1133 params['subsample'] = 0.5
1134 params['grow_policy'] = 'depthwise' # 'lossguide' #

Is params getting set here instead of param? https://github.com/AutoViML/featurewiz/blob/main/featurewiz/featurewiz.py
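It certainly looks like a one-character typo: the surrounding lines assign into param, while line 1132 assigns into params, which was never initialized on that code path. The likely fix (hypothetical, pending a look at the full function) is simply:

param['eta'] = 0.01                  # was: params['eta'] = 0.01
param['subsample'] = 0.5             # was: params['subsample'] = 0.5
param['grow_policy'] = 'depthwise'   # was: params['grow_policy'] = 'depthwise'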

Getting memory error while memory is free

I run featurewiz on Google Colab and get an error about memory, but there seems to be a lot of free memory.

- Free memory: 11918835712
- Requested memory: 25696

Full output:

############################################################################################
############       F A S T   F E A T U R E  E N G G    A N D    S E L E C T I O N ! ########
# Be judicious with featurewiz. Don't use it to create too many un-interpretable features! #
############################################################################################
Skipping feature engineering since no feature_engg input...
Skipping category encoding since no category encoders specified in input...
**INFO: featurewiz can now read feather formatted files. Loading train data...
    Shape of your Data Set loaded: (6424, 12784)
    Caution: We will try to reduce the memory usage of dataframe from 626.61 MB
        memory usage after optimization is: 121.60 MB
        decreased by 80.6%
    Loaded train data. Shape = (6424, 12784)
loading the entire test dataframe - there is no nrows limit applicable #########
    Shape of your Data Set loaded: (1134, 12784)
    Loaded test data. Shape = (1134, 12784)
#######################################################################################
######################## C L A S S I F Y I N G  V A R I A B L E S  ####################
#######################################################################################
Classifying variables in data set...
    12783 Predictors classified...
        4 variable(s) to be removed since ID or low-information variables
    	variables removed = ['_12544', '_12560', '_13520', '_13568']
train data shape before dropping 4 columns = (6424, 12784)
	train data shape after dropping columns = (6424, 12780)
    Converted pandas dataframe into a Dask dataframe ...
    Converted pandas dataframe into a Dask dataframe ...
GPU active on this device
    Tuning XGBoost using GPU hyper-parameters. This will take time...
    After removing redundant variables from further processing, features left = 12779
No interactions created for categorical vars since feature engg does not specify it
#### Single_Label Multi_Classification problem ####
    Skipping SULOV method since data dimension 82 m > 50 m. Continuing ...
Time taken for SULOV method = 0 seconds
    Adding 0 categorical variables to reduced numeric variables  of 12779
Final list of selected vars after SULOV = 12779
Readying dataset for Recursive XGBoost by converting all features to numeric...
#######################################################################################
#####    R E C U R S I V E   X G B O O S T : F E A T U R E   S E L E C T I O N  #######
#######################################################################################
    using regular XGBoost
Train and Test loaded into Dask dataframes successfully after feature_engg completed
Current number of predictors = 12779 
    XGBoost version: 1.6.0
Number of booster rounds = 100
        using 12779 variables...
Regular XGBoost is crashing due to: [14:14:47] ../src/c_api/../data/../common/device_helpers.cuh:428: Memory allocation error on worker 0: [14:14:47] ../src/c_api/../data/../common/common.h:46: ../src/common/device_helpers.cuh: 447: cudaErrorNoKernelImageForDevice: no kernel image is available for execution on the device
Stack trace:
  [bt] (0) /usr/local/lib/python3.7/dist-packages/xgboost/lib/libxgboost.so(+0x38f399) [0x7fab1fcf6399]
  [bt] (1) /usr/local/lib/python3.7/dist-packages/xgboost/lib/libxgboost.so(+0x393333) [0x7fab1fcfa333]
  [bt] (2) /usr/local/lib/python3.7/dist-packages/xgboost/lib/libxgboost.so(+0x3d340e) [0x7fab1fd3a40e]
  [bt] (3) /usr/local/lib/python3.7/dist-packages/xgboost/lib/libxgboost.so(+0x3e7374) [0x7fab1fd4e374]
  [bt] (4) /usr/local/lib/python3.7/dist-packages/xgboost/lib/libxgboost.so(+0x3e91f0) [0x7fab1fd501f0]
  [bt] (5) /usr/local/lib/python3.7/dist-packages/xgboost/lib/libxgboost.so(+0x582179) [0x7fab1fee9179]
  [bt] (6) /usr/local/lib/python3.7/dist-packages/xgboost/lib/libxgboost.so(+0x20fb08) [0x7fab1fb76b08]
  [bt] (7) /usr/local/lib/python3.7/dist-packages/xgboost/lib/libxgboost.so(XGBoosterUpdateOneIter+0x68) [0x7fab1fa10758]
  [bt] (8) /usr/lib/x86_64-linux-gnu/libffi.so.6(ffi_call_unix64+0x4c) [0x7fab59f73dae]


- Free memory: 11918835712
- Requested memory: 25696

Stack trace:
  [bt] (0) /usr/local/lib/python3.7/dist-packages/xgboost/lib/libxgboost.so(+0x38f399) [0x7fab1fcf6399]
  [bt] (1) /usr/local/lib/python3.7/dist-packages/xgboost/lib/libxgboost.so(+0x3937ab) [0x7fab1fcfa7ab]
  [bt] (2) /usr/local/lib/python3.7/dist-packages/xgboost/lib/libxgboost.so(+0x3d3549) [0x7fab1fd3a549]
  [bt] (3) /usr/local/lib/python3.7/dist-packages/xgboost/lib/libxgboost.so(+0x3e7374) [0x7fab1fd4e374]
  [bt] (4) /usr/local/lib/python3.7/dist-packages/xgboost/lib/libxgboost.so(+0x3e91f0) [0x7fab1fd501f0]
  [bt] (5) /usr/local/lib/python3.7/dist-packages/xgboost/lib/libxgboost.so(+0x582179) [0x7fab1fee9179]
  [bt] (6) /usr/local/lib/python3.7/dist-packages/xgboost/lib/libxgboost.so(+0x20fb08) [0x7fab1fb76b08]
  [bt] (7) /usr/local/lib/python3.7/dist-packages/xgboost/lib/libxgboost.so(XGBoosterUpdateOneIter+0x68) [0x7fab1fa10758]
  [bt] (8) /usr/lib/x86_64-linux-gnu/libffi.so.6(ffi_call_unix64+0x4c) [0x7fab59f73dae]



Features Selected by SULOV depend on the version of featurewiz

Dear all,
I was using featurewiz version 0.0.38 in an old project (https://github.com/AutoViML/featurewiz/tree/6b870dae8dcf4f24873eb61bb48947ceb84e189c)
The number of features selected and returned by FE_remove_variables_using_SULOV_method was 18

I am using featurewiz version 0.1.87 in a new project
The number of features selected and returned by FE_remove_variables_using_SULOV_method is 43

The input dataset and the input parameters to featurewiz are the same for both projects. I executed the two versions of featurewiz step by step, and I can say that the results diverge starting from the computation of the correlation matrix at the beginning of FE_remove_variables_using_SULOV_method. Moreover, I noticed that in the new version the target label is modified by mlb = My_LabelEncoder() and dataname[each_target] = mlb.fit_transform(dataname[each_target]) before SULOV is called, which didn't happen in the previous version.

Could you clarify what main differences have been introduced in the new version? As you can imagine, such a significant difference in the outputs of the two versions is unpleasant.

Imported featurewiz: advanced feature engg and selection library. Version=0.0.38
output = featurewiz(dataname, target, corr_limit=0.70,
verbose=2, sep=',', header=0, test_data='',
feature_engg='', category_encoders='')
Create new features via 'feature_engg' flag : ['interactions','groupby','target']

Skipping feature engineering since no feature_engg input...
Skipping category encoding since no category encoders specified in input...
Shape of your Data Set loaded: (38, 3385)
Filename is an empty string or file not able to be loaded
############## C L A S S I F Y I N G V A R I A B L E S ####################
Classifying variables in data set...
3384 Predictors classified...
2022-07-26 00:03:54.391147: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
142 variable(s) will be ignored since they are ID or low-information variables
Shape of your Data Set loaded: (38, 3385)
Number of processors on machine = 1
No GPU active on this device
Running XGBoost using CPU parameters
############## C L A S S I F Y I N G V A R I A B L E S ####################
Classifying variables in data set...
3384 Predictors classified...
142 variable(s) will be ignored since they are ID or low-information variables
Removing 142 columns from further processing since ID or low information variables
columns removed: ['x427', 'x433', 'x439', 'x771', 'x777', 'x783', 'x825', 'x831', 'x837', 'x850', 'x856', 'x862', 'x1216', 'x1228', 'x1240', 'x1248', 'x1254', 'x1260', 'x1273', 'x1279', 'x1285', 'x1289', 'x1301', 'x1313', 'x1617', 'x1623', 'x1629', 'x1633', 'x1639', 'x1645', 'x1651', 'x1657', 'x1663', 'x1671', 'x1677', 'x1683', 'x1696', 'x1702', 'x1708', 'x1712', 'x1724', 'x1736', 'x2056', 'x2062', 'x2068', 'x2074', 'x2080', 'x2086', 'x2094', 'x2100', 'x2106', 'x2119', 'x2125', 'x2131', 'x2135', 'x2147', 'x2159', 'x2479', 'x2491', 'x2503', 'x2517', 'x2523', 'x2529', 'x2542', 'x2548', 'x2554', 'x2558', 'x2570', 'x2582', 'x2965', 'x2971', 'x2977', 'x2981', 'x2993', 'x3005', 'x3325', 'x3337', 'x3349', 'x3363', 'x3369', 'x3375', 'x490', 'x495', 'x497', 'x502', 'x503', 'x507', 'x844', 'x913', 'x918', 'x920', 'x924', 'x925', 'x926', 'x936', 'x1267', 'x1268', 'x1330', 'x1332', 'x1334', 'x1336', 'x1341', 'x1343', 'x1347', 'x1348', 'x1349', 'x1351', 'x1360', 'x1361', 'x1753', 'x1755', 'x1757', 'x1759', 'x1764', 'x1766', 'x1770', 'x1771', 'x1772', 'x1778', 'x2176', 'x2178', 'x2180', 'x2182', 'x2187', 'x2189', 'x2194', 'x2195', 'x2207', 'x2599', 'x2601', 'x2603', 'x2605', 'x2610', 'x2616', 'x2617', 'x2959', 'x3022', 'x3024', 'x3026', 'x3028', 'x3039', 'x3046']
After removing redundant variables from further processing, features left = 3242

Single_Label Binary_Classification Feature Selection Started

Searching for highly correlated variables from 3242 variables using SULOV method

SULOV : Searching for Uncorrelated List Of Variables (takes time...)
Removing (3224) highly correlated variables:
Following (18) vars selected: ['x2', 'x10', 'x26', 'x69', 'x87', 'x129', 'x187', 'x417', 'x496', 'x554', 'x608', 'x975', 'x1033', 'x1156', 'x1765', 'x2134', 'x2910', 'x3176']

Imported version = 0.1.87.
from featurewiz import FeatureWiz
wiz = FeatureWiz(verbose=1)
X_train_selected = wiz.fit_transform(X_train, y_train)
X_test_selected = wiz.transform(X_test)
wiz.features ### provides a list of selected features ###

############################################################################################
############ F A S T F E A T U R E E N G G A N D S E L E C T I O N ! ########

Be judicious with featurewiz. Don't use it to create too many un-interpretable features!

############################################################################################
Skipping feature engineering since no feature_engg input...
Skipping category encoding since no category encoders specified in input...
**INFO: featurewiz can now read feather formatted files. Loading train data...
Shape of your Data Set loaded: (38, 3385)
Loaded train data. Shape = (38, 3385)
No test data filename given...
#######################################################################################
######################## C L A S S I F Y I N G V A R I A B L E S ####################
#######################################################################################
Classifying variables in data set...
3384 Predictors classified...
142 variable(s) to be removed since ID or low-information variables
more than 142 variables to be removed; too many to print...
train data shape before dropping 81 columns = (38, 3385)
train data shape after dropping columns = (38, 3304)
Converted pandas dataframe into a Dask dataframe ...
No GPU active on this device
Tuning XGBoost using CPU hyper-parameters. This will take time...
After removing redundant variables from further processing, features left = 3242
No interactions created for categorical vars since feature engg does not specify it

Single_Label Binary_Classification problem

target labels need to be converted...

Completed label encoding of target variable = target
How model predictions need to be transformed for target:
{0: 1}
#######################################################################################

Searching for Uncorrelated List Of Variables (SULOV) in 3242 features

#######################################################################################
there are no null values in dataset...
Removing (3199) highly correlated variables:
SULOV method is erroring. Continuing ...
Time taken for SULOV method = 138 seconds
Adding 0 categorical variables to reduced numeric variables of 3242
Final list of selected vars after SULOV = 3242
Readying dataset for Recursive XGBoost by converting all features to numeric...

Getting error: '<' not supported between instances of 'int' and 'str'

Hi, I am trying to run a dataset with around 800,000 rows and 42 columns, but I am getting the error given below:

TypeError Traceback (most recent call last)

in ()
43 Add_Poly=0, Stacking_Flag=False,
44 Imbalanced_Flag=True,
---> 45 verbose=1)
46
47

4 frames

<__array_function__ internals> in unique(*args, **kwargs)

/usr/local/lib/python3.7/dist-packages/numpy/lib/arraysetops.py in unique1d(ar, return_index, return_inverse, return_counts)
320 aux = ar[perm]
321 else:
--> 322 ar.sort()
323 aux = ar
324 mask = np.empty(aux.shape, dtype=np.bool_)

TypeError: '<' not supported between instances of 'int' and 'str'

Can anyone please help me find which column is causing the problem in my dataset? I am not getting any clue from the error. Please help me resolve it.

Got KeyError on date/string columns [featurewiz 0.1.99]

Hello, after updating to featurewiz 0.1.99, I got a different error.

Code is

from featurewiz import FeatureWiz
features = FeatureWiz(corr_limit=0.70, feature_engg='', category_encoders='', dask_xgboost_flag=False, nrows=None, verbose=2)
X= features.fit_transform(X, y)
features.features  ### provides the list of selected features ###

traceback:

KeyError                                  Traceback (most recent call last)
Input In [71], in <cell line: 1>()
      8 from featurewiz import FeatureWiz
      9 features = FeatureWiz(corr_limit=0.70, feature_engg='', category_encoders='', dask_xgboost_flag=False, nrows=None, verbose=2)
---> 10 X = features.fit_transform(X, y)
     11 cols = features.features  ### provides the list of selected features ###
     12 print(features.features)

File ~\anaconda3\lib\site-packages\sklearn\base.py:870, in TransformerMixin.fit_transform(self, X, y, **fit_params)
    867     return self.fit(X, **fit_params).transform(X)
    868 else:
    869     # fit method of arity 2 (supervised transformation)
--> 870     return self.fit(X, y, **fit_params).transform(X)

File ~\anaconda3\lib\site-packages\featurewiz\featurewiz.py:2934, in FeatureWiz.fit(self, X, y)
   2931     return {}, {}
   2932 #### Send target variable as it is so that y_train is analyzed properly ###
   2933 # Select features using featurewiz
-> 2934 features, X_sel = featurewiz(df, target, self.corr_limit, self.verbose, self.sep,
   2935         self.header, self.test_data, self.feature_engg, self.category_encoders,
   2936         self.dask_xgboost_flag, self.nrows)
   2937 # Convert the remaining column names back to integers and drop the
   2938 difftime = max(1, int(time.time()-start_time))

File ~\anaconda3\lib\site-packages\featurewiz\featurewiz.py:1101, in featurewiz(dataname, target, corr_limit, verbose, sep, header, test_data, feature_engg, category_encoders, dask_xgboost_flag, nrows, **kwargs)
   1099     print('Since %s category encoding is done, dropping original categorical vars from predictors...' %feature_gen)
   1100     preds = left_subtract(preds, catvars)
-> 1101 train_p = train[preds]
   1102 if train_p.shape[1] <= 10:
   1103     iter_limit = 2

File ~\anaconda3\lib\site-packages\pandas\core\frame.py:3511, in DataFrame.__getitem__(self, key)
   3509     if is_iterator(key):
   3510         key = list(key)
-> 3511     indexer = self.columns._get_indexer_strict(key, "columns")[1]
   3513 # take() does not accept boolean indexers
   3514 if getattr(indexer, "dtype", None) == bool:

File ~\anaconda3\lib\site-packages\pandas\core\indexes\base.py:5782, in Index._get_indexer_strict(self, key, axis_name)
   5779 else:
   5780     keyarr, indexer, new_indexer = self._reindex_non_unique(keyarr)
-> 5782 self._raise_if_missing(keyarr, indexer, axis_name)
   5784 keyarr = self.take(indexer)
   5785 if isinstance(key, Index):
   5786     # GH 42790 - Preserve name from an Index

File ~\anaconda3\lib\site-packages\pandas\core\indexes\base.py:5845, in Index._raise_if_missing(self, key, indexer, axis_name)
   5842     raise KeyError(f"None of [{key}] are in the [{axis_name}]")
   5844 not_found = list(ensure_index(key)[missing_mask.nonzero()[0]].unique())
-> 5845 raise KeyError(f"{not_found} not in index")

KeyError: "['network_type__first', 'device_model__first', 'ad_account__first', 'os_version__first', 'carrier__first', 'reg_week_day', 'os__first', 'hour__first', 'ad_source__first', 'ad_serving_user_group__first', 'firstecpm__first', 'province__first', 'manufacturer__first'] not in index"

These error columns are date/string type.

Why train model on smaller and smaller set of features recursively

Hi,

I have some doubts about the recursive XGBoost model process. In the source code, it seems the models are trained on a smaller and smaller set of features, selected by their column index in order.

for i in range(0, train_p.shape[1], iter_limit):
    start_time2 = time.time()
    imp_feats = []
    if train_p.shape[1] - i < iter_limit:
        # fewer than iter_limit columns remain: take all of them
        X_train = train_p.iloc[:, i:]
        cols_sel = X_train.columns.tolist()
    else:
        # note: this slice runs from column i to the END of the column list,
        # so each iteration trains on a shrinking suffix of the features
        X_train = train_p[list(train_p.columns.values)[i:train_p.shape[1]]]
        cols_sel = X_train.columns.tolist()

Is there a reason to select the subset of features by column order? And why train models on a shrinking set of features repeatedly?

Thank you

feature_engg is not working properly

I'm trying to use this, but it shows an error when the "feature_engg" parameter is used. I'm attaching some screenshots; please help me with this. When I use that parameter, it says that the new features added by featurewiz are not found in the original dataset.

dask_xgboost_error: 'Series' object has no attribute 'compute‘

Encountering the below error (environment WSL:Ubuntu) when trying to run with dask_xgboost_flag enabled

~/.local/lib/python3.8/site-packages/featurewiz/featurewiz.py in featurewiz(dataname, target, corr_limit, verbose, sep, header, test_data, feature_engg, category_encoders, dask_xgboost_flag, nrows, **kwargs)
   1340             if dask_xgboost_flag:
   1341                 ### since y_train is dask df and data_tuple.X_train is a pandas df, you can't merge them.
-> 1342                 y_test = y_test.compute()  ### remember you first have to convert them to a pandas df
   1343             data2 = data_tuple.X_test.join(y_test)
   1344             dataname = data1.append(data2)

~/.local/lib/python3.8/site-packages/pandas/core/generic.py in __getattr__(self, name)
   5485         ):
   5486             return self[name]
-> 5487         return object.__getattribute__(self, name)
   5488 
   5489     def __setattr__(self, name: str, value) -> None:

AttributeError: 'Series' object has no attribute 'compute'
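The traceback suggests y_test is already a pandas Series on this code path, while .compute() only exists on dask collections. A defensive pattern (illustrative, not the library's actual fix) would be:

import dask.dataframe as dd

# only materialize to pandas if it is actually a dask object
if isinstance(y_test, (dd.Series, dd.DataFrame)):
    y_test = y_test.compute()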

Memory Runout Error for even less memory of dataset when using Featurewiz

I have a dataset with around 440.06 MiB of training data and 219.6 MiB of test data. When I try to use featurewiz with this dataset, it shows an out-of-memory error on GPU (Kaggle and Google Colab).

  1. Is there any method to solve this problem, rather than going to cloud platforms?
  2. Is there any way to free up memory internally in featurewiz's code while it is working?
  3. I have loaded the dataset into the GPU environment and passed those data frames directly into featurewiz, and it still shows an error.
    Dataset can be found at "https://www.kaggle.com/competitions/ventilator-pressure-prediction/data"

Error installing from source

I got this issue when trying to install from source on Colab

Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
Collecting git+https://github.com/AutoViML/featurewiz.git
  Cloning https://github.com/AutoViML/featurewiz.git to /tmp/pip-req-build-43tno_dg
  Running command git clone -q https://github.com/AutoViML/featurewiz.git /tmp/pip-req-build-43tno_dg
Requirement already satisfied: ipython in /usr/local/lib/python3.7/dist-packages (from featurewiz==0.1.95) (7.9.0)
Collecting jupyter
  Downloading jupyter-1.0.0-py2.py3-none-any.whl (2.7 kB)
Collecting xgboost>=1.5.1
  Downloading xgboost-1.6.1-py3-none-manylinux2014_x86_64.whl (192.9 MB)
     |████████████████████████████████| 192.9 MB 74 kB/s 
Requirement already satisfied: pandas>=1.3.4 in /usr/local/lib/python3.7/dist-packages (from featurewiz==0.1.95) (1.3.5)
Requirement already satisfied: matplotlib in /usr/local/lib/python3.7/dist-packages (from featurewiz==0.1.95) (3.2.2)
Requirement already satisfied: seaborn in /usr/local/lib/python3.7/dist-packages (from featurewiz==0.1.95) (0.11.2)
Collecting scikit-learn~=0.24
  Downloading scikit_learn-0.24.2-cp37-cp37m-manylinux2010_x86_64.whl (22.3 MB)
     |████████████████████████████████| 22.3 MB 1.2 MB/s 
ERROR: Could not find a version that satisfies the requirement networkx>=2.8.1 (from featurewiz) (from versions: 0.34, 0.35, 0.35.1, 0.36, 0.37, 0.99, 1.0rc1, 1.0, 1.0.1, 1.1, 1.2rc1, 1.2, 1.3rc1, 1.3, 1.4rc1, 1.4, 1.5rc1, 1.5, 1.6rc1, 1.6, 1.7rc1, 1.7, 1.8rc1, 1.8, 1.8.1, 1.9rc1, 1.9, 1.9.1, 1.10rc2, 1.10, 1.11rc1, 1.11rc2, 1.11, 2.0, 2.1, 2.2rc1, 2.2, 2.3rc3, 2.3rc4, 2.3, 2.4rc1, 2.4rc2, 2.4, 2.5rc1, 2.5, 2.5.1, 2.6rc1, 2.6rc2, 2.6, 2.6.1, 2.6.2, 2.6.3)
ERROR: No matching distribution found for networkx>=2.8.1

Featureviz for data with "No Target" Variable

Hi Team,

Thank you for creating this great library. I would like to know how I can modify the code to use it on my data, which doesn't have a target or predictor variable; i.e., unsupervised data where I want to reduce the dimensionality.

Please let me know,

saving transformers?

Is it possible to save all data transformers used during feature selection in order to apply them to a new dataset?
If yes, what could be the process and how to reuse them?

Thanks a lot.
Your work is amazing, I have to say.
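Since the fitted FeatureWiz object is a scikit-learn style transformer, one straightforward approach (a sketch, not an official featurewiz API) is to pickle it with joblib and call transform() on new data after loading:

import joblib

joblib.dump(features, "featurewiz_selector.joblib")   # save the fitted selector
loaded = joblib.load("featurewiz_selector.joblib")    # reload it later
X_new_selected = loaded.transform(X_new)              # X_new is a placeholder dataframe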

Columns must be same length as key

Hi there,
I am new to featurewiz and I got a weird error while using it. It happened after feature selection completed.
I have followed the instructions and I am getting the following error.
I would be really grateful if anyone can help me.
Thanks in advance

How to enable GPU support?

Thanks again for this package! Though I do have an active GPU on the device, it doesn't seem to be detected. Is there some way of enabling GPU acceleration (and would it be useful)?

No GPU active on this device
    Tuning XGBoost using CPU hyper-parameters. This will take time...

Dask XGBoost is crashing. Continuing...

Hey, I am getting error as in the title, namely:

Dask XGBoost is crashing. Continuing...

I am trying to run featurewiz on data with 220 features and 63190 rows (which is actually already a shortened size) and I get the above error. When I try to run it on 63190 x 10 (i.e., 10x the amount of data), it has not given me any results yet; it either gets stuck or takes so long that I never waited it out. I will try running it overnight/over multiple days to see if it can produce any results, but I doubt more data will work :D if less data does not.

On transforming after fit_transform

Hi, it's me again :)
I tried your new scikit-learn compatibility feature, following the suggested code, where I found the line transforming X_test:

from featurewiz import FeatureWiz
features = FeatureWiz(corr_limit=0.70, feature_engg='', category_encoders='', 
dask_xgboost_flag=False, nrows=None, verbose=2)
X_train_selected = features.fit_transform(X_train, y_train)
####################################  THIS LINE I'M TALKING ABOUT
X_test_selected = features.transform(X_test)
############################################################
features.features  ### provides the list of selected features ###

But I found that what features.transform(X_test) does is "only" filter X_test down to the selected features, since this is the code inside the FeatureWiz class:

def transform(self, X):
        return X[self.features]

Specifically, what I'm trying to do is:

  1. to use features (a FeatureWiz object type) to get the completely transformed and filtered dataset according to the selected features
  2. save the features object
  3. to train a X model using the dataset found in step (1)
  4. to load the features object saved in step (2)
  5. in real life to receive an input and transform it using the loaded features object in step (4)
  6. to feed my model X with the transformed input in step (5)

I just don't know how to complete step (5), since any input is just filtered and not transformed.
I wonder whether I should use the transform function of the My_Groupby_Encoder class instead. If so, how could I do that?

Thank you so much for your attention to my question
I don't hesitate to say that your work is simply wonderful and useful

AttributeError: 'int' object has no attribute 'split'

If I try this:

spectra.columns = spectra.columns.astype(str)
features = FeatureWiz(corr_limit=0.70, feature_engg='', category_encoders='', dask_xgboost_flag=False, nrows=None, verbose=2)
X_train_selected = features.fit_transform(spectra, mask_list)
selected_features = features.features 

I get this error message:

Imported DASK version = 0.1.00. nrows=None uses all rows. Set nrows=1000 to randomly sample fewer rows.
output = featurewiz(dataname, target, corr_limit=0.70, verbose=2, sep=',', 
		header=0, test_data='',feature_engg='', category_encoders='',
		dask_xgboost_flag=False, nrows=None)
Create new features via 'feature_engg' flag : ['interactions','groupby','target']
############################################################################################
############       F A S T   F E A T U R E  E N G G    A N D    S E L E C T I O N ! ########
# Be judicious with featurewiz. Don't use it to create too many un-interpretable features! #
############################################################################################
Skipping feature engineering since no feature_engg input...
Skipping category encoding since no category encoders specified in input...
Loading train data...
    Shape of your Data Set loaded: (26717, 788)
    Caution: We will try to reduce the memory usage of dataframe from 80.23 MB
        memory usage after optimization is: 40.16 MB
        decreased by 50.0%
     Loaded. Shape = (26717, 788)
Traceback (most recent call last):
  File "/snap/pycharm-professional/271/plugins/python/helpers/pydev/pydevd.py", line 1483, in _exec
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "/snap/pycharm-professional/271/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "/home/saskra/PycharmProjects/bmc/bmc5.py", line 121, in <module>
    X_train_selected = features.fit_transform(spectra, mask_list)
  File "/home/saskra/anaconda3/envs/bmc/lib/python3.9/site-packages/sklearn/base.py", line 855, in fit_transform
    return self.fit(X, y, **fit_params).transform(X)
  File "/home/saskra/anaconda3/envs/bmc/lib/python3.9/site-packages/featurewiz/featurewiz.py", line 3553, in fit
    features, X_sel = featurewiz(df, target, self.corr_limit, self.verbose, self.sep, 
  File "/home/saskra/anaconda3/envs/bmc/lib/python3.9/site-packages/featurewiz/featurewiz.py", line 1029, in featurewiz
    dataname = remove_special_chars_in_names(dataname, target, verbose=1)
  File "/home/saskra/anaconda3/envs/bmc/lib/python3.9/site-packages/featurewiz/featurewiz.py", line 3586, in remove_special_chars_in_names
    sel_preds = ["_".join(x.split(" ")) for x in sel_preds]
  File "/home/saskra/anaconda3/envs/bmc/lib/python3.9/site-packages/featurewiz/featurewiz.py", line 3586, in <listcomp>
    sel_preds = ["_".join(x.split(" ")) for x in sel_preds]
AttributeError: 'int' object has no attribute 'split'
python-BaseException

The first line in my code was already a futile attempt to fix the supposed problem because the original column names in the dataframe were floating point numbers. Can anyone help?

UFuncTypeError: ufunc 'add' did not contain a loop with signature matching types (dtype('<U26'), dtype('int64')) -> None

Thanks a lot for this package. It is very useful for me.

I am trying to follow a tutorial in hackernoon to select features from a dataset

When I execute the below code, I get an error like as shown below

from featurewiz import featurewiz
features, train = featurewiz(ord_train_t,y_train, corr_limit=0.7, verbose=2)

UFuncTypeError: ufunc 'add' did not contain a loop with signature
matching types (dtype('<U26'), dtype('int64')) -> None

However, I verified the dtypes for all my train data (ord_train_t) and target (y_train)

They are all of int64 and float64 types (as shown below). I don't understand why there is still an error. Even after converting float64 to int64, I get the same error. I also tried ord_train_t.isna().sum(); there are no NAs.


Find below the full error

---------------------------------------------------------------------------
UFuncTypeError                            Traceback (most recent call last)
C:\Users\abcde1\AppData\Local\Temp/ipykernel_1888/1114387036.py in <module>
      1 from featurewiz import featurewiz
      2 
----> 3 features, train = featurewiz(ord_train_t,y_train, corr_limit=0.7, verbose=2)

~\Anaconda3\lib\site-packages\featurewiz\featurewiz.py in featurewiz(dataname, target, corr_limit, verbose, sep, header, test_data, feature_engg, category_encoders, dask_xgboost_flag, nrows, **kwargs)
   1027     ##################    L O A D    T E S T   D A T A      ######################
   1028     dataname = remove_duplicate_cols_in_dataset(dataname)
-> 1029     dataname = remove_special_chars_in_names(dataname, target, verbose=1)
   1030     if dask_xgboost_flag:
   1031         train = remove_special_chars_in_names(train, target)

~\Anaconda3\lib\site-packages\featurewiz\featurewiz.py in remove_special_chars_in_names(df, target, verbose)
   3581     else:
   3582         sel_preds = [x for x in list(df) if x not in target]
-> 3583         df = df[sel_preds+target]
   3584     orig_preds = copy.deepcopy(sel_preds)
   3585     #####   column names must not have any special characters #####

~\Anaconda3\lib\site-packages\pandas\core\ops\common.py in new_method(self, other)
     67         other = item_from_zerodim(other)
     68 
---> 69         return method(self, other)
     70 
     71     return new_method

~\Anaconda3\lib\site-packages\pandas\core\arraylike.py in __radd__(self, other)
     94     @unpack_zerodim_and_defer("__radd__")
     95     def __radd__(self, other):
---> 96         return self._arith_method(other, roperator.radd)
     97 
     98     @unpack_zerodim_and_defer("__sub__")

~\Anaconda3\lib\site-packages\pandas\core\series.py in _arith_method(self, other, op)
   5524 
   5525         with np.errstate(all="ignore"):
-> 5526             result = ops.arithmetic_op(lvalues, rvalues, op)
   5527 
   5528         return self._construct_result(result, name=res_name)

~\Anaconda3\lib\site-packages\pandas\core\ops\array_ops.py in arithmetic_op(left, right, op)
    222         _bool_arith_check(op, left, right)
    223 
--> 224         res_values = _na_arithmetic_op(left, right, op)
    225 
    226     return res_values

~\Anaconda3\lib\site-packages\pandas\core\ops\array_ops.py in _na_arithmetic_op(left, right, op, is_cmp)
    164 
    165     try:
--> 166         result = func(left, right)
    167     except TypeError:
    168         if is_object_dtype(left) or is_object_dtype(right) and not is_cmp:

~\Anaconda3\lib\site-packages\pandas\core\computation\expressions.py in evaluate(op, a, b, use_numexpr)
    237         if use_numexpr:
    238             # error: "None" not callable
--> 239             return _evaluate(op, op_str, a, b)  # type: ignore[misc]
    240     return _evaluate_standard(op, op_str, a, b)
    241 

~\Anaconda3\lib\site-packages\pandas\core\computation\expressions.py in _evaluate_standard(op, op_str, a, b)
     67     if _TEST_MODE:
     68         _store_test_result(False)
---> 69     return op(a, b)
     70 
     71 

~\Anaconda3\lib\site-packages\pandas\core\roperator.py in radd(left, right)
      7 
      8 def radd(left, right):
----> 9     return right + left
     10 
     11 

UFuncTypeError: ufunc 'add' did not contain a loop with signature matching types (dtype('<U26'), dtype('int64')) -> None

Prevent shuffling of data throughout featurewiz

Is it possible to prevent data shuffling throughout the featurewiz process?

My data has a temporal component (time series effectively) and shuffling doesn't make sense.

Perhaps there is a parameter already I can use to disable shuffling?

Thanks for a great library.

Error when running with rows greater than 9999

I am running into an error when the dataframe to be feature-reduced has more than 9999 rows. The stack trace is shown below:

outputs = featurewiz(dataset.iloc[:10500,:], collist, corr_limit=0.93, verbose=1, dask_xgboost_flag=False)

Skipping feature engineering since no feature_engg input...
Skipping category encoding since no category encoders specified in input...
Loading train data...
Shape of your Data Set loaded: (10500, 2880)
Loading test data...
No file given. Continuing...
Classifying features using 10000 rows...
loading a random sample of 10000 rows into pandas for EDA


ValueError Traceback (most recent call last)
/tmp/ipykernel_17760/2016861536.py in
----> 1 outputs = featurewiz(dataset.iloc[:10500,:], collist, corr_limit=0.93, verbose=1, dask_xgboost_flag=False)

~/miniconda3/envs/CS280/lib/python3.7/site-packages/featurewiz/featurewiz.py in featurewiz(dataname, target, corr_limit, verbose, sep, header, test_data, feature_engg, category_encoders, dask_xgboost_flag, nrows, **kwargs)
1082 targets = copy.deepcopy(target)
1083 ##### you can use
-> 1084 train_small = select_rows_from_dataframe(dataname, targets, nrows_limit, DS_LEN=dataname.shape[0])
1085 features_dict = classify_features(train_small, target)
1086 else:

/miniconda3/envs/CS280/lib/python3.7/site-packages/featurewiz/featurewiz.py in select_rows_from_dataframe(train_dataframe, targets, nrows_limit, DS_LEN)
3986 list_of_few_classes = train_dataframe[each_target].value_counts()[train_dataframe[each_target].value_counts()<=10].index.tolist()
3987 train_small = train_dataframe.loc[(train_dataframe[each_target].isin(list_of_few_classes))]
-> 3988 train_small, _ = train_test_split(train_dataframe, test_size=test_size, stratify=train_dataframe[targets])
3989 else:
3990 ### For Regression problems: load a small sample of data into a pandas dataframe ##

~/miniconda3/envs/CS280/lib/python3.7/site-packages/sklearn/model_selection/_split.py in train_test_split(test_size, train_size, random_state, shuffle, stratify, *arrays)
2439 cv = CVClass(test_size=n_test, train_size=n_train, random_state=random_state)
2440
-> 2441 train, test = next(cv.split(X=arrays[0], y=stratify))
2442
2443 return list(

~/miniconda3/envs/CS280/lib/python3.7/site-packages/sklearn/model_selection/_split.py in split(self, X, y, groups)
1598 """
1599 X, y, groups = indexable(X, y, groups)
-> 1600 for train, test in self._iter_indices(X, y, groups):
1601 yield train, test
1602

~/miniconda3/envs/CS280/lib/python3.7/site-packages/sklearn/model_selection/_split.py in _iter_indices(self, X, y, groups)
1939 if np.min(class_counts) < 2:
1940 raise ValueError(
-> 1941 "The least populated class in y has only 1"
1942 " member, which is too few. The minimum"
1943 " number of groups for any class cannot"

ValueError: The least populated class in y has only 1 member, which is too few. The minimum number of groups for any class cannot be less than 2.

I have tried replicating it over a dataframe with random numbers only and the same happened.
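The error comes from scikit-learn's stratified split, which requires every class to have at least two members; per the log above, featurewiz only samples 10000 rows (with stratification) once the data exceeds that limit, which is why the problem appears past 9999 rows. A minimal repro of the underlying sklearn behavior (illustrative):

import pandas as pd
from sklearn.model_selection import train_test_split

y = pd.Series([0] * 9999 + [1])   # one class has a single member
X = y.to_frame("feature")
# raises ValueError: The least populated class in y has only 1 member...
train_test_split(X, test_size=0.95, stratify=y)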

Question on nlp columns

Just a question
When is a column in a dataset considered an NLP one, and when a categorical one?

I found this condition in your code:

def classify_columns(df_preds, verbose=0):
...

if train[col].map(lambda x: len(x) if type(x) == str else 0).mean() >= max_nlp_char_size \
        and len(train[col].value_counts()) <= int(0.9 * len(train)) \
        and col not in string_bool_vars:
    var_df.loc[var_df['index'] == col, 'nlp_strings'] = 1

I wonder if it should instead be:

>= int(0.9*len(train))

Thanks,
Cheers

Getting error -> len(important_cats),len(final_list))) TypeError: object of type 'NoneType' has no len()

Hi,
I am trying to use the SULOV method to reduce the number of features in my dataset. The data I provide to the function is in a dataframe, all float type. Only the target variable is categorical (a problem of classifying healthy vs. sick subjects).
I tried giving the data to the function both as a dataframe and as the path to the csv. The result doesn't change.

this is the function call:
outputs = FW.featurewiz(path, "Healthy", corr_limit=0.70, sep=',', verbose=2, dask_xgboost_flag=False, nrows=None)

The algorithm gets to calculate and reduce the features, but then crashes with this error. What should I do to resolve it?

line 1386, in featurewiz
len(important_cats),len(final_list)))
TypeError: object of type 'NoneType' has no len()

Output exceeds the size limit. Open the full output data in a text editor

Hello, I met this error while testing featurewiz. I want to do some auto feature engineering, so I chose the old way, but unfortunately got "Output exceeds the size limit. Open the full output data in a text editor".

Detail:

  • X shape: (128463, 1341), with mixed string, int, float and NaN values.
  • code:
import featurewiz as FW
outputs = FW.featurewiz(dataname=X.reset_index(drop=True), target=y.reset_index(drop=True), corr_limit=0.70, verbose=2, sep=',', 
          header=0, test_data='',feature_engg='', category_encoders='',
          dask_xgboost_flag=False, nrows=None)
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
f:\Work\jupyter_pipeline\pj01\1.1.0 clean_data.ipynb Cell 126 in <cell line: 1>()
      1 if Config.add_feature:
      2     # # Add feature
      3     # from jinshu_model.build_models import HighDimensionFeatureAdder
   (...)
      8     # ce = HighDimensionFeatureAdder(max_gmm_component=4, onehot=False)
      9     # X = ce.fit_transform(X)
     10     import featurewiz as FW
---> 11     outputs = FW.featurewiz(dataname=X.reset_index(drop=True), target=y.reset_index(drop=True), corr_limit=0.70, verbose=2, sep=',', 
     12             header=0, test_data='',feature_engg='', category_encoders='',
     13             dask_xgboost_flag=False, nrows=None)
     14 else:
     15     ce = CategoricalEncoder()

File c:\Users\ufo\anaconda3\lib\site-packages\featurewiz\featurewiz.py:793, in featurewiz(dataname, target, corr_limit, verbose, sep, header, test_data, feature_engg, category_encoders, dask_xgboost_flag, nrows, **kwargs)
    791     print('Classifying features using a random sample of %s rows from dataset...' %nrows_limit)
    792     ##### you can use nrows_limit to select a small sample from data set ########################
--> 793     train_small = EDA_randomly_select_rows_from_dataframe(dataname, targets, nrows_limit, DS_LEN=dataname.shape[0])
    794     features_dict = classify_features(train_small, target)
    795 else:

File c:\Users\ufo\anaconda3\lib\site-packages\featurewiz\featurewiz.py:2977, in EDA_randomly_select_rows_from_dataframe(train_dataframe, targets, nrows_limit, DS_LEN)
   2975     test_size = 0.9
...
-> 5842     raise KeyError(f"None of [{key}] are in the [{axis_name}]")
   5844 not_found = list(ensure_index(key)[missing_mask.nonzero()[0]].unique())
   5845 raise KeyError(f"{not_found} not in index")

KeyError: "None of [Int64Index([0, 0, 0, 0, 1, 1, 0, 0, 0, 1,\n            ...\n            0, 0, 0, 0, 0, 0, 1, 0, 0, 0],\n           dtype='int64', length=128463)] are in the [columns]"

UnboundLocalError: local variable 'date_cols' referenced before assignment

UnboundLocalError                         Traceback (most recent call last)
/tmp/ipykernel_271453/4235074023.py in <module>
----> 1 X_train_selected = features.fit_transform(X_train, train_df['ground_truth_corrected'])

~/anaconda3/envs/XXX/lib/python3.8/site-packages/sklearn/base.py in fit_transform(self, X, y, **fit_params)
    853         else:
    854             # fit method of arity 2 (supervised transformation)
--> 855             return self.fit(X, y, **fit_params).transform(X)
    856 
    857 

~/anaconda3/envs/XXX/lib/python3.8/site-packages/featurewiz/featurewiz.py in fit(self, X, y)
   3613         #### Send target variable as it is so that y_train is analyzed properly ###
   3614         # Select features using featurewiz
-> 3615         features, X_sel = featurewiz(df, target, self.corr_limit, self.verbose, self.sep, 
   3616                 self.header, self.test_data, self.feature_engg, self.category_encoders,
   3617                 self.dask_xgboost_flag, self.nrows)

~/anaconda3/envs/XXX/lib/python3.8/site-packages/featurewiz/featurewiz.py in featurewiz(dataname, target, corr_limit, verbose, sep, header, test_data, feature_engg, category_encoders, dask_xgboost_flag, nrows, **kwargs)
   1720             print('    Could not revert column names to original. Try replacing them manually.')
   1721         print(f'Returning list of {len(important_features)} important features and a dataframe.')
-> 1722         if len(date_cols) > 0:
   1723             date_replacer = date_col_mappers.get  # For faster gets.
   1724             important_features1 = [date_replacer(n, n) for n in important_features2]

UnboundLocalError: local variable 'date_cols' referenced before assignment

I am using featurewiz version 0.1.06.

Also, I have no date columns, only int and float.

Why does test have no target, and how do I use the model to predict?

Hello,
I have three questions:
1: When I use train, test = FW.featurewiz(), I find that the returned test set does not contain the encoded target. But I need to recalculate balanced_accuracy_score, which requires the encoded target, so I added the target to the return value. Is this correct?
[screenshot]

2: How do I use the fitted model to predict? I take outputs[-1] as the fitted model, but its predictions are all 1, which does not match outputs[0]. This is my code; am I using it wrong? By the way, can you provide some examples of using simple_LightGBM_model, simple_XGBoost_model, etc.?
[screenshot]

3: When I use the fitted model to predict on raw data, how can I get the transformer for the raw data?

Hope to get your reply. Thanks!
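
A hedged sketch of one way around questions 1 and 2: use the library's scikit-learn-style FeatureWiz transformer (the exact constructor arguments may differ by version) to select features, then train and score your own model, so the encoded target never has to come back out of featurewiz. X_train/X_test/y_train/y_test are assumed to be your own split:

from featurewiz import FeatureWiz
from xgboost import XGBClassifier
from sklearn.metrics import balanced_accuracy_score

fwiz = FeatureWiz(corr_limit=0.70, feature_engg='', category_encoders='',
                  dask_xgboost_flag=False, nrows=None, verbose=0)
X_train_sel = fwiz.fit_transform(X_train, y_train)   # fit on train only
X_test_sel = fwiz.transform(X_test)                  # same columns, no refit

# Train and score your own model on the selected features.
model = XGBClassifier().fit(X_train_sel, y_train)
print(balanced_accuracy_score(y_test, model.predict(X_test_sel)))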

Requirements.txt version mistake?

Should the requirements for some libraries be >= as opposed to ~=?

featurewiz 0.1.991 requires Pillow~=9.0.0, but you have pillow 9.2.0 which is incompatible.
featurewiz 0.1.991 requires scikit-learn~=0.24, but you have scikit-learn 1.1.2 which is incompatible.

For scikit-learn, 0.24 is very old, and the pin prevents more recent versions of scikit-learn from being used.

Am I misunderstanding?
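
For context, ~= is PEP 440's "compatible release" operator, which is why pip rejects the newer versions:

# requirements.txt as shipped:
Pillow~=9.0.0        # means >=9.0.0, ==9.0.*  -> rejects 9.2.0
scikit-learn~=0.24   # means >=0.24,  ==0.*    -> rejects 1.1.2

# the looser pins being suggested:
Pillow>=9.0.0
scikit-learn>=0.24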

Method to transform new data only, after fitting

Hi!
Thanks for writing this package; it looks very interesting. I saw the article on Medium.

I am working with a time-series dataset.

I can run featurewiz on the existing time-series data, but once a new observation arrives I don't want to retrain and generate new features... I want to reuse the same features that were identified as relevant and simply transform the raw data into the featurewiz features.

Maybe you could have methods like sklearn: fit(), fit_transform(), transform().

You could write it as a class:

features = FeatureWiz(corr_limit=0.70, verbose=2, feature_engg=["interactions","groupby","target"])
output_train = features.fit_transform(train, target)
output_test = features.transform(test)
relevant_features = features.get_feat_list()

Also, maybe you could have a method to get the feature importances, like MI scores or permutation importance on the test dataset (see the sketch after this issue).

And the plot is very nice, but when featurewiz runs as a background process the plot should not come up; there should be an option to switch it off, or to have it as a method: features.make_plot().
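
In the meantime, the requested test-set importances can be computed outside featurewiz with plain scikit-learn. A rough sketch, assuming X_train_sel/X_test_sel are the selected feature frames and y_train/y_test the targets from your own split:

from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

model = RandomForestRegressor(random_state=0).fit(X_train_sel, y_train)
result = permutation_importance(model, X_test_sel, y_test,
                                n_repeats=10, random_state=0)

# Rank the selected features by mean importance on the held-out data.
ranked = sorted(zip(X_test_sel.columns, result.importances_mean),
                key=lambda t: -t[1])
for name, score in ranked:
    print(f'{name}: {score:.4f}')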

[QUESTION] untransform encoded categorical values and change type of problem

Hello, I'm testing featurewiz with a dataframe of numerical and categorical variables, and a target variable that ranges from 0 to 55, with most of its values between 0 and 6.

My first question concerns the fact that when I run:

outputs = FW.featurewiz(train_df, target='unique_offers_cut', feature_engg='', category_encoders='OneHotEncoder', dask_xgboost_flag=False, nrows=None, verbose=2)

Everything runs fine, but the final output is like this:

['OneHotEncoder_property_type_1',
 'OneHotEncoder_property_type_6',
 'OneHotEncoder_itv_region_10',
 'OneHotEncoder_itv_region_5',
 'OneHotEncoder_itv_region_8',
 'OneHotEncoder_listing_pricetype_12',
 'OneHotEncoder_property_type_3',
 'first_listed_price',
 'OneHotEncoder_property_type_4',...

Is there any chance of knowing what property_type_1 is, or at least having it transformed back to its original name? (See the sketch after this post.)

On the other hand, regarding the type of problem, is there any way to override it? I want to set it as a regression problem, but featurewiz treats the target variable as multi-class classification (and the XGBoost part ends up not working).

Thanks
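
On the first question, one hedged way to recover the category behind each suffix is to fit the same encoder family from category_encoders directly on the raw column, with use_cat_names=True so the generated column names carry the category values instead of ordinal suffixes:

import category_encoders as ce

raw = train_df[['property_type']]
enc = ce.OneHotEncoder(cols=['property_type'], use_cat_names=True)
print(enc.fit_transform(raw).columns.tolist())
# e.g. ['property_type_flat', 'property_type_house', ...] -- these should
# line up, in order, with the _1, _2, ... suffixes that the default
# use_cat_names=False produces in featurewiz's output.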

cannot replicate feature selection result

Is there a way to keep the feature selection result the same every time I run it? I ran the function several times and got a different result each time.

Original data - 365 variables
1st run - select 87 variables
2nd run - select 82 variables
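
As far as I can tell featurewiz does not expose a random_state argument, so a hedged attempt at reproducibility is to fix the global seeds immediately before each call (train_df and the 'target' name below are placeholders); if internal estimators draw their own seeds, runs can still differ:

import random
import numpy as np
import featurewiz as FW

random.seed(42)
np.random.seed(42)      # re-seed right before every run
outputs = FW.featurewiz(train_df, target='target', corr_limit=0.70,
                        feature_engg='', category_encoders='',
                        dask_xgboost_flag=False, nrows=None, verbose=0)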

verbose=0 ?

verbose=0 is not silent?
How can I get featurewiz to run without popping up the SULOV seaborn plot?
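
One workaround that does not depend on verbose: switch matplotlib to a non-interactive backend before featurewiz is imported, so the SULOV figure is still drawn but never shown. A minimal sketch:

import matplotlib
matplotlib.use('Agg')          # non-interactive backend: no windows pop up
import matplotlib.pyplot as plt

import featurewiz as FW        # import only after the backend is set
# ... run FW.featurewiz(...) as usual ...
plt.close('all')               # discard any figures created along the way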

XGB crashes

I have a matrix of shape (191, 758), and it runs into an error with featurewiz.

Current number of predictors = 569
    Finding Important Features using Boosted Trees algorithm...
        using 569 variables...
Finding top features using XGB is crashing. Continuing with all predictors...

ValueError: Columns must be same length as key when `len(date_cols) > 0`

This error is thrown by pandas at:
[screenshot]

This happens when len(date_cols) > 0.

I found that important_features is not equal to old_important_features:

In [4]: 'reg_date_hour' in important_features
Out[4]: True

In [5]: 'reg_date_hour' in old_important_features
Out[5]: False

In [6]: len(date_cols)
Out[6]: 2

In [14]: len(important_features) == len(old_important_features)
Out[14]: False

In [15]: len(important_features)
Out[15]: 460

In [16]: len(old_important_features)
Out[16]: 471

