Comments (4)
Hey @berkeserol, I have two comments. First, this runs well, see if it helps:
```python
import numpy as np
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from pytorch_widedeep import Trainer
from pytorch_widedeep.models import TabMlp, Wide, WideDeep  # noqa: F401
from pytorch_widedeep.preprocessing import TabPreprocessor, WidePreprocessor

X, y = make_classification(n_samples=100, n_features=10, n_informative=10, n_redundant=0)
X = pd.DataFrame(X, columns=[f"col_{i}" for i in range(10)])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=False)

tab_preprocessor = TabPreprocessor(continuous_cols=X.columns.tolist())
X_train_processed = tab_preprocessor.fit_transform(X_train)
X_test_processed = tab_preprocessor.transform(X_test)

wide_preprocessor = WidePreprocessor(wide_cols=X.columns.tolist())
X_wide = wide_preprocessor.fit_transform(X_train)

# Define the model
tab_mlp = TabMlp(
    column_idx=tab_preprocessor.column_idx,
    continuous_cols=tab_preprocessor.continuous_cols,
    mlp_hidden_dims=[64, 32],
)
wide = Wide(input_dim=np.unique(X_train).shape[0])
model = WideDeep(wide=wide, deeptabular=tab_mlp)

# Define the trainer and train the model
trainer = Trainer(model, objective="binary")
trainer.fit(
    X_tab=X_train_processed,
    X_wide=X_wide,
    target=y_train,
    n_epochs=10,
    batch_size=32,
)
```
You do not need those `.astype(float)` calls.
Second, and also important: the wide component is not really designed for continuous features. The way the WidePreprocessor prepares the data, everything ends up in a lookup table (a dictionary), and the linear layer is coded as embeddings (have a look here for an explanation). For example, if you access the encoding_dict attribute in the example above, you will see this:
```python
>>> wide_preprocessor.encoding_dict
{'col_0_-3.956915512990843': 1,
 'col_0_1.3006394509583112': 2,
 'col_0_-2.464936360812435': 3,
 'col_0_2.4968884101664472': 4,
 'col_0_-1.0889744597533615': 5,
 'col_0_-1.7765530407990036': 6,
 'col_0_-2.9406750132381694': 7,
 'col_0_-4.13009658776504': 8,
 'col_0_1.3052810320392583': 9,
 'col_0_-0.7368640448231503': 10,
 'col_0_0.5910531307205757': 11,
 'col_0_1.3899029479159295': 12,
 'col_0_4.4319842979409145': 13,
 'col_0_-1.2444478420364677': 14,
 'col_0_1.818636056977371': 15,
 ...
```
i.e. one encoding per individual value, per column.
Just that. Hope this helps, and thanks for opening the issue and trying the library!
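To make the "linear layer coded as embeddings" point concrete, here is a minimal numpy sketch (not the library's actual implementation): a linear layer over one-hot encoded categories produces exactly the same output as indexing an embedding table, which is why each distinct value needs its own row in the table.

```python
import numpy as np

# A linear layer over one-hot categories is equivalent to indexing an
# embedding (lookup) table: one_hot(i) @ W == W[i].
rng = np.random.default_rng(0)
n_categories, out_dim = 5, 1
W = rng.normal(size=(n_categories, out_dim))  # one weight row per category
b = 0.1

# A sample with two categorical features, already encoded as indices
idx = np.array([2, 4])

# Classic formulation: one-hot encode, then matrix-multiply
one_hot = np.zeros((len(idx), n_categories))
one_hot[np.arange(len(idx)), idx] = 1.0
linear_out = one_hot @ W  # shape (2, 1)

# Embedding formulation: simply index the table
embedding_out = W[idx]  # shape (2, 1)

assert np.allclose(linear_out, embedding_out)

# The wide component sums the per-feature contributions and adds a bias
wide_output = embedding_out.sum() + b
```

The embedding formulation avoids materializing the (potentially huge) one-hot matrix, which is the point of the lookup-table design.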
from pytorch-widedeep.
Hi @jrzaurin, thank you for the answer. I tried your code exactly and it works fine. However, when I change the input dataframe to the file I attached, I get the error that I mentioned in the first message. Can you try?
widedeep_test_X.csv
OK, so I see what is happening. I will change the example, and here is a solution.
ISSUE:
Two of the values in your dataset are identical across different columns:
```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

X = pd.read_csv("~/Desktop/widedeep_test_X.csv")
y = np.random.randint(2, size=X.shape[0])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=False)

# unique elements counted column by column
print(len(np.hstack([X_train[col].unique() for col in X_train.columns])))
# 90

# unique elements considering the whole df at once
print(np.unique(X_train).shape[0])
# 89
```
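The attached CSV is not reproduced here, but the same mismatch can be shown with a small synthetic frame (hypothetical data) in which one value repeats across two columns:

```python
import numpy as np
import pandas as pd

# Synthetic frame where the value 1.5 appears in two different columns
X_train = pd.DataFrame({"col_1": [0.1, 1.5, 0.3], "col_2": [1.5, 0.7, 0.9]})

# Counted column by column: 6 unique values, because the repeated 1.5
# counts once per column (the encoding prefixes the column name)
per_column = len(np.hstack([X_train[col].unique() for col in X_train.columns]))

# Counted over the whole frame at once: 5, since 1.5 is counted only once
overall = np.unique(X_train).shape[0]

print(per_column, overall)  # 6 5
```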
Because of the way the linear layer (the wide component) is implemented, the number that matters is the first one. As I wrote before, it is implemented as an Embedding layer where each element is an entry in the lookup table. To build that table, we append the column name to the value for each column. In your example, if we refer to the repeated float as repeated_float and assume it appears in columns 1 and 2, it will be encoded as:
```
col_1_repeated_float: encoding_n
...
col_2_repeated_float: encoding_m
```
Now, if you define your wide model as `wide = Wide(input_dim=np.unique(X_train).shape[0])`, we are defining an embedding layer with an input dim of 90 (89 + index 0, reserved for 'unseen' categories), when in reality we need an embedding layer of 91 (90 + index 0, reserved for 'unseen' categories), since col_1_repeated_float and col_2_repeated_float should be encoded differently.
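The failure mode can be illustrated without the library. Below is a numpy sketch (not pytorch-widedeep internals) in which the lookup table is sized from the global unique count while the encodings run up to the per-column count, so the largest index falls outside the table:

```python
import numpy as np

n_global_unique = 89   # np.unique over the whole frame
n_per_column = 90      # unique values counted column by column

# Table sized from the global count, plus index 0 for 'unseen' categories
table = np.zeros((n_global_unique + 1, 1))

# The preprocessor hands out encodings 1..n_per_column, so the largest
# index is 90 -- out of range for a table with rows 0..89
try:
    table[n_per_column]
except IndexError as e:
    print("lookup failed:", e)

# Sizing the table from the number of encodings fixes it
table_ok = np.zeros((n_per_column + 1, 1))
_ = table_ok[n_per_column]  # now a valid lookup
```

In the actual model the out-of-range index surfaces as an embedding lookup error during the forward pass rather than a plain numpy IndexError, but the cause is the same.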
SOLUTION:
It is safer to define the wide model as `wide = Wide(input_dim=len(wide_preprocessor.encoding_dict))`. Then this code should run:
```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from pytorch_widedeep import Trainer
from pytorch_widedeep.models import TabMlp, Wide, WideDeep  # noqa: F401
from pytorch_widedeep.preprocessing import TabPreprocessor, WidePreprocessor

X = pd.read_csv("~/Desktop/widedeep_test_X.csv")
y = np.random.randint(2, size=X.shape[0])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=False)

tab_preprocessor = TabPreprocessor(continuous_cols=X.columns.tolist())
X_train_processed = tab_preprocessor.fit_transform(X_train)

wide_preprocessor = WidePreprocessor(wide_cols=X.columns.tolist())
X_wide = wide_preprocessor.fit_transform(X_train)

# Define the model
tab_mlp = TabMlp(
    column_idx=tab_preprocessor.column_idx,
    continuous_cols=tab_preprocessor.continuous_cols,
    mlp_hidden_dims=[64, 32],
)
wide = Wide(input_dim=len(wide_preprocessor.encoding_dict))
model = WideDeep(wide=wide, deeptabular=tab_mlp)

# Define the trainer and train the model
trainer = Trainer(model, objective="binary")
trainer.fit(
    X_tab=X_train_processed, X_wide=X_wide, target=y_train, n_epochs=1, batch_size=2
)
```
It is solved. Thanks