Comments (12)
I think that
from dask_ml.datasets import make_regression
from dask_glm.regularizers import L1
from dask_glm.estimators import LinearRegression
X, y = make_regression(n_samples=1000, chunks=100)
lr = LinearRegression(regularizer=L1())
lr.fit(X, y)
is basically correct. I haven't looked at the various options for scikit-learn's Lasso.
Note: I think that all the pieces should be in place thanks to dask-glm. This should be a matter of translating the scikit-learn API to a linear regression with dask-glm's L1 regularizer.
Do you have any code snippets that I should look at for trying to do something like this?
Hmm... so when scikit-learn implements these sorts of things, they seem to support a vector or matrix for y. However, it seems that dask-glm only supports a vector for y. Do you know why that is? Would it be possible to change it? If so, how difficult would that be?
Edit: I've migrated this concern to issue #201.
Meaning a 2-D ndarray (though it is a fair question). I should add that scikit-learn typically coerces 1-D ndarrays into singleton 2-D ndarrays when 2-D ndarrays are allowed.
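A tiny illustration of that coercion with plain numpy (scikit-learn does the equivalent internally during input validation):

import numpy as np

y = np.arange(5)         # 1-D ndarray, shape (5,)
Y = y.reshape(-1, 1)     # singleton 2-D ndarray, shape (5, 1)
print(y.shape, Y.shape)  # (5,) (5, 1)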
Not sure whether squeezing makes sense. More likely, iterating over the 1-D slices and fitting them independently would make sense, which appears to be what scikit-learn is doing (see the sketch below). So this should benefit quite nicely from Distributed.
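A minimal sketch of that per-column idea, assuming a hypothetical 2-D Y built by stacking targets (the stacking and the loop are illustrative, not existing dask-glm API; each estimator still only sees a 1-D slice):

import dask.array as da
from dask_ml.datasets import make_regression
from dask_glm.regularizers import L1
from dask_glm.estimators import LinearRegression

X, y = make_regression(n_samples=1000, chunks=100)
Y = da.stack([y, 2 * y], axis=1)  # hypothetical 2-D target, one output per column

fits = []
for j in range(Y.shape[1]):
    lr = LinearRegression(regularizer=L1())
    lr.fit(X, Y[:, j])  # each 1-D slice is fit independently
    fits.append(lr)

Since the per-column fits are independent, a distributed scheduler can run them concurrently.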
+1, interested in this as well. The provided code
from dask_ml.datasets import make_regression
from dask_glm.regularizers import L1
from dask_glm.estimators import LinearRegression
X, y = make_regression(n_samples=1000, chunks=100)
lr = LinearRegression(regularizer=L1())
lr.fit(X, y)
is missing the ability to set the alpha value; the coefficients seem to indicate that this is not a proper lasso regression.
The following example I quickly threw together also doesn't appear to work properly, but it piggybacks on top of Dask GLM's ElasticNet the same way scikit's Lasso runs on top of scikit's ElasticNet.
import dask_glm.algorithms, dask_glm.families, dask_glm.regularizers

# X, y as in the snippets above
family = dask_glm.families.Normal()
regularizer = dask_glm.regularizers.ElasticNet(weight=1)
b = dask_glm.algorithms.gradient_descent(X=X, y=y, max_iter=100000, family=family, regularizer=regularizer, alpha=0.01, normalize=False)
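For reference, the scikit-learn relationship being mimicked here: Lasso(alpha=a) gives the same solution as ElasticNet(alpha=a, l1_ratio=1.0). A quick sanity check on a small in-memory problem (the random data is just for illustration):

import numpy as np
from sklearn.linear_model import ElasticNet, Lasso

rng = np.random.RandomState(0)
X_small = rng.randn(100, 5)
y_small = X_small @ rng.randn(5)

lasso = Lasso(alpha=0.01).fit(X_small, y_small)
enet = ElasticNet(alpha=0.01, l1_ratio=1.0).fit(X_small, y_small)  # pure-L1 elastic net
print(np.abs(lasso.coef_ - enet.coef_).max())  # ~0: the two agree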
Isn't it possible to set the regularization value with the code below?
import numpy as np
from dask_ml.datasets import make_regression
from dask_ml.linear_model import LinearRegression

X, y = make_regression(n_samples=1000, chunks=100)
lr = LinearRegression(penalty="l1", C=1e-6)  # penalty is the kwarg name in dask-ml
lr.fit(X, y)
assert np.abs(lr.coef_).max() < 1e-3, "C=1e-6 should produce mostly 0 coefs"
C and alpha/lamduh control the strength of the regularization (but might be inverses of each other).
Isn't it possible to set the regularization value with the code below?
Indeed this is what I was missing, appreciate the pointer!
Given a small C (implying a large alpha), the regression does appear to behave similarly to Lasso. However, you're also right that C is the inverse of the alpha parameter.
Scikit's documentation says alpha = 1/(2C), where C is what other linear-regression libraries use. So an alpha of 0.01 should correspond to a C of 50.
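A quick arithmetic check of that conversion (assuming the alpha = 1/(2C) relation just quoted):

alpha = 0.01
C = 1 / (2 * alpha)  # invert alpha = 1 / (2 * C)
print(C)  # 50.0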
However, with the following code comparing the outputs of both scikit's Lasso and Dask's "lasso":
from sklearn.linear_model import Lasso
from dask_ml.datasets import make_regression
from dask_ml.linear_model import LinearRegression

X, y = make_regression(n_samples=1000, chunks=100)

lr = LinearRegression(penalty='l1', C=50, fit_intercept=False)  # Dask-ML "lasso"
lr.fit(X, y)

r = Lasso(alpha=0.01, fit_intercept=False)  # scikit-learn reference
r.fit(X.compute(), y.compute())  # materialize the dask arrays for scikit-learn

print(lr.coef_)
print(r.coef_)
The coefficients from the dask model fit appear unstable. For very small C, they do look the same as scikit's.
I'm no ML expert (in fact, I'm just slapping some code together), but it seems like there's definitely an inverse relationship, just not one that's 1/(2C). Which would be fine, except that the performance of Dask-ML at very small C is several times worse than scikit's: about 30x worse, for values of C and alpha that empirically appear to give very similar coefficients.
Is there something else I am missing here? Or is this performance slowdown to be expected?
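For concreteness, a timing comparison along the lines of the "30x" claim might look like this sketch (illustrative only; absolute numbers depend on chunking, scheduler, and hardware):

import time
from sklearn.linear_model import Lasso
from dask_ml.datasets import make_regression
from dask_ml.linear_model import LinearRegression

X, y = make_regression(n_samples=1000, chunks=100)
Xc, yc = X.compute(), y.compute()  # in-memory copies for scikit-learn

t0 = time.perf_counter()
LinearRegression(penalty="l1", C=50, fit_intercept=False).fit(X, y)
t_dask = time.perf_counter() - t0

t0 = time.perf_counter()
Lasso(alpha=0.01, fit_intercept=False).fit(Xc, yc)
t_sklearn = time.perf_counter() - t0

print(t_dask / t_sklearn)  # the ratio behind the "30x" observation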
except that the performance of Dask-ML at very small C is several times worse than scikit's: about 30x worse, for values of C and alpha that empirically appear to give very similar coefficients.
What do you mean by "30× worse"? I'm not sure I'd expect Dask-ML to show any kind of timing acceleration with a small array.
C and alpha that empirically appear to give very similar coefficients.
I've verified that C and alpha give very similar coefficients. The two sets of coefficients are very close in relative error, a standard benchmark in optimization:
# script above
import numpy as np
import numpy.linalg as LA

rel_error = LA.norm(lr.coef_ - r.coef_) / LA.norm(r.coef_)
print(rel_error)  # 0.00172; very small. The two vectors are close in Euclidean distance
print(np.abs(r.coef_).max())  # 89.2532; the scikit-learn coefs are large
print(np.abs(lr.coef_ - r.coef_).mean())  # 0.01543; the mean error is small
print(np.abs(lr.coef_ - r.coef_).max())  # 0.10180; the max error is still pretty large
print(np.median(np.abs(lr.coef_ - r.coef_)))  # 0.01077; larger than expected (1e-3 or 1e-4), fair given debugging