Cross-validated Lasso, using the LARS algorithm.
See glossary entry for :term:`cross-validation estimator`.
The optimization objective for Lasso is::
    (1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1
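As a rough illustration only (not part of the estimator's API), this
objective can be evaluated in NumPy; a minimal sketch, assuming dense
arrays ``X`` and ``y``, a coefficient vector ``w``, and a scalar
``alpha``::

    import numpy as np

    def lasso_objective(X, y, w, alpha):
        # Residual sum of squares, scaled by 2 * n_samples.
        n_samples = X.shape[0]
        rss = np.sum((y - X @ w) ** 2) / (2 * n_samples)
        # L1 penalty on the coefficient vector.
        return rss + alpha * np.sum(np.abs(w))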
Read more in the :ref:`User Guide <least_angle_regression>`.
Parameters
----------
fit_intercept : bool, default=True
    Whether to calculate the intercept for this model. If set
    to False, no intercept will be used in calculations
    (i.e. data is expected to be centered).
verbose : bool or int, default=False
    Sets the verbosity amount.
max_iter : int, default=500
    Maximum number of iterations to perform.
normalize : bool, default=True
    This parameter is ignored when ``fit_intercept`` is set to False.
    If True, the regressors X will be normalized before regression by
    subtracting the mean and dividing by the l2-norm.
    If you wish to standardize, please use
    :class:`sklearn.preprocessing.StandardScaler` before calling ``fit``
    on an estimator with ``normalize=False``.
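    For explicit standardization, a minimal pipeline sketch (assuming a
    scikit-learn version in which ``normalize`` is still accepted)::

        from sklearn.datasets import make_regression
        from sklearn.linear_model import LassoLarsCV
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        X, y = make_regression(noise=4.0, random_state=0)
        # Standardize explicitly, then disable built-in normalization.
        model = make_pipeline(StandardScaler(),
                              LassoLarsCV(normalize=False))
        model.fit(X, y)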
precompute : bool or 'auto', default='auto'
    Whether to use a precomputed Gram matrix to speed up calculations.
    If set to ``'auto'`` let us decide. The Gram matrix cannot be passed
    as argument since we will use only subsets of X.
cv : int, cross-validation generator or an iterable, default=None
    Determines the cross-validation splitting strategy.
    Possible inputs for cv are:
    - None, to use the default 5-fold cross-validation,
    - integer, to specify the number of folds,
    - :term:`CV splitter`,
    - an iterable yielding (train, test) splits as arrays of indices.
    For integer/None inputs, :class:`KFold` is used.

    Refer to the :ref:`User Guide <cross_validation>` for the various
    cross-validation strategies that can be used here; a sketch of
    passing an explicit splitter follows below.
    .. versionchanged:: 0.22
        ``cv`` default value if None changed from 3-fold to 5-fold.
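    A minimal sketch of passing an explicit splitter (the dataset here
    is arbitrary)::

        from sklearn.datasets import make_regression
        from sklearn.linear_model import LassoLarsCV
        from sklearn.model_selection import KFold

        X, y = make_regression(noise=4.0, random_state=0)
        # Shuffled 5-fold CV instead of the default deterministic splits.
        cv = KFold(n_splits=5, shuffle=True, random_state=0)
        reg = LassoLarsCV(cv=cv).fit(X, y)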
max_n_alphas : int, default=1000
    The maximum number of points on the path used to compute the
    residuals in the cross-validation.
n_jobs : int or None, default=None
    Number of CPUs to use during the cross validation.
    ``None`` means 1 unless in a :obj:`joblib.parallel_backend` context.
    ``-1`` means using all processors.
    See :term:`Glossary <n_jobs>` for more details.
eps : float, optional
    The machine-precision regularization in the computation of the
    Cholesky diagonal factors. Increase this for very ill-conditioned
    systems. By default, ``np.finfo(np.float).eps`` is used.
copy_X : bool, default=True
    If True, X will be copied; else, it may be overwritten.
positive : bool, default=False
    Restrict coefficients to be >= 0. Be aware that you might want to
    remove fit_intercept which is set True by default.
    Under the positive restriction the model coefficients do not converge
    to the ordinary-least-squares solution for small values of alpha.
    Only coefficients up to the smallest alpha value
    (``alphas_[alphas_ > 0.].min()`` when fit_path=True) reached by the
    stepwise Lars-Lasso algorithm are typically in congruence with the
    solution of the coordinate descent Lasso estimator.
    As a consequence, using LassoLarsCV only makes sense for problems
    where a sparse solution is expected and/or reached.
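    A minimal sketch of the positive restriction (illustrative only;
    the dataset is arbitrary)::

        from sklearn.datasets import make_regression
        from sklearn.linear_model import LassoLarsCV

        X, y = make_regression(noise=4.0, random_state=0)
        # Per the note above, the intercept is disabled here.
        reg = LassoLarsCV(cv=5, positive=True,
                          fit_intercept=False).fit(X, y)
        # All coefficients are constrained to be non-negative.
        assert (reg.coef_ >= 0).all()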
Attributes
----------
coef_ : array-like of shape (n_features,)
    Parameter vector (``w`` in the objective formula).
intercept_ : float
    Independent term in decision function.
coef_path_ : array-like of shape (n_features, n_alphas)
    The varying values of the coefficients along the path.
alpha_ : float
    The estimated regularization parameter alpha.
alphas_ : array-like of shape (n_alphas,)
    The different values of alpha along the path.
cv_alphas_ : array-like of shape (n_cv_alphas,)
    All the values of alpha along the path for the different folds.
mse_path_ : array-like of shape (n_folds, n_cv_alphas)
    The mean squared error on the left-out data for each fold along
    the path (alpha values given by ``cv_alphas_``).
n_iter_ : array-like or int
    The number of iterations run by Lars with the optimal alpha.
Examples
--------
>>> from sklearn.linear_model import LassoLarsCV
>>> from sklearn.datasets import make_regression
>>> X, y = make_regression(noise=4.0, random_state=0)
>>> reg = LassoLarsCV(cv=5).fit(X, y)
>>> reg.score(X, y)
0.9992...
>>> reg.alpha_
0.0484...
>>> reg.predict(X[:1,])
array([-77.8723...])
Notes
-----
The object solves the same problem as the LassoCV object. However,
unlike LassoCV, it finds the relevant alpha values by itself.
In general, because of this property, it will be more stable.
However, it is more fragile to heavily multicollinear datasets.
It is more efficient than the LassoCV if only a small number of features are selected compared to the total number, for instance if there are very few samples compared to the number of features.
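As a rough comparison (illustrative only; the selected alphas depend on
the data), both estimators can be fit on the same problem::

    from sklearn.datasets import make_regression
    from sklearn.linear_model import LassoCV, LassoLarsCV

    # Few samples relative to features: the regime where LassoLarsCV
    # is typically the more efficient choice.
    X, y = make_regression(n_samples=50, n_features=200, noise=4.0,
                           random_state=0)
    lars = LassoLarsCV(cv=5).fit(X, y)  # alphas found along the LARS path
    cd = LassoCV(cv=5).fit(X, y)        # alphas taken from a preset grid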
See also
--------
lars_path, LassoLars, LarsCV, LassoCV