Lasso model fit with Lars using BIC or AIC for model selection
The optimization objective for Lasso is::

    (1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1
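As an illustration only (``lasso_objective`` is a hypothetical helper,
not part of the estimator's API), this objective can be evaluated
directly with NumPy::

    import numpy as np

    def lasso_objective(X, y, w, alpha):
        # (1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1
        n_samples = X.shape[0]
        residual = y - X @ w
        return (residual @ residual / (2 * n_samples)
                + alpha * np.abs(w).sum())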
AIC is the Akaike information criterion and BIC is the Bayesian
information criterion. Such criteria are useful to select the value
of the regularization parameter by making a trade-off between the
goodness of fit and the complexity of the model. A good model should
explain the data well while being simple.
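For intuition, a simplified sketch of such a criterion (up to additive
constants, following Eqns. 2.15 and 2.16 in Zou et al., 2007; the exact
scaling used internally is described under ``criterion_`` below)::

    import numpy as np

    def information_criterion(y, y_pred, df, criterion='aic'):
        # n * MSE / sigma^2 penalized by K * df, with K = 2 for AIC
        # and K = log(n) for BIC. sigma^2 is a noise-variance
        # estimate, crudely taken as var(y) in this sketch.
        n = len(y)
        mse = np.mean((y - y_pred) ** 2)
        sigma2 = np.var(y)
        K = 2.0 if criterion == 'aic' else np.log(n)
        return n * mse / sigma2 + K * df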
Read more in the :ref:`User Guide <least_angle_regression>`.
Parameters
----------
criterion : {'bic', 'aic'}, default='aic'
    The type of criterion to use.
fit_intercept : bool, default=True
    Whether to calculate the intercept for this model. If set
    to False, no intercept will be used in calculations
    (i.e. data is expected to be centered).
verbose : bool or int, default=False
    Sets the verbosity amount.
normalize : bool, default=True
    This parameter is ignored when ``fit_intercept`` is set to False.
    If True, the regressors X will be normalized before regression by
    subtracting the mean and dividing by the l2-norm.
    If you wish to standardize, please use
    :class:`sklearn.preprocessing.StandardScaler` before calling ``fit``
    on an estimator with ``normalize=False``.
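    For example, a minimal sketch of this recommended pattern::

        from sklearn.linear_model import LassoLarsIC
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        # Standardize explicitly, then disable the built-in normalization.
        model = make_pipeline(StandardScaler(),
                              LassoLarsIC(criterion='aic', normalize=False))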
precompute : bool, 'auto' or array-like, default='auto'
    Whether to use a precomputed Gram matrix to speed up calculations.
    If set to ``'auto'``, let us decide. The Gram matrix can also be
    passed as argument.
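    For illustration, a minimal sketch of passing a precomputed Gram
    matrix (``fit_intercept=False`` and ``normalize=False`` are set here
    so that the supplied Gram matches the unprocessed design matrix; the
    data are synthetic)::

        import numpy as np
        from sklearn.linear_model import LassoLarsIC

        rng = np.random.RandomState(0)
        X = rng.randn(50, 10)
        y = rng.randn(50)
        # Gram matrix of the design, shape (n_features, n_features).
        reg = LassoLarsIC(criterion='bic', fit_intercept=False,
                          normalize=False, precompute=X.T @ X).fit(X, y)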
max_iter : int, default=500
    Maximum number of iterations to perform. Can be used for
    early stopping.
eps : float, optional
    The machine-precision regularization in the computation of the
    Cholesky diagonal factors. Increase this for very ill-conditioned
    systems. Unlike the ``tol`` parameter in some iterative
    optimization-based algorithms, this parameter does not control
    the tolerance of the optimization.
    By default, ``np.finfo(float).eps`` is used.
copy_X : bool, default=True
    If True, X will be copied; else, it may be overwritten.
positive : bool, default=False
    Restrict coefficients to be >= 0. Be aware that you might want to
    remove fit_intercept which is set True by default.
    Under the positive restriction the model coefficients do not converge
    to the ordinary-least-squares solution for small values of alpha.
    Only coefficients up to the smallest alpha value
    (``alphas_[alphas_ > 0.].min()`` when fit_path=True) reached by
    the stepwise Lars-Lasso algorithm are typically in congruence with
    the solution of the coordinate descent Lasso estimator.
    As a consequence, using LassoLarsIC only makes sense for problems
    where a sparse solution is expected and/or reached.
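    A minimal sketch of a non-negative fit (``fit_intercept=False`` is
    set here, per the note above; the data are synthetic)::

        import numpy as np
        from sklearn.linear_model import LassoLarsIC

        rng = np.random.RandomState(0)
        X = rng.randn(100, 5)
        y = X @ np.array([1., 2., 0., 0., 0.]) + 0.1 * rng.randn(100)
        reg = LassoLarsIC(criterion='bic', positive=True,
                          fit_intercept=False).fit(X, y)
        assert np.all(reg.coef_ >= 0)  # coefficients are non-negative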
Attributes
----------
coef_ : array-like of shape (n_features,)
    Parameter vector (w in the formulation formula).
intercept_ : float
    Independent term in decision function.
alpha_ : float
    The alpha parameter chosen by the information criterion.
n_iter_ : int
    Number of iterations run by lars_path to find the grid of alphas.
criterion_ : array-like of shape (n_alphas,)
    The value of the information criteria ('aic', 'bic') across all
    alphas. The alpha which has the smallest information criterion is
    chosen. This value is larger by a factor of ``n_samples`` compared
    to Eqns. 2.15 and 2.16 in (Zou et al, 2007).
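    A sketch of checking the selection by hand (assumes a fitted
    estimator ``reg`` as in the Examples below, and an ``alphas_``
    attribute holding the alpha grid, which is an assumption of this
    sketch)::

        import numpy as np

        best = np.argmin(reg.criterion_)  # index of smallest AIC/BIC
        assert reg.alpha_ == reg.alphas_[best]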
Examples
--------
>>> from sklearn import linear_model
>>> reg = linear_model.LassoLarsIC(criterion='bic')
>>> reg.fit([[-1, 1], [0, 0], [1, 1]], [-1.1111, 0, -1.1111])
LassoLarsIC(criterion='bic')
>>> print(reg.coef_)
[ 0.  -1.11...]
Notes
-----
The estimation of the number of degrees of freedom is given by:

"On the degrees of freedom of the lasso"
Hui Zou, Trevor Hastie, and Robert Tibshirani
Ann. Statist. Volume 35, Number 5 (2007), 2173-2192.
https://en.wikipedia.org/wiki/Akaike_information_criterion
https://en.wikipedia.org/wiki/Bayesian_information_criterion
See Also
--------
lars_path, LassoLars, LassoLarsCV