Kernel ridge regression.
Kernel ridge regression (KRR) combines ridge regression (linear least squares with l2-norm regularization) with the kernel trick. It thus learns a linear function in the space induced by the respective kernel and the data. For non-linear kernels, this corresponds to a non-linear function in the original space.
The form of the model learned by KRR is identical to support vector regression (SVR). However, different loss functions are used: KRR uses squared error loss while support vector regression uses epsilon-insensitive loss, both combined with l2 regularization. In contrast to SVR, fitting a KRR model can be done in closed form and is typically faster for medium-sized datasets. On the other hand, the learned model is non-sparse and thus slower at prediction time than SVR, which learns a sparse model for epsilon > 0.
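In outline (a sketch of the standard closed-form solution, not text from the library's documentation): for a single target, with kernel matrix :math:`K` over the :math:`n` training samples and regularization strength :math:`\alpha`, the dual coefficients and prediction function are

.. math::

    \hat{c} = (K + \alpha I)^{-1} y,
    \qquad
    \hat{f}(x) = \sum_{i=1}^{n} \hat{c}_i \, k(x, x_i),

which is why fitting reduces to solving one regularized linear system and why every training sample contributes to each prediction (the non-sparsity noted above).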
This estimator has built-in support for multi-variate regression (i.e., when y is a 2d-array of shape ``(n_samples, n_targets)``).
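For example (an illustrative sketch, not a doctest taken from the library's documentation):

>>> import numpy as np
>>> from sklearn.kernel_ridge import KernelRidge
>>> rng = np.random.RandomState(0)
>>> X = rng.randn(10, 5)
>>> Y = rng.randn(10, 2)          # two regression targets
>>> KernelRidge(kernel='rbf').fit(X, Y).predict(X).shape
(10, 2)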
Read more in the :ref:`User Guide <kernel_ridge>`.
Parameters
----------
alpha : float or array-like of shape (n_targets,), default=1.0
    Small positive values of alpha improve the conditioning of the
    problem and reduce the variance of the estimates. Alpha corresponds
    to ``(2*C)^-1`` in other linear models such as LogisticRegression or
    LinearSVC. If an array is passed, penalties are assumed to be
    specific to the targets. Hence they must correspond in number.
kernel : string or callable, default='linear'
    Kernel mapping used internally. A callable should accept two
    arguments and the keyword arguments passed to this object as
    ``kernel_params``, and should return a floating point number (see
    the callable-kernel sketch after this parameter list). Set to
    'precomputed' in order to pass a precomputed kernel matrix to the
    estimator methods instead of samples.
gamma : float, default=None
    Gamma parameter for the RBF, laplacian, polynomial, exponential chi2
    and sigmoid kernels. Interpretation of the default value is left to
    the kernel; see the documentation for sklearn.metrics.pairwise.
    Ignored by other kernels.
degree : float, default=3
    Degree of the polynomial kernel. Ignored by other kernels.
coef0 : float, default=1
    Zero coefficient for polynomial and sigmoid kernels. Ignored by
    other kernels.
kernel_params : mapping of string to any, optional
    Additional parameters (keyword arguments) for the kernel function
    when it is passed as a callable object.
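A minimal sketch of the callable form described above; ``scaled_rbf`` and
its ``scale`` keyword are made-up names for this illustration:

>>> import numpy as np
>>> from sklearn.kernel_ridge import KernelRidge
>>> def scaled_rbf(x, y, scale=1.0):
...     # called once per pair of samples; must return a float
...     return float(np.exp(-scale * np.sum((x - y) ** 2)))
>>> rng = np.random.RandomState(0)
>>> X, y = rng.randn(8, 3), rng.randn(8)
>>> krr = KernelRidge(kernel=scaled_rbf,
...                   kernel_params={'scale': 0.5}).fit(X, y)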
Attributes
----------
dual_coef_ : array of shape (n_samples,) or (n_samples, n_targets)
    Representation of weight vector(s) in kernel space.
X_fit_ : {array-like, sparse matrix} of shape (n_samples, n_features)
    Training data, which is also required for prediction. If
    kernel == 'precomputed' this is instead the precomputed training
    kernel matrix, of shape (n_samples, n_samples); see the
    precomputed-kernel sketch below.
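A minimal sketch of the precomputed-kernel workflow mentioned above
(``rbf_kernel`` is used here only as one convenient way to build the
matrices; any valid kernel matrix works):

>>> import numpy as np
>>> from sklearn.kernel_ridge import KernelRidge
>>> from sklearn.metrics.pairwise import rbf_kernel
>>> rng = np.random.RandomState(0)
>>> X_train, y_train = rng.randn(10, 4), rng.randn(10)
>>> X_test = rng.randn(3, 4)
>>> K_train = rbf_kernel(X_train)             # (n_samples, n_samples)
>>> K_test = rbf_kernel(X_test, X_train)      # (n_test, n_samples)
>>> krr = KernelRidge(kernel='precomputed').fit(K_train, y_train)
>>> krr.predict(K_test).shape
(3,)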
References
----------
* Kevin P. Murphy, "Machine Learning: A Probabilistic Perspective",
  The MIT Press, chapter 14.4.3, pp. 492-493.
See Also
--------
sklearn.linear_model.Ridge : Linear ridge regression.
sklearn.svm.SVR : Support Vector Regression implemented using libsvm.
Examples
--------
>>> from sklearn.kernel_ridge import KernelRidge
>>> import numpy as np
>>> n_samples, n_features = 10, 5
>>> rng = np.random.RandomState(0)
>>> y = rng.randn(n_samples)
>>> X = rng.randn(n_samples, n_features)
>>> clf = KernelRidge(alpha=1.0)
>>> clf.fit(X, y)
KernelRidge(alpha=1.0)
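A prediction call on the fitted model could then follow (an illustrative
continuation, not copied from the library's documentation; the shape
matches the 1-D target above):

>>> clf.predict(X).shape
(10,)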