Approximate a kernel map using a subset of the training data.
Constructs an approximate feature map for an arbitrary kernel using a subset of the data as its basis.
Read more in the :ref:`User Guide <nystroem_kernel_approx>`.
.. versionadded:: 0.13
Parameters
----------
kernel : string or callable, default="rbf"
    Kernel map to be approximated. A callable should accept two
    arguments and the keyword arguments passed to this object as
    ``kernel_params``, and should return a floating point number
    (see the callable-kernel sketch after this parameter list).
gamma : float, default=None
    Gamma parameter for the RBF, laplacian, polynomial, exponential chi2
    and sigmoid kernels. Interpretation of the default value is left to
    the kernel; see the documentation for sklearn.metrics.pairwise.
    Ignored by other kernels.
coef0 : float, default=None
    Zero coefficient for polynomial and sigmoid kernels.
    Ignored by other kernels.
degree : float, default=None
    Degree of the polynomial kernel. Ignored by other kernels.
kernel_params : mapping of string to any, optional
    Additional parameters (keyword arguments) for the kernel function
    when it is passed as a callable object.
n_components : int
    Number of features to construct.
    How many data points will be used to construct the mapping.
random_state : int, RandomState instance or None, optional (default=None)
    If int, random_state is the seed used by the random number generator;
    If RandomState instance, random_state is the random number generator;
    If None, the random number generator is the RandomState instance used
    by `np.random`.
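For illustration only, a minimal sketch of supplying a custom callable kernel
together with ``kernel_params`` (the ``my_kernel`` function and its ``scale``
keyword below are hypothetical, not part of scikit-learn):

>>> import numpy as np
>>> from sklearn.kernel_approximation import Nystroem
>>> def my_kernel(x, y, scale=1.0):  # hypothetical kernel: two samples in, one float out
...     return float(np.exp(-scale * np.sum((x - y) ** 2)))
>>> X = np.random.RandomState(0).rand(20, 3)
>>> mapper = Nystroem(kernel=my_kernel,
...                   kernel_params={"scale": .5},
...                   n_components=5,
...                   random_state=0)
>>> mapper.fit_transform(X).shape
(20, 5)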
Attributes
----------
components_ : array, shape (n_components, n_features)
    Subset of training points used to construct the feature map.
component_indices_ : array, shape (n_components,)
    Indices of ``components_`` in the training set.
normalization_ : array, shape (n_components, n_components)
    Normalization matrix needed for embedding.
    Square root of the kernel matrix on ``components_``.
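As a rough sketch of how these attributes fit together, the embedding of ``X``
can be thought of as the kernel between ``X`` and ``components_`` multiplied by
``normalization_.T`` (an assumption made for illustration, not a guaranteed
contract of the class):

>>> import numpy as np
>>> from sklearn.kernel_approximation import Nystroem
>>> from sklearn.metrics.pairwise import rbf_kernel
>>> X = np.random.RandomState(0).rand(30, 4)
>>> ny = Nystroem(kernel="rbf", gamma=.5, n_components=10,
...               random_state=0).fit(X)
>>> K = rbf_kernel(X, ny.components_, gamma=.5)  # kernel between X and the basis points
>>> np.allclose(ny.transform(X), K @ ny.normalization_.T)
True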
Examples
--------
>>> from sklearn import datasets, svm
>>> from sklearn.kernel_approximation import Nystroem
>>> X, y = datasets.load_digits(n_class=9, return_X_y=True)
>>> data = X / 16.
>>> clf = svm.LinearSVC()
>>> feature_map_nystroem = Nystroem(gamma=.2,
...                                 random_state=1,
...                                 n_components=300)
>>> data_transformed = feature_map_nystroem.fit_transform(data)
>>> clf.fit(data_transformed, y)
LinearSVC()
>>> clf.score(data_transformed, y)
0.9987...
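A minimal follow-up sketch chaining the approximate feature map and the linear
classifier in a :class:`sklearn.pipeline.Pipeline` (the step names
``"feature_map"`` and ``"svm"`` are arbitrary, and the exact score may vary
slightly):

>>> from sklearn.pipeline import Pipeline
>>> pipe = Pipeline([("feature_map", Nystroem(gamma=.2,
...                                           random_state=1,
...                                           n_components=300)),
...                  ("svm", svm.LinearSVC())])
>>> _ = pipe.fit(data, y)  # assign to suppress the estimator repr
>>> pipe.score(data, y)
0.99...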
References
----------
* Williams, C.K.I. and Seeger, M.
  "Using the Nystroem method to speed up kernel machines",
  Advances in neural information processing systems 2001

* T. Yang, Y. Li, M. Mahdavi, R. Jin and Z. Zhou
  "Nystroem Method vs Random Fourier Features: A Theoretical and
  Empirical Comparison",
  Advances in Neural Information Processing Systems 2012
See also
--------
RBFSampler : An approximation to the RBF kernel using random Fourier
    features.

sklearn.metrics.pairwise.kernel_metrics : List of built-in kernels.