package sklearn

type tag = [
  | `KernelDensity
]
type t = [ `BaseEstimator | `KernelDensity | `Object ] Obj.t
val of_pyobject : Py.Object.t -> t
val to_pyobject : [> tag ] Obj.t -> Py.Object.t
val as_estimator : t -> [ `BaseEstimator ] Obj.t
val create : ?bandwidth:float -> ?algorithm:string -> ?kernel:string -> ?metric:string -> ?atol:float -> ?rtol:float -> ?breadth_first:bool -> ?leaf_size:int -> ?metric_params:Dict.t -> unit -> t

Kernel Density Estimation.

Read more in the :ref:`User Guide <kernel_density>`.

Parameters
----------
bandwidth : float
    The bandwidth of the kernel.

algorithm : str
    The tree algorithm to use. Valid options are
    'kd_tree'|'ball_tree'|'auto'. Default is 'auto'.

kernel : str
    The kernel to use. Valid kernels are
    'gaussian'|'tophat'|'epanechnikov'|'exponential'|'linear'|'cosine'.
    Default is 'gaussian'.

metric : str
    The distance metric to use. Note that not all metrics are valid with
    all algorithms. Refer to the documentation of :class:`BallTree` and
    :class:`KDTree` for a description of available algorithms. Note that
    the normalization of the density output is correct only for the
    Euclidean distance metric. Default is 'euclidean'.

atol : float
    The desired absolute tolerance of the result. A larger tolerance will
    generally lead to faster execution. Default is 0.

rtol : float
    The desired relative tolerance of the result. A larger tolerance will
    generally lead to faster execution. Default is 1E-8.

breadth_first : bool
    If true (default), use a breadth-first approach to the problem.
    Otherwise use a depth-first approach.

leaf_size : int
    Specify the leaf size of the underlying tree. See :class:`BallTree`
    or :class:`KDTree` for details. Default is 40.

metric_params : dict
    Additional parameters to be passed to the tree for use with the
    metric. For more information, see the documentation of
    :class:`BallTree` or :class:`KDTree`.

See Also
--------
sklearn.neighbors.KDTree : K-dimensional tree for fast generalized
    N-point problems.
sklearn.neighbors.BallTree : Ball tree for fast generalized N-point
    problems.

Examples
--------
Compute a gaussian kernel density estimate with a fixed bandwidth.

>>> import numpy as np
>>> rng = np.random.RandomState(42)
>>> X = rng.random_sample((100, 3))
>>> kde = KernelDensity(kernel='gaussian', bandwidth=0.5).fit(X)
>>> log_density = kde.score_samples(X[:3])
>>> log_density
array([-1.52955942, -1.51462041, -1.60244657])
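For comparison, a minimal sketch of the same flow through these bindings, chaining the create, fit and score_samples values documented on this page. The Sklearn.Neighbors module path, the Np.matrixf matrix constructor and the Np.Obj.to_pyobject conversion are assumptions about the surrounding library, not confirmed by this page:

(* Sketch: Gaussian KDE with a fixed bandwidth, mirroring the Python
   example above. Np.matrixf (building an ndarray from a float array
   array) and Np.Obj.to_pyobject are assumed helpers. *)
let () =
  let open Sklearn.Neighbors in
  let x = Np.matrixf [| [| 0.1; 0.2; 0.3 |];
                        [| 0.4; 0.5; 0.6 |];
                        [| 0.7; 0.8; 0.9 |] |] in
  let kde = KernelDensity.(create ~kernel:"gaussian" ~bandwidth:0.5 ()
                           |> fit ~x) in
  (* Per-point log densities of the training data. *)
  let log_density = KernelDensity.score_samples ~x kde in
  print_endline (Py.Object.to_string (Np.Obj.to_pyobject log_density))

Note that fit returns the estimator itself, so construction and fitting compose with |>.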

val fit : ?y:Py.Object.t -> ?sample_weight:[> `ArrayLike ] Np.Obj.t -> x:[> `ArrayLike ] Np.Obj.t -> [> tag ] Obj.t -> t

Fit the Kernel Density model on the data.

Parameters
----------
X : array_like, shape (n_samples, n_features)
    List of n_features-dimensional data points. Each row corresponds to a
    single data point.

y : None
    Ignored. This parameter exists only for compatibility with
    :class:`sklearn.pipeline.Pipeline`.

sample_weight : array_like, shape (n_samples,), optional
    List of sample weights attached to the data X.

    .. versionadded:: 0.20

Returns
-------
self : object
    Returns instance of object.
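A sketch of fitting with per-sample weights through the optional ?sample_weight argument, again assuming Np.matrixf and Np.vectorf array constructors:

(* Sketch: give the middle point twice the weight of the others.
   Np.matrixf / Np.vectorf are assumed ndarray constructors. *)
let weighted_kde =
  let x = Np.matrixf [| [| 0.0 |]; [| 0.5 |]; [| 1.0 |] |] in
  let sample_weight = Np.vectorf [| 1.0; 2.0; 1.0 |] in
  Sklearn.Neighbors.KernelDensity.(
    create ~bandwidth:0.25 () |> fit ~x ~sample_weight)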

val get_params : ?deep:bool -> [> tag ] Obj.t -> Dict.t

Get parameters for this estimator.

Parameters
----------
deep : bool, default=True
    If True, will return the parameters for this estimator and contained
    subobjects that are estimators.

Returns
-------
params : mapping of string to any
    Parameter names mapped to their values.
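As a sketch, the returned Dict.t can be converted back to a Python object for display; the Sklearn.Dict.to_pyobject conversion is an assumption here, only get_params itself comes from the signature above:

(* Sketch: dump an estimator's parameters.
   Sklearn.Dict.to_pyobject is an assumed conversion helper. *)
let show_params kde =
  let params = Sklearn.Neighbors.KernelDensity.get_params ~deep:true kde in
  print_endline (Py.Object.to_string (Sklearn.Dict.to_pyobject params))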

val sample : ?n_samples:int -> ?random_state:int -> [> tag ] Obj.t -> [> `ArrayLike ] Np.Obj.t

Generate random samples from the model.

Currently, this is implemented only for gaussian and tophat kernels.

Parameters
----------
n_samples : int, optional
    Number of samples to generate. Defaults to 1.

random_state : int, RandomState instance, default=None
    Determines random number generation used to generate random samples.
    Pass an int for reproducible results across multiple function calls.
    See :term:`Glossary <random_state>`.

Returns
-------
X : array_like, shape (n_samples, n_features)
    List of samples.
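A sketch of drawing new points from a fitted model (gaussian or tophat kernels only, per the note above); Np.Obj.to_pyobject is an assumed conversion used only for display:

(* Sketch: draw 5 fresh points from the fitted density, reproducibly. *)
let draw_points kde =
  let points =
    Sklearn.Neighbors.KernelDensity.sample ~n_samples:5 ~random_state:42 kde
  in
  print_endline (Py.Object.to_string (Np.Obj.to_pyobject points))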

val score : ?y:Py.Object.t -> x:[> `ArrayLike ] Np.Obj.t -> [> tag ] Obj.t -> float

Compute the total log probability density under the model.

Parameters
----------
X : array_like, shape (n_samples, n_features)
    List of n_features-dimensional data points. Each row corresponds to a
    single data point.

y : None
    Ignored. This parameter exists only for compatibility with
    :class:`sklearn.pipeline.Pipeline`.

Returns
-------
logprob : float
    Total log-likelihood of the data in X. This is normalized to be a
    probability density, so the value will be low for high-dimensional
    data.
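Because score returns a plain OCaml float, it can drive a hand-rolled comparison directly; a sketch, assuming the Np.matrixf constructor as before:

(* Sketch: compare total log-likelihood under two bandwidths. *)
let () =
  let open Sklearn.Neighbors in
  let x = Np.matrixf [| [| 0.0 |]; [| 0.1 |]; [| 0.9 |]; [| 1.0 |] |] in
  let loglik bw =
    KernelDensity.(create ~bandwidth:bw () |> fit ~x |> score ~x) in
  Printf.printf "bw=0.1: %f  bw=1.0: %f\n" (loglik 0.1) (loglik 1.0)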

val score_samples : x:[> `ArrayLike ] Np.Obj.t -> [> tag ] Obj.t -> [> `ArrayLike ] Np.Obj.t

Evaluate the log density model on the data.

Parameters
----------
X : array_like, shape (n_samples, n_features)
    An array of points to query. Last dimension should match dimension of
    training data (n_features).

Returns
-------
density : ndarray, shape (n_samples,)
    The array of log(density) evaluations. These are normalized to be
    probability densities, so values will be low for high-dimensional
    data.
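The query points need not be the training data; a sketch of evaluating the log-density on a fresh grid, with the same assumed helpers as above:

(* Sketch: evaluate log-density at query points distinct from the
   training set. Np.matrixf / Np.Obj.to_pyobject are assumed helpers. *)
let log_density_at kde =
  let query = Np.matrixf [| [| 0.25 |]; [| 0.50 |]; [| 0.75 |] |] in
  let ld = Sklearn.Neighbors.KernelDensity.score_samples ~x:query kde in
  print_endline (Py.Object.to_string (Np.Obj.to_pyobject ld))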

val set_params : ?params:(string * Py.Object.t) list -> [> tag ] Obj.t -> t

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form ``<component>__<parameter>`` so that it's possible to update each component of a nested object.

Parameters
----------
**params : dict
    Estimator parameters.

Returns
-------
self : object
    Estimator instance.
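In this binding params is a plain (string * Py.Object.t) list, so values are wrapped with pyml converters such as Py.Float.of_float and Py.String.of_string; a sketch:

(* Sketch: retune an existing estimator. The keys must be valid
   scikit-learn parameter names for KernelDensity. *)
let narrow kde =
  Sklearn.Neighbors.KernelDensity.set_params
    ~params:[ ("bandwidth", Py.Float.of_float 0.2);
              ("kernel", Py.String.of_string "tophat") ]
    kde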

val to_string : t -> string

Return a human-readable representation of the object.

val show : t -> string

Return a human-readable representation of the object.

val pp : Format.formatter -> t -> unit

Pretty-print the object to a formatter.
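pp plugs into the standard Format machinery, so a fitted estimator prints with %a; show and to_string return the same text as a string:

(* Print the estimator's representation via Format, then as a string. *)
let report kde =
  Format.printf "%a@." Sklearn.Neighbors.KernelDensity.pp kde;
  print_endline (Sklearn.Neighbors.KernelDensity.show kde)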
