package sklearn

type t
val of_pyobject : Py.Object.t -> t
val to_pyobject : t -> Py.Object.t
val create : ?weights:Arr.t -> ?n_jobs:int -> estimators:(string * Py.Object.t) list -> unit -> t

Prediction voting regressor for unfitted estimators.

.. versionadded:: 0.21

A voting regressor is an ensemble meta-estimator that fits several base regressors, each on the whole dataset. Then it averages the individual predictions to form a final prediction.

Read more in the :ref:`User Guide <voting_regressor>`.

Parameters
----------
estimators : list of (str, estimator) tuples
    Invoking the ``fit`` method on the ``VotingRegressor`` will fit clones of those original estimators that will be stored in the class attribute ``self.estimators_``. An estimator can be set to ``'drop'`` using ``set_params``.

.. deprecated:: 0.22
   Using ``None`` to drop an estimator is deprecated in 0.22 and support will be dropped in 0.24. Use the string ``'drop'`` instead.

weights : array-like, shape (n_regressors,), optional (default=`None`)
    Sequence of weights (`float` or `int`) to weight the occurrences of predicted values before averaging. Uses uniform weights if `None`.

n_jobs : int or None, optional (default=None)
    The number of jobs to run in parallel for ``fit``. ``None`` means 1 unless in a :obj:`joblib.parallel_backend` context. ``-1`` means using all processors. See :term:`Glossary <n_jobs>` for more details.

Attributes
----------
estimators_ : list of regressors
    The collection of fitted sub-estimators as defined in ``estimators`` that are not 'drop'.

named_estimators_ : Bunch object, a dictionary with attribute access
    Attribute to access any fitted sub-estimators by name.

.. versionadded:: 0.20

See Also
--------
VotingClassifier : Soft Voting/Majority Rule classifier.

Examples
--------
>>> import numpy as np
>>> from sklearn.linear_model import LinearRegression
>>> from sklearn.ensemble import RandomForestRegressor
>>> from sklearn.ensemble import VotingRegressor
>>> r1 = LinearRegression()
>>> r2 = RandomForestRegressor(n_estimators=10, random_state=1)
>>> X = np.array([[1, 1], [2, 4], [3, 9], [4, 16], [5, 25], [6, 36]])
>>> y = np.array([2, 6, 12, 20, 30, 42])
>>> er = VotingRegressor([('lr', r1), ('rf', r2)])
>>> print(er.fit(X, y).predict(X))
[ 3.3  5.7 11.8 19.7 28.  40.3]
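
For reference, a rough OCaml counterpart of the Python doctest above, built only on the signatures shown on this page. It is a sketch, not a verified translation: the module path Sklearn.Ensemble.VotingRegressor, the Linear_model.LinearRegression and Ensemble.RandomForestRegressor wrappers, their ~n_estimators / ~random_state parameters, and the Arr.Float.matrix / Arr.Float.vector constructors are assumptions not documented here.

    (* Sketch: OCaml version of the Python example above; names outside this
       page's signatures are assumptions. *)
    open Sklearn

    let () =
      let x =
        Arr.Float.matrix
          [| [| 1.; 1. |]; [| 2.; 4. |]; [| 3.; 9. |];
             [| 4.; 16. |]; [| 5.; 25. |]; [| 6.; 36. |] |]
      in
      let y = Arr.Float.vector [| 2.; 6.; 12.; 20.; 30.; 42. |] in
      let r1 = Linear_model.LinearRegression.(create () |> to_pyobject) in
      let r2 =
        Ensemble.RandomForestRegressor.(
          create ~n_estimators:10 ~random_state:1 () |> to_pyobject)
      in
      let er =
        Ensemble.VotingRegressor.create ~estimators:[ "lr", r1; "rf", r2 ] ()
      in
      let er = Ensemble.VotingRegressor.fit ~x ~y er in
      (* Printing assumes Arr.pp exists, as it does for other wrappers. *)
      Format.printf "%a@." Arr.pp (Ensemble.VotingRegressor.predict ~x er)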

val fit : ?sample_weight:Arr.t -> x:Arr.t -> y:Arr.t -> t -> t

Fit the estimators.

Parameters
----------
X : {array-like, sparse matrix}, shape (n_samples, n_features)
    Training vectors, where n_samples is the number of samples and n_features is the number of features.

y : array-like, shape (n_samples,)
    Target values.

sample_weight : array-like, shape (n_samples,) or None
    Sample weights. If None, then samples are equally weighted. Note that this is supported only if all underlying estimators support sample weights.

Returns
-------
self : object
    Fitted estimator.
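
A minimal sketch of fit with per-sample weights, under the same assumptions as above (module paths and the Arr.Float helpers are not part of this page's signatures):

    (* Sketch: fitting with per-sample weights. *)
    open Sklearn

    let () =
      let x = Arr.Float.matrix [| [| 0. |]; [| 1. |]; [| 2. |]; [| 3. |] |] in
      let y = Arr.Float.vector [| 0.; 1.; 2.; 3. |] in
      let lr = Linear_model.LinearRegression.(create () |> to_pyobject) in
      let er = Ensemble.VotingRegressor.create ~estimators:[ "lr", lr ] () in
      (* sample_weight is honoured only if every underlying estimator supports it. *)
      let sample_weight = Arr.Float.vector [| 1.; 1.; 2.; 2. |] in
      let (_fitted : Ensemble.VotingRegressor.t) =
        Ensemble.VotingRegressor.fit ~sample_weight ~x ~y er
      in
      ()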

val fit_transform : ?y:Arr.t -> ?fit_params:(string * Py.Object.t) list -> x:Arr.t -> t -> Arr.t

Fit to data, then transform it.

Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.

Parameters
----------
X : numpy array of shape [n_samples, n_features]
    Training set.

y : numpy array of shape [n_samples]
    Target values.

**fit_params : dict
    Additional fit parameters.

Returns
-------
X_new : numpy array of shape [n_samples, n_features_new]
    Transformed array.
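
A minimal fit_transform sketch under the same assumed module paths and Arr helpers; with a single estimator the transformed array has one column of predictions:

    (* Sketch: fit, then return per-estimator predictions. *)
    open Sklearn

    let () =
      let x = Arr.Float.matrix [| [| 0. |]; [| 1. |]; [| 2. |] |] in
      let y = Arr.Float.vector [| 0.; 2.; 4. |] in
      let lr = Linear_model.LinearRegression.(create () |> to_pyobject) in
      let er = Ensemble.VotingRegressor.create ~estimators:[ "lr", lr ] () in
      (* With a single estimator the result is a (n_samples, 1) array. *)
      let x_new = Ensemble.VotingRegressor.fit_transform ~y ~x er in
      Format.printf "%a@." Arr.pp x_new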

val get_params : ?deep:bool -> t -> Py.Object.t

Get the parameters of an estimator from the ensemble.

Parameters
----------
deep : bool
    Setting it to True gets the various estimators and the parameters of the estimators as well.
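
A sketch of reading the hyper-parameters; converting the returned Python dict to a string with pyml's Py.Object.to_string is an assumption made for illustration, as are the module paths:

    (* Sketch: inspect the ensemble's parameters. *)
    open Sklearn

    let () =
      let lr = Linear_model.LinearRegression.(create () |> to_pyobject) in
      let er = Ensemble.VotingRegressor.create ~estimators:[ "lr", lr ] () in
      (* deep:true also lists nested parameters such as "lr__fit_intercept". *)
      let params = Ensemble.VotingRegressor.get_params ~deep:true er in
      print_endline (Py.Object.to_string params)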

val predict : x:Arr.t -> t -> Arr.t

Predict regression target for X.

The predicted regression target of an input sample is computed as the mean predicted regression targets of the estimators in the ensemble.

Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
    The input samples.

Returns
-------
y : array of shape (n_samples,)
    The predicted values.
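
A sketch of predicting on samples that were not used for fitting, under the same assumed module paths and Arr helpers:

    (* Sketch: predict on unseen inputs. *)
    open Sklearn

    let () =
      let x = Arr.Float.matrix [| [| 0. |]; [| 1. |]; [| 2. |]; [| 3. |] |] in
      let y = Arr.Float.vector [| 0.; 1.; 2.; 3. |] in
      let lr = Linear_model.LinearRegression.(create () |> to_pyobject) in
      let er =
        Ensemble.VotingRegressor.(create ~estimators:[ "lr", lr ] () |> fit ~x ~y)
      in
      (* Each value is the (weighted) mean of the base estimators' predictions. *)
      let x_test = Arr.Float.matrix [| [| 4. |]; [| 5. |] |] in
      Format.printf "%a@." Arr.pp (Ensemble.VotingRegressor.predict ~x:x_test er)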

val score : ?sample_weight:Arr.t -> x:Arr.t -> y:Arr.t -> t -> float

Return the coefficient of determination R^2 of the prediction.

The coefficient R^2 is defined as (1 - u/v), where u is the residual sum of squares ((y_true - y_pred) ** 2).sum() and v is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an R^2 score of 0.0.

Parameters
----------
X : array-like of shape (n_samples, n_features)
    Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead, shape = (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.

y : array-like of shape (n_samples,) or (n_samples, n_outputs)
    True values for X.

sample_weight : array-like of shape (n_samples,), default=None
    Sample weights.

Returns
-------
score : float
    R^2 of self.predict(X) wrt. y.

Notes
-----
The R^2 score used when calling ``score`` on a regressor will use ``multioutput='uniform_average'`` from version 0.23 to keep consistent with :func:`~sklearn.metrics.r2_score`. This will influence the ``score`` method of all the multioutput regressors (except for :class:`~sklearn.multioutput.MultiOutputRegressor`). To specify the default value manually and avoid the warning, please either call :func:`~sklearn.metrics.r2_score` directly or make a custom scorer with :func:`~sklearn.metrics.make_scorer` (the built-in scorer ``'r2'`` uses ``multioutput='uniform_average'``).
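
A sketch of scoring on the training data, under the same assumed module paths and Arr helpers; on exactly linear data the averaged prediction should reach an R^2 of 1.0:

    (* Sketch: coefficient of determination on the training set. *)
    open Sklearn

    let () =
      let x = Arr.Float.matrix [| [| 0. |]; [| 1. |]; [| 2. |]; [| 3. |] |] in
      let y = Arr.Float.vector [| 0.; 2.; 4.; 6. |] in
      let lr = Linear_model.LinearRegression.(create () |> to_pyobject) in
      let er =
        Ensemble.VotingRegressor.(create ~estimators:[ "lr", lr ] () |> fit ~x ~y)
      in
      (* The data are exactly linear, so R^2 should be 1.0 here. *)
      Printf.printf "R^2 = %f\n" (Ensemble.VotingRegressor.score ~x ~y er)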

val set_params : ?params:(string * Py.Object.t) list -> t -> t

Set the parameters of an estimator from the ensemble.

Valid parameter keys can be listed with `get_params()`.

Parameters
----------
**params : keyword arguments
    Specific parameters using e.g. `set_params(parameter_name=new_value)`. In addition to setting the parameters of the ensemble estimator, the individual estimators within the ensemble can also be set, or removed by setting them to 'drop'.
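
A sketch of removing one base estimator by name; building the Python string 'drop' with pyml's Py.String.of_string, and passing it through the ~params association list, are assumptions made for illustration:

    (* Sketch: drop the "rf" estimator from the ensemble. *)
    open Sklearn

    let () =
      let lr = Linear_model.LinearRegression.(create () |> to_pyobject) in
      let rf =
        Ensemble.RandomForestRegressor.(create ~n_estimators:10 () |> to_pyobject)
      in
      let er =
        Ensemble.VotingRegressor.create ~estimators:[ "lr", lr; "rf", rf ] ()
      in
      (* Intended to match er.set_params(rf='drop') in Python: "rf" is then
         ignored when the ensemble is fitted. *)
      let (_er : Ensemble.VotingRegressor.t) =
        Ensemble.VotingRegressor.set_params
          ~params:[ "rf", Py.String.of_string "drop" ] er
      in
      ()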

val transform : x:Arr.t -> t -> Arr.t

Return predictions for X for each estimator.

Parameters
----------
X : {array-like, sparse matrix}, shape (n_samples, n_features)
    The input samples.

Returns
-------
predictions : array of shape (n_samples, n_regressors)
    Values predicted by each regressor.
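
A transform sketch under the same assumed module paths and Arr helpers; each fitted regressor contributes one column of predictions, in the order given in ``estimators``:

    (* Sketch: per-estimator predictions. *)
    open Sklearn

    let () =
      let x = Arr.Float.matrix [| [| 0. |]; [| 1. |]; [| 2. |] |] in
      let y = Arr.Float.vector [| 1.; 3.; 5. |] in
      let lr = Linear_model.LinearRegression.(create () |> to_pyobject) in
      let rf =
        Ensemble.RandomForestRegressor.(create ~n_estimators:5 () |> to_pyobject)
      in
      let er =
        Ensemble.VotingRegressor.(
          create ~estimators:[ "lr", lr; "rf", rf ] () |> fit ~x ~y)
      in
      (* Shape (n_samples, 2): column 0 comes from "lr", column 1 from "rf". *)
      Format.printf "%a@." Arr.pp (Ensemble.VotingRegressor.transform ~x er)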

val estimators_ : t -> Py.Object.t

Attribute estimators_: get value or raise Not_found if None.

val estimators_opt : t -> Py.Object.t option

Attribute estimators_: get value as an option.

val named_estimators_ : t -> Dict.t

Attribute named_estimators_: get value or raise Not_found if None.

val named_estimators_opt : t -> Dict.t option

Attribute named_estimators_: get value as an option.
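
A sketch of the fitted-attribute accessors; the *_opt variants return an option instead of raising Not_found. Module paths and Arr helpers are assumptions as before:

    (* Sketch: read the fitted attributes after fit. *)
    open Sklearn

    let () =
      let x = Arr.Float.matrix [| [| 0. |]; [| 1. |] |] in
      let y = Arr.Float.vector [| 0.; 1. |] in
      let lr = Linear_model.LinearRegression.(create () |> to_pyobject) in
      let er =
        Ensemble.VotingRegressor.(create ~estimators:[ "lr", lr ] () |> fit ~x ~y)
      in
      (* estimators_ raises Not_found when unset; the option accessors do not. *)
      let (_ : Py.Object.t) = Ensemble.VotingRegressor.estimators_ er in
      match Ensemble.VotingRegressor.named_estimators_opt er with
      | Some (_ : Dict.t) -> print_endline "named_estimators_ is set"
      | None -> print_endline "named_estimators_ is not set"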

val to_string : t -> string

Return a human-readable string representation of the object.

val show : t -> string

Return a human-readable string representation of the object.

val pp : Format.formatter -> t -> unit

Pretty-print the object to a formatter.
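
A sketch of the printing helpers; show (and to_string) return the estimator's representation as a string, while pp plugs into the Format module. The module path is an assumption:

    (* Sketch: display the estimator. *)
    open Sklearn

    let () =
      let lr = Linear_model.LinearRegression.(create () |> to_pyobject) in
      let er = Ensemble.VotingRegressor.create ~estimators:[ "lr", lr ] () in
      (* show and to_string return the same representation as a string. *)
      print_endline (Ensemble.VotingRegressor.show er);
      (* pp can also be installed as a toplevel printer. *)
      Format.printf "%a@." Ensemble.VotingRegressor.pp er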
