classifier
Classes:
Name | Description |
---|---|
Classifier | Wrapper for PyTorch classification models that automatically handles increases in the number of classes. |
ClassifierInitialized | Wrapper for pre-initialized PyTorch classification models that automatically handles increases in the number of classes. |
Classifier
Classifier(
    module: Type[Module],
    loss_fn: Union[str, Callable] = "binary_cross_entropy_with_logits",
    optimizer_fn: Union[str, Callable] = "sgd",
    lr: float = 0.001,
    output_is_logit: bool = True,
    is_class_incremental: bool = False,
    is_feature_incremental: bool = False,
    device: str = "cpu",
    seed: int = 42,
    **kwargs
)
Bases: DeepEstimator, MiniBatchClassifier
Wrapper for PyTorch classification models that automatically handles increases in the number of classes by adding output neurons in case the number of observed classes exceeds the current number of output neurons.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
module | Type[Module] | Torch Module that builds the classifier to be wrapped. The Module should accept the parameter `n_features` so that the input layer can be sized to the number of features in the initial training example. | required |
loss_fn | Union[str, Callable] | Loss function to be used for training the wrapped model. Can be a loss function provided by `torch.nn.functional` or its string name. | 'binary_cross_entropy_with_logits' |
optimizer_fn | Union[str, Callable] | Optimizer to be used for training the wrapped model. Can be an optimizer class provided by `torch.optim` or its string name. | 'sgd' |
lr | float | Learning rate of the optimizer. | 0.001 |
output_is_logit | bool | Whether the module produces logits as output. If true, either softmax or sigmoid is applied to the outputs when predicting. | True |
is_class_incremental | bool | Whether the classifier should adapt to previously unobserved classes by adding a unit to the output layer of the network. This works only if the last trainable layer is an nn.Linear layer. Note also that output activation functions cannot be adapted, so a binary classifier with a sigmoid output cannot be altered to perform multi-class predictions. | False |
is_feature_incremental | bool | Whether the model should adapt to previously unobserved features by adding units to the input layer of the network. | False |
device | str | Device to run the wrapped model on. Can be "cpu" or "cuda". | 'cpu' |
seed | int | Random seed to be used for training the wrapped model. | 42 |
**kwargs | | Parameters to be passed to the `Module` or the optimizer. | {} |
Examples:
>>> from river import metrics, preprocessing, compose, datasets
>>> from deep_river.classification import Classifier
>>> from torch import nn
>>> from torch import manual_seed
>>> _ = manual_seed(42)
>>> class MyModule(nn.Module):
...     def __init__(self, n_features):
...         super(MyModule, self).__init__()
...         self.dense0 = nn.Linear(n_features, 5)
...         self.nlin = nn.ReLU()
...         self.dense1 = nn.Linear(5, 2)
...         self.softmax = nn.Softmax(dim=-1)
...
...     def forward(self, x, **kwargs):
...         x = self.nlin(self.dense0(x))
...         x = self.nlin(self.dense1(x))
...         x = self.softmax(x)
...         return x
>>> model_pipeline = compose.Pipeline(
...     preprocessing.StandardScaler(),
...     Classifier(module=MyModule,
...                loss_fn="binary_cross_entropy",
...                optimizer_fn="adam")
... )
>>> dataset = datasets.Phishing()
>>> metric = metrics.Accuracy()
>>> for x, y in dataset:
...     y_pred = model_pipeline.predict_one(x)  # make a prediction
...     metric.update(y, y_pred)  # update the metric
...     model_pipeline.learn_one(x, y)  # learn from the example
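Because class-incremental growth only works when the last trainable layer is an `nn.Linear`, a multi-class variant can reuse `MyModule` from the example above. A minimal sketch (it assumes `"cross_entropy"` is among the accepted loss-function string names):
>>> clf = Classifier(
...     module=MyModule,            # last trainable layer is nn.Linear
...     loss_fn="cross_entropy",    # multi-class loss
...     optimizer_fn="sgd",
...     is_class_incremental=True,  # grow the output layer when a new label appears
... )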
Methods:
Name | Description |
---|---|
clone | Clones the estimator. |
draw | Draws the wrapped model. |
initialize_module | Initializes the wrapped module. |
learn_many | Performs one step of training with a batch of examples. |
learn_one | Performs one step of training with a single example. |
predict_proba_many | Predict the probability of each label given the input. |
predict_proba_one | Predict the probability of each label given the input. |
Source code in deep_river/classification/classifier.py
clone
Clones the estimator.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
new_params | dict[Any, Any] \| None | New parameters to be passed to the cloned estimator. | None |
include_attributes | bool | If True, the attributes of the estimator will be copied to the cloned estimator. This is useful when the estimator is a transformer and the attributes are the learned parameters. | False |
Returns:
Type | Description |
---|---|
DeepEstimator | The cloned estimator. |
Source code in deep_river/base.py
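A minimal usage sketch (assuming `MyModule` from the example above; the `lr` override is illustrative):
>>> clf = Classifier(module=MyModule, loss_fn="binary_cross_entropy",
...                  optimizer_fn="adam")
>>> fresh = clf.clone()                          # same hyper-parameters, fresh state
>>> faster = clf.clone(new_params={"lr": 0.01})  # override a constructor argument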
draw
Draws the wrapped model.
Source code in deep_river/base.py
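A short sketch of how `draw` might be used. The wrapped module must be initialized before it can be drawn, so one training example is fed first (that a graphviz-style rendering backend is required is an assumption here):
>>> x, y = next(iter(datasets.Phishing()))
>>> clf = Classifier(module=MyModule, loss_fn="binary_cross_entropy",
...                  optimizer_fn="adam")
>>> clf.learn_one(x, y)  # initializes the wrapped module
>>> clf.draw()           # returns a drawable graph of the wrapped model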
initialize_module
Parameters:
Name | Type | Description | Default |
---|---|---|---|
module | | The instance, class, or callable to be initialized, e.g. `self.module`. | required |
kwargs | dict | The keyword arguments to initialize the instance or class. Can be an empty dict. | {} |
Returns:
Type | Description |
---|---|
instance | The initialized component. |
Source code in deep_river/base.py
learn_many
Performs one step of training with a batch of examples.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
X | DataFrame | Input examples. | required |
y | Series | Target values. | required |
Returns:
Type | Description |
---|---|
Classifier | The classifier itself. |
Source code in deep_river/classification/classifier.py
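A minimal mini-batch sketch (the feature names and values are hypothetical):
>>> import pandas as pd
>>> clf = Classifier(module=MyModule, loss_fn="binary_cross_entropy",
...                  optimizer_fn="adam")
>>> X = pd.DataFrame([{"f1": 0.1, "f2": 0.5},
...                   {"f1": 0.9, "f2": 0.2}])
>>> y = pd.Series([True, False])
>>> clf = clf.learn_many(X, y)  # one optimization step over the whole batch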
learn_one
Performs one step of training with a single example.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
x | dict | Input example. | required |
y | ClfTarget | Target value. | required |
Returns:
Type | Description |
---|---|
Classifier | The classifier itself. |
Source code in deep_river/classification/classifier.py
predict_proba_many
Predict the probability of each label given the input.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
x | DataFrame | Input examples. | required |
Returns:
Type | Description |
---|---|
DataFrame | DataFrame of probabilities for each label. |
Source code in deep_river/classification/classifier.py
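Continuing the mini-batch sketch from `learn_many` above:
>>> proba = clf.predict_proba_many(X)
>>> # one row per input example, one column per observed label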
predict_proba_one
Predict the probability of each label given the input.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
x | dict | Input example. | required |
Returns:
Type | Description |
---|---|
Dict[ClfTarget, float] | Dictionary of probabilities for each label. |
Source code in deep_river/classification/classifier.py
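A single-example sketch (the feature names are hypothetical and the commented output is illustrative only):
>>> proba = clf.predict_proba_one({"f1": 0.4, "f2": 0.3})
>>> # proba maps each observed label to a probability,
>>> # e.g. {False: 0.48, True: 0.52}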
ClassifierInitialized
ClassifierInitialized(
    module: Module,
    loss_fn: Union[str, Callable],
    optimizer_fn: Union[str, type],
    lr: float = 0.001,
    output_is_logit: bool = True,
    is_class_incremental: bool = False,
    is_feature_incremental: bool = False,
    device: str = "cpu",
    seed: int = 42,
    **kwargs
)
Bases: DeepEstimatorInitialized, MiniBatchClassifier
Wrapper for PyTorch classification models that automatically handles increases in the number of classes by adding output neurons in case the number of observed classes exceeds the current number of output neurons.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
module | Module | Pre-initialized Torch Module to be wrapped. | required |
loss_fn | Union[str, Callable] | Loss function to be used for training the wrapped model. Can be a loss function provided by `torch.nn.functional` or its string name. | required |
optimizer_fn | Union[str, type] | Optimizer to be used for training the wrapped model. Can be an optimizer class provided by `torch.optim` or its string name. | required |
lr | float | Learning rate of the optimizer. | 0.001 |
output_is_logit | bool | Whether the module produces logits as output. If true, either softmax or sigmoid is applied to the outputs when predicting. | True |
is_class_incremental | bool | Whether the classifier should adapt to previously unobserved classes by adding a unit to the output layer of the network. This works only if the last trainable layer is an nn.Linear layer. Note also that output activation functions cannot be adapted, so a binary classifier with a sigmoid output cannot be altered to perform multi-class predictions. | False |
is_feature_incremental | bool | Whether the model should adapt to previously unobserved features by adding units to the input layer of the network. | False |
device | str | Device to run the wrapped model on. Can be "cpu" or "cuda". | 'cpu' |
seed | int | Random seed to be used for training the wrapped model. | 42 |
**kwargs | | Parameters to be passed to the `Module` or the optimizer. | {} |
Examples:
>>> from river import metrics, preprocessing, compose, datasets
>>> from deep_river.classification import ClassifierInitialized
>>> from torch import nn
>>> from torch import manual_seed
>>> _ = manual_seed(42)
>>> class MyModule(nn.Module):
...     def __init__(self):
...         super(MyModule, self).__init__()
...         self.dense0 = nn.Linear(9, 5)  # Phishing has 9 features
...         self.nlin = nn.ReLU()
...         self.dense1 = nn.Linear(5, 2)
...         self.softmax = nn.Softmax(dim=-1)
...
...     def forward(self, x, **kwargs):
...         x = self.nlin(self.dense0(x))
...         x = self.nlin(self.dense1(x))
...         x = self.softmax(x)
...         return x
>>> model_pipeline = compose.Pipeline(
...     preprocessing.StandardScaler(),
...     ClassifierInitialized(module=MyModule(),  # an instance, not a class
...                           loss_fn="binary_cross_entropy",
...                           optimizer_fn="adam")
... )
>>> dataset = datasets.Phishing()
>>> metric = metrics.Accuracy()
>>> for x, y in dataset:
...     y_pred = model_pipeline.predict_one(x)  # make a prediction
...     metric.update(y, y_pred)  # update the metric
...     model_pipeline.learn_one(x, y)  # learn from the example
Methods:
Name | Description |
---|---|
learn_many | Updates the model with multiple instances for supervised learning. |
learn_one | Learns from a single example. |
predict_proba_many | Predicts probabilities for multiple examples. |
predict_proba_one | Predicts probabilities for a single example. |
Source code in deep_river/classification/classifier.py
learn_many
Updates the model with multiple instances for supervised learning.
The method records any previously unseen features and targets, converts the pandas DataFrame to a tensor, and then performs one internal training step on the batch.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
X | DataFrame | The data frame containing the instances to be learned. Each row represents a single instance and each column a feature. | required |
y | Series | The target values corresponding to the instances in `X`. | required |
Returns:
Type | Description |
---|---|
None | |
Source code in deep_river/classification/classifier.py
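A minimal mini-batch sketch (reusing `MyModule` from the example above; the feature values are hypothetical):
>>> import pandas as pd
>>> clf = ClassifierInitialized(module=MyModule(),
...                             loss_fn="binary_cross_entropy",
...                             optimizer_fn="adam")
>>> X = pd.DataFrame([[0.1] * 9, [0.9] * 9])  # two instances, 9 features
>>> y = pd.Series([True, False])
>>> clf.learn_many(X, y)  # one training step on the batch; returns None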
learn_one
Learns from a single example.
predict_proba_many
Predicts probabilities for multiple examples.
Source code in deep_river/classification/classifier.py
predict_proba_one
Predicts probabilities for a single example.
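A single-example sketch, continuing from the `learn_many` sketch above (the commented output is illustrative only):
>>> x = dict(zip(X.columns, [0.4] * 9))  # one instance as a feature dict
>>> proba = clf.predict_proba_one(x)
>>> # proba maps each label to a probability, e.g. {False: 0.47, True: 0.53}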