rolling_classifier
Classes:

Name | Description |
---|---|
RollingClassifier | Wrapper that feeds a sliding window of the most recent examples to the wrapped PyTorch classification model. |
RollingClassifierInitialized | Extends both ClassifierInitialized and RollingDeepEstimatorInitialized with a rolling window mechanism for sequential learning. |
RollingClassifier
RollingClassifier(
    module: Type[Module],
    loss_fn: Union[str, Callable] = "binary_cross_entropy_with_logits",
    optimizer_fn: Union[str, Callable] = "sgd",
    lr: float = 0.001,
    output_is_logit: bool = True,
    is_class_incremental: bool = False,
    is_feature_incremental: bool = False,
    device: str = "cpu",
    seed: int = 42,
    window_size: int = 10,
    append_predict: bool = False,
    **kwargs
)
Bases: Classifier, RollingDeepEstimator
Wrapper that feeds a sliding window of the most recent examples to the wrapped PyTorch classification model. The class also automatically handles increases in the number of classes by adding output neurons in case the number of observed classes exceeds the current number of output neurons.
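The window itself behaves like a fixed-length FIFO buffer. A minimal sketch of the idea using `collections.deque` (an illustration of the mechanism only, not the library's internal implementation):

>>> from collections import deque
>>> window = deque(maxlen=3)  # plays the role of window_size=3
>>> for x in ["x1", "x2", "x3", "x4"]:
...     window.append(x)  # once full, the oldest example is dropped
>>> list(window)
['x2', 'x3', 'x4']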
Parameters:

Name | Type | Description | Default |
---|---|---|---|
module | Type[Module] | Torch module class that builds the classifier to be wrapped. The class should accept an `n_features` parameter so that the input layer can be sized from the first training example. | required |
loss_fn | Union[str, Callable] | Loss function to be used for training the wrapped model. Can be a loss function provided by `torch.nn.functional` or one of the supported string aliases. | 'binary_cross_entropy_with_logits' |
optimizer_fn | Union[str, Callable] | Optimizer to be used for training the wrapped model. Can be an optimizer class provided by `torch.optim` or one of the supported string aliases. | 'sgd' |
lr | float | Learning rate of the optimizer. | 0.001 |
output_is_logit | bool | Indicates whether the model outputs logits or probabilities. | True |
is_class_incremental | bool | Whether the classifier should adapt to the appearance of previously unobserved classes by adding a unit to the output layer of the network. This works only if the last trainable layer is an nn.Linear layer. Note also that output activation functions cannot be adapted; a binary classifier with a sigmoid output cannot be altered to perform multi-class predictions. | False |
is_feature_incremental | bool | Whether the model should adapt to the appearance of previously unobserved features by adding units to the input layer of the network. | False |
device | str | Device to run the wrapped model on. Can be "cpu" or "cuda". | 'cpu' |
seed | int | Random seed to be used for training the wrapped model. | 42 |
window_size | int | Number of recent examples to be fed to the wrapped model at each step. | 10 |
append_predict | bool | Whether to append inputs passed for prediction to the rolling window. | False |
**kwargs | | Parameters to be passed to the wrapped module when it is initialized. | {} |
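For illustration, a minimal construction sketch; `MyModule` is a hypothetical module class that accepts the `n_features` parameter described above:

>>> import torch
>>> from deep_river.classification import RollingClassifier
>>> class MyModule(torch.nn.Module):
...     def __init__(self, n_features):
...         super().__init__()
...         self.dense = torch.nn.Linear(n_features, 2)  # sized from the first example
...     def forward(self, X, **kwargs):
...         return self.dense(X)  # logits; matches the default loss function
>>> clf = RollingClassifier(module=MyModule, window_size=20, lr=1e-2)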
Methods:

Name | Description |
---|---|
clone | Clones the estimator. |
draw | Draws the wrapped model. |
initialize_module | Initializes the wrapped module. |
learn_many | Performs one step of training with the most recent training examples stored in the sliding window. |
learn_one | Performs one step of training with the most recent training examples stored in the sliding window. |
predict_proba_many | Predict the probability of each label given the most recent examples stored in the sliding window. |
predict_proba_one | Predict the probability of each label given the most recent examples stored in the sliding window. |
Source code in deep_river/classification/rolling_classifier.py
clone
Clones the estimator.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
new_params | dict[Any, Any] \| None | New parameters to be passed to the cloned estimator. | None |
include_attributes | bool | If True, the attributes of the estimator will be copied to the cloned estimator. This is useful when the estimator is a transformer and the attributes are the learned parameters. | False |

Returns:

Type | Description |
---|---|
DeepEstimator | The cloned estimator. |
Source code in deep_river/base.py
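A usage sketch, assuming `clf` is an existing estimator; the clone shares the hyperparameters but not the learned state (unless `include_attributes=True`):

>>> fresh = clf.clone()  # same hyperparameters, training state reset
>>> tweaked = clf.clone(new_params={"lr": 1e-2})  # override selected parameters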
draw
Draws the wrapped model.
Source code in deep_river/base.py
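A usage sketch, assuming `clf` has already been initialized with data; the exact return type depends on the installed visualization backend:

>>> figure = clf.draw()  # visual representation of the wrapped network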
initialize_module
Parameters:

Name | Type | Description | Default |
---|---|---|---|
module | | The instance, class, or callable to be initialized. | required |
kwargs | dict | The keyword arguments to initialize the instance or class. Can be an empty dict. | {} |

Returns:

Type | Description |
---|---|
instance | The initialized component. |
Source code in deep_river/base.py
learn_many
Performs one step of training with the most recent training examples stored in the sliding window.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
X | DataFrame | Input examples. | required |
y | Series | Target values. | required |

Returns:

Type | Description |
---|---|
Classifier | The classifier itself. |
Source code in deep_river/classification/rolling_classifier.py
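A usage sketch with a hypothetical two-feature batch, assuming `clf` is a RollingClassifier as constructed above:

>>> import pandas as pd
>>> X = pd.DataFrame([{"f1": 0.1, "f2": 1.5}, {"f1": 0.3, "f2": 0.7}])
>>> y = pd.Series([0, 1])
>>> clf = clf.learn_many(X, y)  # rows enter the window before the training step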
learn_one
Performs one step of training with the most recent training examples stored in the sliding window.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
x | dict | Input example. | required |
y | ClfTarget | Target value. | required |

Returns:

Type | Description |
---|---|
Classifier | The classifier itself. |
Source code in deep_river/classification/rolling_classifier.py
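A usage sketch with a hypothetical feature dict:

>>> clf = clf.learn_one({"f1": 0.4, "f2": 1.1}, 1)  # appends to the window, then trains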
predict_proba_many
Predict the probability of each label given the most recent examples stored in the sliding window.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
x | DataFrame | Input examples. | required |

Returns:

Type | Description |
---|---|
DataFrame | DataFrame of probabilities for each label. |
Source code in deep_river/classification/rolling_classifier.py
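A usage sketch, reusing the hypothetical batch `X` from the learn_many example above:

>>> proba = clf.predict_proba_many(X)  # one row of per-class probabilities per input row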
predict_proba_one
Predict the probability of each label given the most recent examples stored in the sliding window.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
x | dict | Input example. | required |

Returns:

Type | Description |
---|---|
Dict[ClfTarget, float] | Dictionary of probabilities for each label. |
Source code in deep_river/classification/rolling_classifier.py
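A usage sketch with a hypothetical feature dict; the keys of the result are the class labels observed so far:

>>> proba = clf.predict_proba_one({"f1": 0.4, "f2": 1.1})
>>> label = max(proba, key=proba.get)  # most probable class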
RollingClassifierInitialized
RollingClassifierInitialized(
    module: Module,
    loss_fn: Union[str, Callable] = "binary_cross_entropy_with_logits",
    optimizer_fn: Union[str, Type[Optimizer]] = "sgd",
    lr: float = 0.001,
    output_is_logit: bool = True,
    is_class_incremental: bool = False,
    is_feature_incremental: bool = False,
    device: str = "cpu",
    seed: int = 42,
    window_size: int = 10,
    append_predict: bool = False,
    **kwargs
)
Bases: ClassifierInitialized, RollingDeepEstimatorInitialized
RollingClassifierInitialized extends both ClassifierInitialized and RollingDeepEstimatorInitialized, incorporating a rolling window mechanism for sequential learning in an evolving feature and class space.
This classifier dynamically adapts to new features and classes over time while leveraging a rolling window for training. It supports single-instance and batch learning while maintaining adaptability.
Attributes:

Name | Type | Description |
---|---|---|
module | Module | The PyTorch model used for classification. |
loss_fn | Union[str, Callable] | The loss function for training, defaulting to binary cross-entropy with logits. |
optimizer_fn | Union[str, Type[Optimizer]] | The optimizer function or class used for training. |
lr | float | The learning rate for optimization. |
output_is_logit | bool | Indicates whether the model outputs logits or probabilities. |
is_class_incremental | bool | Whether new classes should be dynamically added. |
is_feature_incremental | bool | Whether new features should be dynamically added. |
device | str | The computational device for training (e.g., "cpu", "cuda"). |
seed | int | The random seed for reproducibility. |
window_size | int | The number of past instances considered in the rolling window. |
append_predict | bool | Whether predictions should be appended to the rolling window. |
observed_classes | SortedSet | Tracks observed class labels for incremental learning. |
Examples:
>>> from deep_river.classification import RollingClassifier
>>> from river import metrics, preprocessing, datasets
>>> import torch
>>> class RnnModule(torch.nn.Module):
...     def __init__(self, n_features, hidden_size=1):
...         super().__init__()
...         self.n_features = n_features
...         self.rnn = torch.nn.RNN(
...             input_size=n_features, hidden_size=hidden_size, num_layers=1
...         )
...         self.softmax = torch.nn.Softmax(dim=-1)
...
...     def forward(self, X, **kwargs):
...         out, hn = self.rnn(X)  # the RNN returns the output sequence and the last hidden state
...         hn = hn.view(-1, self.rnn.hidden_size)
...         return self.softmax(hn)
>>> dataset = datasets.Keystroke()
>>> metric = metrics.Accuracy()
>>> model_pipeline = preprocessing.StandardScaler()
>>> model_pipeline |= RollingClassifier(
...     module=RnnModule,
...     loss_fn="binary_cross_entropy",
...     optimizer_fn=torch.optim.SGD,
...     window_size=20,
...     lr=1e-2,
...     append_predict=True,
...     is_class_incremental=False,
... )
>>> for x, y in dataset:
...     y_pred = model_pipeline.predict_one(x)  # make a prediction
...     metric.update(y, y_pred)  # update the metric
...     model_pipeline.learn_one(x, y)  # make the model learn
>>> print(f"Accuracy: {metric.get():.2f}")
Methods:

Name | Description |
---|---|
learn_many | Learns from multiple examples using the rolling window. |
learn_one | Learns from one example using the rolling window. |
predict_proba_many | Predicts probabilities for many examples. |
predict_proba_one | Predicts class probabilities using the rolling window. |
Source code in deep_river/classification/rolling_classifier.py
learn_many
Learns from multiple examples using the rolling window.
Source code in deep_river/classification/rolling_classifier.py
learn_one
Learns from one example using the rolling window.
Source code in deep_river/classification/rolling_classifier.py
predict_proba_many
Predicts probabilities for many examples.
Source code in deep_river/classification/rolling_classifier.py
predict_proba_one
Predicts class probabilities using the rolling window.