# rolling_classifier
Classes:

Name | Description |
---|---|
`RollingClassifier` | Rolling window variant of `Classifier`. |
## RollingClassifier
RollingClassifier(
    module: Module,
    loss_fn: Union[str, Callable] = "binary_cross_entropy_with_logits",
    optimizer_fn: Union[str, Type[Optimizer]] = "sgd",
    lr: float = 0.001,
    output_is_logit: bool = True,
    is_class_incremental: bool = False,
    is_feature_incremental: bool = False,
    device: str = "cpu",
    seed: int = 42,
    window_size: int = 10,
    append_predict: bool = False,
    gradient_clip_value: float | None = None,
    **kwargs
)
Bases: Classifier, RollingDeepEstimator

Rolling window variant of `Classifier`.
Maintains a fixed-size deque of the most recent observations (`window_size`) and feeds them as a temporal slice to the underlying module. This enables simple short-term sequence conditioning without explicit recurrent state handling on the user side. A rough sketch of this buffering idea follows below.
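As an illustration only (the helper name and exact tensor layout are assumptions, not the library's internals), the window mechanics could look like this:

```python
from collections import deque

import torch

# Hypothetical sketch of the rolling-window buffering described above; the
# real implementation lives in deep_river/classification/rolling_classifier.py.
window = deque(maxlen=10)  # window_size = 10

def window_to_tensor(window: deque) -> torch.Tensor:
    # Stack the buffered dict-observations into a (len(window), n_features)
    # tensor, i.e. the "temporal slice" handed to the wrapped module.
    rows = [torch.tensor(list(x.values()), dtype=torch.float32) for x in window]
    return torch.stack(rows)

window.append({"f1": 0.2, "f2": 1.0})
window.append({"f1": 0.4, "f2": 0.5})
print(window_to_tensor(window).shape)  # torch.Size([2, 2]) until the deque fills
```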
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`module` | `Module` | Classification module consuming the rolling window as a temporal slice of the most recent observations. | *required* |
`loss_fn` | `str \| Callable` | Loss identifier or callable. | `'binary_cross_entropy_with_logits'` |
`optimizer_fn` | `str \| type` | Optimizer specification. | `'sgd'` |
`lr` | `float` | Learning rate. | `1e-3` |
`output_is_logit` | `bool` | Whether raw logits are produced (so that a softmax can be applied when probabilities are requested). | `True` |
`is_class_incremental` | `bool` | Expand output layer when new class labels appear. | `False` |
`is_feature_incremental` | `bool` | Expand input layer when new feature names appear. | `False` |
`device` | `str` | Torch device. | `'cpu'` |
`seed` | `int` | Random seed. | `42` |
`window_size` | `int` | Number of past samples kept. | `10` |
`append_predict` | `bool` | If True, predictions are appended to the internal window during inference (useful for autoregressive generation). | `False` |
`gradient_clip_value` | `float \| None` | Optional gradient clipping threshold. | `None` |
`**kwargs` | | Forwarded to parent constructors. | `{}` |
Examples:

Streaming binary classification on the Phishing dataset with a tiny RNN.
For doctest stability we only assert that the final accuracy lies in `[0, 1]`.
>>> import random, numpy as np, torch
>>> from torch import nn, manual_seed
>>> from river import datasets, metrics
>>> from deep_river.classification import RollingClassifier
>>> _ = manual_seed(42); random.seed(42); np.random.seed(42)
>>> first_x, _ = next(iter(datasets.Phishing()))
>>> n_features = len(first_x)
>>> class TinyRNN(nn.Module):
...     def __init__(self, n_features):
...         super().__init__()
...         self.rnn = nn.RNN(n_features, 8)
...         self.head = nn.Linear(8, 2)
...     def forward(self, x):
...         out, _ = self.rnn(x)
...         return self.head(out[-1])  # logits of the last time step
>>> rclf = RollingClassifier(
...     module=TinyRNN(n_features),
...     loss_fn='cross_entropy',
...     optimizer_fn='sgd',
...     lr=5e-3,
...     window_size=8,
...     is_class_incremental=True,
... )
>>> acc = metrics.Accuracy()
>>> for i, (x, y) in enumerate(datasets.Phishing().take(200)):
...     if i > 0:
...         y_pred = rclf.predict_one(x)
...         acc.update(y, y_pred)
...     rclf.learn_one(x, y)
>>> print(f"Accuracy: {acc.get():.4f}")
Accuracy: ...
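The fitted model can also be queried for class probabilities, which use the same rolling context. A minimal continuation, asserting only the return type since exact values vary:

>>> proba = rclf.predict_proba_one(first_x)
>>> isinstance(proba, dict)
True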
Methods:

Name | Description |
---|---|
`clone` | Return a fresh estimator instance with (optionally) copied state. |
`draw` | Render a (partial) computational graph of the wrapped model. |
`learn_many` | Batch update: extend window with rows of X and perform a step. |
`learn_one` | Learn from a single (x, y) updating the rolling window. |
`load` | Load a previously saved estimator. |
`predict_proba_many` | Return probability DataFrame for multiple samples with rolling context. |
`predict_proba_one` | Return class probability mapping for one sample using rolling context. |
`save` | Persist the estimator (architecture, weights, optimiser & runtime state). |

Source code in deep_river/classification/rolling_classifier.py
### clone
Return a fresh estimator instance with (optionally) copied state.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`new_params` | `dict \| None` | Parameter overrides for the cloned instance. | `None` |
`include_attributes` | `bool` | If True, runtime state (observed features, buffers) is also copied. | `False` |
`copy_weights` | `bool` | If True, model weights are copied (otherwise the module is re-initialised). | `False` |
Source code in deep_river/base.py
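A brief usage sketch based only on the parameters documented above (the override value is illustrative):

```python
# New instance with an overridden learning rate; weights are re-initialised
# because copy_weights defaults to False.
fresh = rclf.clone(new_params={"lr": 1e-2})

# Clone that also carries over runtime state (observed features, buffers)
# and the current model weights.
warm = rclf.clone(include_attributes=True, copy_weights=True)
```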
### draw
Render a (partial) computational graph of the wrapped model.
Imports `graphviz` and `torchviz` lazily and raises an informative `ImportError` if these optional dependencies are not installed.
Source code in deep_river/base.py
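A cautious sketch, assuming `draw` takes no required arguments and returns a renderable graph object:

```python
# Requires the optional extras first: pip install graphviz torchviz
graph = rclf.draw()  # assumed to return a graphviz-style graph object
```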
### learn_many
Batch update: extend window with rows of X and perform a step.
Source code in deep_river/classification/rolling_classifier.py
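A sketch of one batch step, assuming the usual river convention of a pandas `DataFrame`/`Series` pair (the feature names and values are illustrative):

```python
import pandas as pd

# Two illustrative samples; each row extends the rolling window.
X = pd.DataFrame({"f1": [0.1, 0.9], "f2": [1.0, 0.0]})
y = pd.Series([True, False])

rclf.learn_many(X, y)  # extend the window with the rows of X, then one step
```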
### learn_one

Learn from a single `(x, y)` pair, updating the rolling window.
Source code in deep_river/classification/rolling_classifier.py
### load (classmethod)
Load a previously saved estimator.
The method reconstructs the estimator class, its wrapped module, optimiser state and runtime information (feature names, buffers, etc.).
Source code in deep_river/base.py
### predict_proba_many
Return probability DataFrame for multiple samples with rolling context.
Source code in deep_river/classification/rolling_classifier.py
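Continuing the hypothetical batch above, the probability variant might be used like this (the column labels depend on the classes observed so far):

```python
proba_df = rclf.predict_proba_many(X)  # one row of class probabilities per sample
print(proba_df.head())
```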
### predict_proba_one
Return class probability mapping for one sample using rolling context.
Source code in deep_river/classification/rolling_classifier.py
### save
Persist the estimator (architecture, weights, optimiser & runtime state).
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`filepath` | `str \| Path` | Destination file. Parent directories are created automatically. | *required* |
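A hedged round-trip sketch (the path is illustrative; `load` is assumed to accept the same path that was passed to `save`):

```python
from pathlib import Path

path = Path("models/rolling_clf.pt")  # illustrative destination
rclf.save(path)  # parent directories are created automatically

restored = RollingClassifier.load(path)  # classmethod: rebuilds the module,
                                         # optimiser state and runtime info
```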