# zoo
Classes:

Name | Description |
---|---|
LSTMClassifier | Rolling LSTM classifier with dynamic class expansion. |
LogisticRegression | Incremental logistic regression with optional dynamic class expansion. |
MultiLayerPerceptron | Configurable multi-layer perceptron with dynamic class expansion. |
RNNClassifier | Rolling RNN classifier with dynamic class expansion. |
## LSTMClassifier
LSTMClassifier(
n_features: int = 10,
hidden_size: int = 16,
n_init_classes: int = 2,
loss_fn: Union[str, Callable] = "cross_entropy",
optimizer_fn: Union[str, Type[Optimizer]] = "sgd",
lr: float = 0.001,
output_is_logit: bool = True,
is_feature_incremental: bool = False,
is_class_incremental: bool = True,
device: str = "cpu",
seed: int = 42,
gradient_clip_value: float | None = None,
**kwargs
)
Bases: RollingClassifier
Rolling LSTM classifier with dynamic class expansion.
An LSTM backbone feeds into a linear head that produces logits. Designed for
sequential/temporal streams processed via a rolling window (see
RollingClassifier). The output layer (head) expands when new classes are
observed (if enabled).
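A minimal sketch of the class-expansion behaviour (illustrative only; feature names are hypothetical and the predicted probabilities depend on the seeded initialisation)::

>>> from deep_river.classification.zoo import LSTMClassifier
>>> clf = LSTMClassifier(n_features=2, is_class_incremental=True)
>>> clf.learn_one({"f1": 1.0, "f2": 0.0}, "a")
>>> clf.learn_one({"f1": 0.0, "f2": 1.0}, "b")
>>> clf.learn_one({"f1": 0.5, "f2": 0.5}, "c")  # a third label arrives: the head expands
>>> sorted(clf.predict_proba_one({"f1": 1.0, "f2": 0.0}))  # doctest: +SKIP
['a', 'b', 'c']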
Parameters:

Name | Type | Description | Default |
---|---|---|---|
n_features | int | Number of input features per timestep. | 10 |
hidden_size | int | Hidden state dimensionality of the LSTM. | 16 |
n_init_classes | int | Initial number of output classes. | 2 |
loss_fn | str \| Callable | Training loss. | 'cross_entropy' |
optimizer_fn | str \| type | Optimizer specification. | 'sgd' |
lr | float | Learning rate. | 1e-3 |
output_is_logit | bool | Indicates outputs are logits (enables proper conversion in predict_proba_one/predict_proba_many). | True |
is_feature_incremental | bool | Whether to dynamically expand the input layer when new features appear. | False |
is_class_incremental | bool | Whether to expand the output layer for new class labels. | True |
device | str | Torch device. | 'cpu' |
seed | int | Random seed. | 42 |
gradient_clip_value | float \| None | Optional gradient norm clipping value. | None |
Examples:
Deterministic check on the Phishing data stream (for illustration only: the learning rate of 0 prevents any parameter updates, so predictions come from the seeded initialisation)::
>>> import torch, random, numpy as np
>>> from torch import manual_seed
>>> from river import datasets
>>> from river import metrics
>>> from deep_river.classification.zoo import LSTMClassifier
>>> _ = manual_seed(42); random.seed(42); np.random.seed(42)
>>> stream = datasets.Phishing()
>>> samples = {}
>>> for x, y in stream:
... if y not in samples:
... samples[y] = x
... if len(samples) == 2:
... break
>>> x0, x1 = samples[0], samples[1]
>>> n_features = len(x0)
>>> lstm_clf = LSTMClassifier(n_features=n_features, hidden_size=3, n_init_classes=2,
... is_class_incremental=False, is_feature_incremental=False,
... lr=0.0, optimizer_fn='sgd')
>>> lstm_clf.learn_one(x0, 0)
>>> acc = metrics.Accuracy()
>>> for i, (x, y) in enumerate(datasets.Phishing().take(200)):
... lstm_clf.learn_one(x, y)
... if i > 0:
... y_pred = lstm_clf.predict_one(x)
... acc.update(y, y_pred)
>>> assert 0.0 <= acc.get() <= 1.0
>>> print(f"Accuracy: {acc.get():.4f}") # doctest: +ELLIPSIS
Accuracy: ...
Methods:

Name | Description |
---|---|
clone | Return a fresh estimator instance with (optionally) copied state. |
draw | Render a (partial) computational graph of the wrapped model. |
learn_many | Batch update: extend window with rows of X and perform a step. |
learn_one | Learn from a single (x, y) updating the rolling window. |
load | Load a previously saved estimator. |
predict_proba_many | Return probability DataFrame for multiple samples with rolling context. |
predict_proba_one | Return class probability mapping for one sample using rolling context. |
save | Persist the estimator (architecture, weights, optimiser & runtime state). |
Source code in deep_river/classification/zoo.py
### clone

Return a fresh estimator instance with (optionally) copied state.

Parameters:

Name | Type | Description | Default |
---|---|---|---|
new_params | dict \| None | Parameter overrides for the cloned instance. | None |
include_attributes | bool | If True, runtime state (observed features, buffers) is also copied. | False |
copy_weights | bool | If True, model weights are copied (otherwise the module is re-initialised). | False |
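For example (a hedged sketch using only the parameters documented above)::

>>> from deep_river.classification.zoo import LSTMClassifier
>>> base = LSTMClassifier(n_features=8, lr=1e-3)
>>> tuned = base.clone(new_params={"lr": 1e-2})  # fresh weights, overridden learning rate
>>> twin = base.clone(copy_weights=True)         # same architecture, copied weights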
Source code in deep_river/base.py
### draw

Render a (partial) computational graph of the wrapped model.

Imports graphviz and torchviz lazily and raises an informative ImportError if these optional dependencies are not installed.

Source code in deep_river/base.py

### learn_many

Batch update: extend window with rows of X and perform a step.

Source code in deep_river/classification/rolling_classifier.py

### learn_one

Learn from a single (x, y) updating the rolling window.

Source code in deep_river/classification/rolling_classifier.py

### load (classmethod)

Load a previously saved estimator.

The method reconstructs the estimator class, its wrapped module, optimiser state and runtime information (feature names, buffers, etc.).

Source code in deep_river/base.py

### predict_proba_many

Return probability DataFrame for multiple samples with rolling context.

Source code in deep_river/classification/rolling_classifier.py

### predict_proba_one

Return class probability mapping for one sample using rolling context.

Source code in deep_river/classification/rolling_classifier.py

### save

Persist the estimator (architecture, weights, optimiser & runtime state).

Parameters:

Name | Type | Description | Default |
---|---|---|---|
filepath | str \| Path | Destination file. Parent directories are created automatically. | required |
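A persistence round-trip sketch (the file name is hypothetical; load is the classmethod documented above)::

>>> import tempfile
>>> from pathlib import Path
>>> from deep_river.classification.zoo import LSTMClassifier
>>> clf = LSTMClassifier(n_features=8)
>>> path = Path(tempfile.mkdtemp()) / "lstm_clf.pt"  # hypothetical destination
>>> clf.save(path)
>>> restored = LSTMClassifier.load(path)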
Source code in deep_river/base.py
## LogisticRegression
LogisticRegression(
n_features: int = 10,
n_init_classes: int = 2,
loss_fn: Union[str, Callable] = "cross_entropy",
optimizer_fn: Union[str, Type[Optimizer]] = "sgd",
lr: float = 0.001,
output_is_logit: bool = True,
is_feature_incremental: bool = False,
is_class_incremental: bool = True,
device: str = "cpu",
seed: int = 42,
gradient_clip_value: float | None = None,
**kwargs
)
Bases: Classifier
Incremental logistic regression with optional dynamic class expansion.
This variant outputs raw logits (no internal softmax) so that losses like
cross_entropy can be applied directly. The output layer can grow in
response to newly observed class labels when is_class_incremental=True.
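Converting logits to probabilities happens at prediction time (presumably via a softmax, given the cross_entropy pairing); a small standalone sketch of that conversion with made-up logit values::

>>> import torch
>>> logits = torch.tensor([2.0, -1.0])    # raw outputs of the linear head
>>> probs = torch.softmax(logits, dim=0)  # normalised class probabilities
>>> round(float(probs.sum()), 6)
1.0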
Parameters:

Name | Type | Description | Default |
---|---|---|---|
n_features | int | Initial number of input features. | 10 |
n_init_classes | int | Initial number of output units/classes. Expanded automatically if new classes appear and class incrementality is enabled. | 2 |
loss_fn | str \| Callable | Training loss. | 'cross_entropy' |
optimizer_fn | str \| type | Optimizer specification. | 'sgd' |
lr | float | Learning rate. | 1e-3 |
output_is_logit | bool | Indicates outputs are logits (enables proper conversion in predict_proba_one/predict_proba_many). | True |
is_feature_incremental | bool | Whether to dynamically expand the input layer when new features appear. | False |
is_class_incremental | bool | Whether to expand the output layer for new class labels. | True |
device | str | Torch device. | 'cpu' |
seed | int | Random seed. | 42 |
gradient_clip_value | float \| None | Optional gradient norm clipping value. | None |
**kwargs |  | Forwarded to the parent constructor. | {} |
Examples:
Streaming binary classification on the Phishing dataset. The exact Accuracy value may vary depending on library version and hardware::
>>> import random, numpy as np, torch
>>> from torch import manual_seed
>>> from river import datasets, metrics
>>> from deep_river.classification.zoo import LogisticRegression
>>> _ = manual_seed(42); random.seed(42); np.random.seed(42)
>>> first_x, _ = next(iter(datasets.Phishing()))
>>> clf = LogisticRegression(
... n_features=len(first_x), n_init_classes=2,
... optimizer_fn='sgd', lr=1e-2, is_class_incremental=True,
... )
>>> acc = metrics.Accuracy()
>>> for i, (x, y) in enumerate(datasets.Phishing().take(200)):
... clf.learn_one(x, y)
... if i > 0:
... y_pred = clf.predict_one(x)
... acc.update(y, y_pred)
>>> assert 0.5 <= acc.get() <= 1.0
>>> print(f"Accuracy: {acc.get():.4f}") # doctest: +ELLIPSIS
Accuracy: ...
Methods:

Name | Description |
---|---|
clone | Return a fresh estimator instance with (optionally) copied state. |
draw | Render a (partial) computational graph of the wrapped model. |
learn_many | Learn from a batch of instances. |
learn_one | Learn from a single instance. |
load | Load a previously saved estimator. |
predict_proba_many | Predict probabilities for a batch of instances. |
predict_proba_one | Predict class membership probabilities for one instance. |
save | Persist the estimator (architecture, weights, optimiser & runtime state). |
Source code in deep_river/classification/zoo.py
### clone

Return a fresh estimator instance with (optionally) copied state.

Parameters:

Name | Type | Description | Default |
---|---|---|---|
new_params | dict \| None | Parameter overrides for the cloned instance. | None |
include_attributes | bool | If True, runtime state (observed features, buffers) is also copied. | False |
copy_weights | bool | If True, model weights are copied (otherwise the module is re-initialised). | False |

Source code in deep_river/base.py

### draw

Render a (partial) computational graph of the wrapped model.

Imports graphviz and torchviz lazily and raises an informative ImportError if these optional dependencies are not installed.
Source code in deep_river/base.py
### learn_many

Learn from a batch of instances.

Parameters:

Name | Type | Description | Default |
---|---|---|---|
X | DataFrame | Batch of feature rows. | required |
y | Series | Corresponding labels. | required |
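A hedged batch-update sketch (column names are hypothetical; see predict_proba_many below)::

>>> import pandas as pd
>>> from deep_river.classification.zoo import LogisticRegression
>>> clf = LogisticRegression(n_features=2)
>>> X = pd.DataFrame({"f1": [0.1, 0.9], "f2": [0.2, 0.8]})
>>> y = pd.Series([0, 1])
>>> clf.learn_many(X, y)
>>> proba = clf.predict_proba_many(X)  # DataFrame; each row sums to 1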
Source code in deep_river/classification/classifier.py
### learn_one

Learn from a single instance.

Parameters:

Name | Type | Description | Default |
---|---|---|---|
x | dict | Feature dictionary. | required |
y | hashable | Class label. | required |

Source code in deep_river/classification/classifier.py

### load (classmethod)

Load a previously saved estimator.

The method reconstructs the estimator class, its wrapped module, optimiser state and runtime information (feature names, buffers, etc.).

Source code in deep_river/base.py

### predict_proba_many

Predict probabilities for a batch of instances.

Parameters:

Name | Type | Description | Default |
---|---|---|---|
X | DataFrame | Feature matrix. | required |

Returns:

Type | Description |
---|---|
DataFrame | Each row sums to 1 (multi-class) or has two columns for binary. |

Source code in deep_river/classification/classifier.py

### predict_proba_one

Predict class membership probabilities for one instance.

Parameters:

Name | Type | Description | Default |
---|---|---|---|
x | dict | Feature dictionary. | required |

Returns:

Type | Description |
---|---|
dict | Mapping from label -> probability. |

Source code in deep_river/classification/classifier.py

### save

Persist the estimator (architecture, weights, optimiser & runtime state).

Parameters:

Name | Type | Description | Default |
---|---|---|---|
filepath | str \| Path | Destination file. Parent directories are created automatically. | required |
Source code in deep_river/base.py
## MultiLayerPerceptron
MultiLayerPerceptron(
n_features: int = 10,
n_width: int = 5,
n_layers: int = 5,
n_init_classes: int = 2,
loss_fn: Union[str, Callable] = "cross_entropy",
optimizer_fn: Union[str, Type[Optimizer]] = "sgd",
lr: float = 0.001,
output_is_logit: bool = True,
is_feature_incremental: bool = False,
is_class_incremental: bool = True,
device: str = "cpu",
seed: int = 42,
gradient_clip_value: float | None = None,
**kwargs
)
Bases: Classifier
Configurable multi-layer perceptron with dynamic class expansion.
Hidden layers use ReLU activations; the output layer emits raw logits.
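Conceptually (a hedged sketch with standard torch.nn layers, not the library's actual module code), a network built with n_features=4, n_width=8, n_layers=2 and n_init_classes=2 resembles::

>>> import torch.nn as nn
>>> net = nn.Sequential(
...     nn.Linear(4, 8), nn.ReLU(),  # input -> hidden layer 1
...     nn.Linear(8, 8), nn.ReLU(),  # hidden layer 1 -> hidden layer 2
...     nn.Linear(8, 2),             # head emits raw logits, no softmax
... )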
Parameters:

Name | Type | Description | Default |
---|---|---|---|
n_features | int | Initial number of features. | 10 |
n_width | int | Width (units) of each hidden layer. | 5 |
n_layers | int | Number of hidden layers (>=1). If 1, only the input layer feeds the output. | 5 |
n_init_classes | int | Initial number of classes/output units. | 2 |
loss_fn | str \| Callable | Training loss. | 'cross_entropy' |
optimizer_fn | str \| type | Optimizer specification. | 'sgd' |
lr | float | Learning rate. | 1e-3 |
output_is_logit | bool | Indicates outputs are logits (enables proper conversion in predict_proba_one/predict_proba_many). | True |
is_feature_incremental | bool | Whether to dynamically expand the input layer when new features appear. | False |
is_class_incremental | bool | Whether to expand the output layer for new class labels. | True |
device | str | Torch device. | 'cpu' |
seed | int | Random seed. | 42 |
gradient_clip_value | float \| None | Optional gradient norm clipping value. | None |
**kwargs |  | Forwarded to the parent constructor. | {} |
Examples:
Phishing dataset stream with online Accuracy. The exact value may vary
depending on library version and hardware::
>>> import random, numpy as np, torch
>>> from torch import manual_seed
>>> from river import datasets, metrics
>>> from deep_river.classification.zoo import MultiLayerPerceptron
>>> _ = manual_seed(42); random.seed(42); np.random.seed(42)
>>> first_x, _ = next(iter(datasets.Phishing()))
>>> mlp = MultiLayerPerceptron(
... n_features=len(first_x), n_width=8, n_layers=2, n_init_classes=2,
... optimizer_fn='sgd', lr=5e-3, is_class_incremental=True,
... )
>>> acc = metrics.Accuracy()
>>> for i, (x, y) in enumerate(datasets.Phishing().take(200)):
... mlp.learn_one(x, y)
... if i > 0:
... y_pred = mlp.predict_one(x)
... acc.update(y, y_pred)
>>> assert 0.5 <= acc.get() <= 1.0
>>> print(f"Accuracy: {acc.get():.4f}") # doctest: +ELLIPSIS
Accuracy: ...
Methods:

Name | Description |
---|---|
clone | Return a fresh estimator instance with (optionally) copied state. |
draw | Render a (partial) computational graph of the wrapped model. |
learn_many | Learn from a batch of instances. |
learn_one | Learn from a single instance. |
load | Load a previously saved estimator. |
predict_proba_many | Predict probabilities for a batch of instances. |
predict_proba_one | Predict class membership probabilities for one instance. |
save | Persist the estimator (architecture, weights, optimiser & runtime state). |
Source code in deep_river/classification/zoo.py
### clone

Return a fresh estimator instance with (optionally) copied state.

Parameters:

Name | Type | Description | Default |
---|---|---|---|
new_params | dict \| None | Parameter overrides for the cloned instance. | None |
include_attributes | bool | If True, runtime state (observed features, buffers) is also copied. | False |
copy_weights | bool | If True, model weights are copied (otherwise the module is re-initialised). | False |

Source code in deep_river/base.py

### draw

Render a (partial) computational graph of the wrapped model.

Imports graphviz and torchviz lazily and raises an informative ImportError if these optional dependencies are not installed.

Source code in deep_river/base.py

### learn_many

Learn from a batch of instances.

Parameters:

Name | Type | Description | Default |
---|---|---|---|
X | DataFrame | Batch of feature rows. | required |
y | Series | Corresponding labels. | required |

Source code in deep_river/classification/classifier.py

### learn_one

Learn from a single instance.

Parameters:

Name | Type | Description | Default |
---|---|---|---|
x | dict | Feature dictionary. | required |
y | hashable | Class label. | required |

Source code in deep_river/classification/classifier.py

### load (classmethod)

Load a previously saved estimator.

The method reconstructs the estimator class, its wrapped module, optimiser state and runtime information (feature names, buffers, etc.).

Source code in deep_river/base.py

### predict_proba_many

Predict probabilities for a batch of instances.

Parameters:

Name | Type | Description | Default |
---|---|---|---|
X | DataFrame | Feature matrix. | required |

Returns:

Type | Description |
---|---|
DataFrame | Each row sums to 1 (multi-class) or has two columns for binary. |

Source code in deep_river/classification/classifier.py

### predict_proba_one

Predict class membership probabilities for one instance.

Parameters:

Name | Type | Description | Default |
---|---|---|---|
x | dict | Feature dictionary. | required |

Returns:

Type | Description |
---|---|
dict | Mapping from label -> probability. |

Source code in deep_river/classification/classifier.py

### save

Persist the estimator (architecture, weights, optimiser & runtime state).

Parameters:

Name | Type | Description | Default |
---|---|---|---|
filepath | str \| Path | Destination file. Parent directories are created automatically. | required |
Source code in deep_river/base.py
## RNNClassifier
RNNClassifier(
n_features: int = 10,
hidden_size: int = 16,
num_layers: int = 1,
nonlinearity: str = "tanh",
n_init_classes: int = 2,
loss_fn: Union[str, Callable] = "cross_entropy",
optimizer_fn: Union[str, Type[Optimizer]] = "adam",
lr: float = 0.001,
output_is_logit: bool = True,
is_feature_incremental: bool = False,
is_class_incremental: bool = True,
device: str = "cpu",
seed: int = 42,
gradient_clip_value: float | None = None,
**kwargs
)
Bases: RollingClassifier
Rolling RNN classifier with dynamic class expansion.
Uses a (stacked) nn.RNN backbone followed by a linear head that produces
raw logits. Designed for streaming sequential data via a fixed-size rolling
window handled by RollingClassifier.
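A construction sketch using only the documented parameters, here a two-layer ReLU variant with gradient clipping (values are illustrative)::

>>> from deep_river.classification.zoo import RNNClassifier
>>> clf = RNNClassifier(
...     n_features=8, hidden_size=16, num_layers=2,
...     nonlinearity="relu", gradient_clip_value=1.0,
... )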
Parameters:

Name | Type | Description | Default |
---|---|---|---|
n_features | int | Number of input features per timestep. | 10 |
hidden_size | int | Hidden state dimensionality of the RNN. | 16 |
num_layers | int | Number of stacked RNN layers. | 1 |
nonlinearity | str | Non-linearity used inside the RNN ('tanh' or 'relu'). | 'tanh' |
n_init_classes | int | Initial number of classes/output units. | 2 |
loss_fn | str \| Callable | Training loss. | 'cross_entropy' |
optimizer_fn | str \| type | Optimizer specification. | 'adam' |
lr | float | Learning rate. | 1e-3 |
output_is_logit | bool | Indicates outputs are logits (enables proper conversion in predict_proba_one/predict_proba_many). | True |
is_feature_incremental | bool | Whether to dynamically expand the input layer when new features appear. | False |
is_class_incremental | bool | Whether to expand the output layer for new class labels. | True |
device | str | Torch device. | 'cpu' |
seed | int | Random seed. | 42 |
gradient_clip_value | float \| None | Optional gradient norm clipping value. | None |
Examples:
>>> import torch, random, numpy as np
>>> from torch import manual_seed
>>> from river import metrics
>>> from river import datasets
>>> from deep_river.classification.zoo import RNNClassifier
>>> _ = manual_seed(42); random.seed(42); np.random.seed(42)
>>> stream = datasets.Phishing()
>>> samples = {}
>>> for x, y in stream:
... if y not in samples:
... samples[y] = x
... if len(samples) == 2:
... break
>>> x0, x1 = samples[0], samples[1]
>>> n_features = len(x0)
>>> rnn_clf = RNNClassifier(n_features=n_features, hidden_size=3, n_init_classes=2,
... is_class_incremental=False, is_feature_incremental=False)
>>> acc = metrics.Accuracy()
>>> for i, (x, y) in enumerate(datasets.Phishing().take(200)):
... rnn_clf.learn_one(x, y)
... if i > 0:
... y_pred = rnn_clf.predict_one(x)
... acc.update(y, y_pred)
>>> assert 0.0 <= acc.get() <= 1.0
>>> print(f"Accuracy: {acc.get():.4f}") # doctest: +ELLIPSIS
Accuracy: ...
Methods:

Name | Description |
---|---|
clone | Return a fresh estimator instance with (optionally) copied state. |
draw | Render a (partial) computational graph of the wrapped model. |
learn_many | Batch update: extend window with rows of X and perform a step. |
learn_one | Learn from a single (x, y) updating the rolling window. |
load | Load a previously saved estimator. |
predict_proba_many | Return probability DataFrame for multiple samples with rolling context. |
predict_proba_one | Return class probability mapping for one sample using rolling context. |
save | Persist the estimator (architecture, weights, optimiser & runtime state). |
Source code in deep_river/classification/zoo.py
### clone

Return a fresh estimator instance with (optionally) copied state.

Parameters:

Name | Type | Description | Default |
---|---|---|---|
new_params | dict \| None | Parameter overrides for the cloned instance. | None |
include_attributes | bool | If True, runtime state (observed features, buffers) is also copied. | False |
copy_weights | bool | If True, model weights are copied (otherwise the module is re-initialised). | False |

Source code in deep_river/base.py

### draw

Render a (partial) computational graph of the wrapped model.

Imports graphviz and torchviz lazily and raises an informative ImportError if these optional dependencies are not installed.

Source code in deep_river/base.py

### learn_many

Batch update: extend window with rows of X and perform a step.

Source code in deep_river/classification/rolling_classifier.py

### learn_one

Learn from a single (x, y) updating the rolling window.

Source code in deep_river/classification/rolling_classifier.py

### load (classmethod)

Load a previously saved estimator.

The method reconstructs the estimator class, its wrapped module, optimiser state and runtime information (feature names, buffers, etc.).

Source code in deep_river/base.py

### predict_proba_many

Return probability DataFrame for multiple samples with rolling context.

Source code in deep_river/classification/rolling_classifier.py

### predict_proba_one

Return class probability mapping for one sample using rolling context.

Source code in deep_river/classification/rolling_classifier.py

### save

Persist the estimator (architecture, weights, optimiser & runtime state).

Parameters:

Name | Type | Description | Default |
---|---|---|---|
filepath | str \| Path | Destination file. Parent directories are created automatically. | required |