zoo
Classes:
Name | Description |
---|---|
LSTMRegressor | Rolling LSTM regressor for sequential / time-series data. |
LinearRegression | Incremental linear regression with optional feature growth and gradient clipping. |
MultiLayerPerceptron | Multi-layer perceptron regressor with optional feature growth. |
RNNRegressor | Rolling RNN regressor for sequential / time-series data. |
LSTMRegressor
LSTMRegressor(
n_features: int = 10,
hidden_size: int = 32,
num_layers: int = 1,
dropout: float = 0.0,
gradient_clip_value: float | None = 1.0,
loss_fn: Union[str, Callable] = "mse",
optimizer_fn: Union[str, Type[Optimizer]] = "adam",
lr: float = 0.001,
is_feature_incremental: bool = False,
device: str = "cpu",
seed: int = 42,
**kwargs
)
Bases: RollingRegressor
Rolling LSTM regressor for sequential / time-series data.
Improves over a naïve single-unit LSTM by separating the hidden representation (hidden_size) from the 1D regression output head. Supports optional dropout and multiple LSTM layers. Designed to work with the rolling window maintained by RollingDeepEstimator.
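Conceptually, the wrapped module is an nn.LSTM followed by a linear head that maps the last hidden state to a single output. The sketch below is illustrative only (plain PyTorch, not the library's internal module); shapes assume the default sequence-first layout of nn.LSTM:
>>> import torch
>>> from torch import nn
>>> class TinyLSTMHead(nn.Module):   # illustrative, not deep_river's exact module
...     def __init__(self, n_features, hidden_size=32, num_layers=1, dropout=0.0):
...         super().__init__()
...         self.lstm = nn.LSTM(n_features, hidden_size, num_layers=num_layers)
...         self.dropout = nn.Dropout(dropout)
...         self.head = nn.Linear(hidden_size, 1)
...     def forward(self, x):              # x: (seq_len, batch, n_features)
...         out, _ = self.lstm(x)          # out: (seq_len, batch, hidden_size)
...         return self.head(self.dropout(out[-1]))  # last timestep -> (batch, 1)
>>> TinyLSTMHead(n_features=4)(torch.randn(5, 1, 4)).shape
torch.Size([1, 1])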
Parameters:
Name | Type | Description | Default |
---|---|---|---|
n_features | int | Number of input features per timestep (may grow if feature-incremental). | 10 |
hidden_size | int | Dimensionality of the LSTM hidden state. | 32 |
num_layers | int | Number of stacked LSTM layers. | 1 |
dropout | float | Dropout probability applied after the LSTM (and internally by PyTorch between layers when num_layers > 1). | 0.0 |
gradient_clip_value | float | None | Gradient norm clipping threshold (helps stability). | 1.0 |
loss_fn | str | Callable | Loss function used for optimisation. | 'mse' |
optimizer_fn | str | Type[Optimizer] | Optimizer class or name. | 'adam' |
lr | float | Learning rate. | 0.001 |
is_feature_incremental | bool | Whether to expand the input layer when new features appear. | False |
device | str | Torch device. | 'cpu' |
seed | int | Random seed. | 42 |
**kwargs | | Forwarded to the underlying base estimator. | {} |
Examples:
Streaming regression on the Bikes dataset (only numeric features kept). The exact MAE value may vary depending on library version and hardware:
>>> import random, numpy as np, torch
>>> from torch import manual_seed
>>> from river import datasets, metrics
>>> from deep_river.regression.zoo import LSTMRegressor
>>> _ = manual_seed(42); random.seed(42); np.random.seed(42)
>>> first_x, _ = next(iter(datasets.Bikes()))
>>> numeric_keys = sorted([k for k,v in first_x.items() if isinstance(v,(int,float))])
>>> reg = LSTMRegressor(
... n_features=len(numeric_keys), hidden_size=8, num_layers=1,
... optimizer_fn='sgd', lr=1e-2, is_feature_incremental=True,
... )
>>> mae = metrics.MAE()
>>> for i, (x, y) in enumerate(datasets.Bikes().take(200)):
... x_num = {k: x[k] for k in numeric_keys}
... if i > 0:
... y_pred = reg.predict_one(x_num)
... mae.update(y, y_pred)
... reg.learn_one(x_num, y)
>>> assert 0.0 <= mae.get() < 20.0
>>> print(f"MAE: {mae.get():.4f}") # doctest: +ELLIPSIS
MAE: ...
Methods:
Name | Description |
---|---|
clone | Return a fresh estimator instance with (optionally) copied state. |
draw | Render a (partial) computational graph of the wrapped model. |
learn_many | Batch update with multiple samples using the rolling window. |
learn_one | Update model using a single (x, y) and current rolling window. |
load | Load a previously saved estimator. |
predict_many | Predict targets for multiple samples (appends to a copy of the window). |
predict_one | Predict a single regression target using rolling context. |
save | Persist the estimator (architecture, weights, optimiser & runtime state). |
Source code in deep_river/regression/zoo.py
clone
Return a fresh estimator instance with (optionally) copied state.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
new_params | dict | None | Parameter overrides for the cloned instance. | None |
include_attributes | bool | If True, runtime state (observed features, buffers) is also copied. | False |
copy_weights | bool | If True, model weights are copied (otherwise the module is re-initialised). | False |
Source code in deep_river/base.py
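A brief usage sketch (parameter values are hypothetical; new_params, include_attributes and copy_weights are the arguments documented above):
>>> from deep_river.regression.zoo import LSTMRegressor
>>> base = LSTMRegressor(n_features=4, hidden_size=8, lr=1e-3)
>>> # Fresh instance with a different learning rate; runtime state and weights
>>> # are not carried over because both flags default to False.
>>> tuned = base.clone(new_params={"lr": 1e-2})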
draw
Render a (partial) computational graph of the wrapped model.
Imports graphviz and torchviz lazily. Raises an informative ImportError if the optional dependencies are not installed.
Source code in deep_river/base.py
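A hedged usage sketch; depending on the version, the module may need to have processed at least one sample before a graph can be traced, so a single learn_one call is shown first (feature names are made up):
>>> from deep_river.regression.zoo import LSTMRegressor
>>> reg = LSTMRegressor(n_features=2, hidden_size=8)
>>> _ = reg.learn_one({"temp": 12.0, "humidity": 0.7}, 110.0)
>>> try:
...     graph = reg.draw()   # renderable graph when graphviz and torchviz are installed
... except ImportError:
...     graph = None         # optional dependencies missing (documented behaviour)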
learn_many
Batch update with multiple samples using the rolling window.
Only performs an optimisation step once the internal window has reached window_size length, to ensure a full sequence is available.
Source code in deep_river/regression/rolling_regressor.py
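A minimal sketch, assuming learn_many accepts a pandas DataFrame X and an aligned Series y, as is conventional for *_many methods in river-style estimators (the feature names below are made up):
>>> import pandas as pd
>>> from deep_river.regression.zoo import LSTMRegressor
>>> reg = LSTMRegressor(n_features=2, hidden_size=8)
>>> X = pd.DataFrame({"temp": [12.0, 13.5, 11.8], "humidity": [0.70, 0.65, 0.80]})
>>> y = pd.Series([110.0, 95.0, 130.0])
>>> _ = reg.learn_many(X, y)   # rows extend the window; no optimisation until it holds window_size samples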
learn_one
Update model using a single (x, y) pair and the current rolling window.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
x | dict | Feature mapping. | required |
y | float | Target value. | required |
Source code in deep_river/regression/rolling_regressor.py
load
classmethod
Load a previously saved estimator.
The method reconstructs the estimator class, its wrapped module, optimiser state and runtime information (feature names, buffers, etc.).
Source code in deep_river/base.py
predict_many
Predict targets for multiple samples (appends to a copy of the window).
Returns a single-column DataFrame named 'y_pred'.
Source code in deep_river/regression/rolling_regressor.py
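A minimal sketch, again assuming a pandas DataFrame input (column names are made up):
>>> import pandas as pd
>>> from deep_river.regression.zoo import LSTMRegressor
>>> reg = LSTMRegressor(n_features=2, hidden_size=8)
>>> X_new = pd.DataFrame({"temp": [14.2, 9.5], "humidity": [0.60, 0.90]})
>>> preds = reg.predict_many(X_new)   # works on a copy of the window, so the estimator state is unchanged
>>> list(preds.columns)
['y_pred']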
predict_one
Predict a single regression target using rolling context.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
x | dict | Feature mapping. | required |
Returns:
Type | Description |
---|---|
float | Predicted target value. |
Source code in deep_river/regression/rolling_regressor.py
save
Persist the estimator (architecture, weights, optimiser & runtime state).
Parameters:
Name | Type | Description | Default |
---|---|---|---|
filepath | str | Path | Destination file. Parent directories are created automatically. | required |
Source code in deep_river/base.py
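A round-trip sketch; the file name and extension are arbitrary, and the on-disk format is an internal detail:
>>> import os, tempfile
>>> from deep_river.regression.zoo import LSTMRegressor
>>> reg = LSTMRegressor(n_features=4, hidden_size=8)
>>> path = os.path.join(tempfile.mkdtemp(), "lstm_regressor.pkl")
>>> reg.save(path)                        # parent directories are created automatically
>>> restored = LSTMRegressor.load(path)   # rebuilds module, optimiser and runtime state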
LinearRegression
LinearRegression(
n_features: int = 10,
loss_fn: Union[str, Callable] = "mse",
optimizer_fn: Union[str, Type[Optimizer]] = "sgd",
lr: float = 0.001,
is_feature_incremental: bool = False,
device: str = "cpu",
seed: int = 42,
gradient_clip_value: float | None = 1.0,
**kwargs
)
Bases: Regressor
Incremental linear regression with optional feature growth and gradient clipping.
A thin wrapper that instantiates a single linear layer and enables dynamic feature expansion when is_feature_incremental=True. The model outputs a single continuous target value.
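A small sketch of the feature-incremental behaviour (feature names and values are made up):
>>> from deep_river.regression.zoo import LinearRegression
>>> reg = LinearRegression(n_features=2, is_feature_incremental=True)
>>> _ = reg.learn_one({"temp": 12.0, "humidity": 0.7}, 110.0)
>>> # A previously unseen feature ("wind") appears: the input layer expands to accept it.
>>> _ = reg.learn_one({"temp": 13.5, "humidity": 0.6, "wind": 4.2}, 95.0)
>>> y_pred = reg.predict_one({"temp": 11.0, "humidity": 0.8, "wind": 3.0})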
Parameters:
Name | Type | Description | Default |
---|---|---|---|
n_features | int | Initial number of input features (columns). The input layer can expand if feature incrementality is enabled and new feature names appear. | 10 |
loss_fn | str | Callable | Loss used for optimisation. | 'mse' |
optimizer_fn | str | type | Optimizer specification. | 'sgd' |
lr | float | Learning rate. | 0.001 |
is_feature_incremental | bool | Whether to expand the input layer when new features appear. | False |
device | str | Torch device. | 'cpu' |
seed | int | Random seed. | 42 |
gradient_clip_value | float | None | Gradient norm clipping threshold. Disabled if None. | 1.0 |
**kwargs | | Forwarded to the underlying base estimator. | {} |
Examples:
Streaming regression on the Bikes dataset (only numeric features kept).
The exact MAE value may vary depending on library version and hardware:
>>> import random, numpy as np, torch
>>> from torch import manual_seed
>>> from river import datasets, metrics
>>> from deep_river.regression.zoo import LinearRegression
>>> _ = manual_seed(42); random.seed(42); np.random.seed(42)
>>> first_x, _ = next(iter(datasets.Bikes()))
>>> numeric_keys = sorted([k for k,v in first_x.items() if isinstance(v,(int,float))])
>>> reg = LinearRegression(n_features=len(numeric_keys),
... loss_fn='mse', lr=1e-2,
... is_feature_incremental=True)
>>> mae = metrics.MAE()
>>> for i, (x, y) in enumerate(datasets.Bikes().take(200)):
... x_num = {k: x[k] for k in numeric_keys}
... if i > 0:
... y_pred = reg.predict_one(x_num)
... mae.update(y, y_pred)
... reg.learn_one(x_num, y)
>>> assert 0.0 <= mae.get() < 20.0
>>> print(f"MAE: {mae.get():.4f}") # doctest: +ELLIPSIS
MAE: ...
Methods:
Name | Description |
---|---|
clone | Return a fresh estimator instance with (optionally) copied state. |
draw | Render a (partial) computational graph of the wrapped model. |
load | Load a previously saved estimator. |
predict_many | Predict target values for multiple instances (returns single-column DataFrame). |
predict_one | Predict target value for a single instance. |
save | Persist the estimator (architecture, weights, optimiser & runtime state). |
Source code in deep_river/regression/zoo.py
clone
Return a fresh estimator instance with (optionally) copied state.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
new_params | dict | None | Parameter overrides for the cloned instance. | None |
include_attributes | bool | If True, runtime state (observed features, buffers) is also copied. | False |
copy_weights | bool | If True, model weights are copied (otherwise the module is re-initialised). | False |
Source code in deep_river/base.py
draw
Render a (partial) computational graph of the wrapped model.
Imports graphviz and torchviz lazily. Raises an informative ImportError if the optional dependencies are not installed.
Source code in deep_river/base.py
load
classmethod
Load a previously saved estimator.
The method reconstructs the estimator class, its wrapped module, optimiser state and runtime information (feature names, buffers, etc.).
Source code in deep_river/base.py
predict_many
Predict target values for multiple instances (returns single-column DataFrame).
Source code in deep_river/regression/regressor.py
predict_one
Predict target value for a single instance.
Source code in deep_river/regression/regressor.py
save
Persist the estimator (architecture, weights, optimiser & runtime state).
Parameters:
Name | Type | Description | Default |
---|---|---|---|
filepath | str | Path | Destination file. Parent directories are created automatically. | required |
Source code in deep_river/base.py
MultiLayerPerceptron
MultiLayerPerceptron(
n_features: int = 10,
n_width: int = 5,
n_layers: int = 5,
loss_fn: Union[str, Callable] = "mse",
optimizer_fn: Union[str, Type[Optimizer]] = "sgd",
lr: float = 0.001,
is_feature_incremental: bool = False,
device: str = "cpu",
seed: int = 42,
gradient_clip_value: float | None = None,
**kwargs
)
Bases: Regressor
Multi-layer perceptron regressor with optional feature growth.
Stacks n_layers fully connected layers of width n_width with a sigmoid non-linearity (kept for backward compatibility), followed by a single output unit. Can expand its input layer when new feature names appear.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
n_features | int | Initial number of input features. | 10 |
n_width | int | Hidden layer width. | 5 |
n_layers | int | Number of hidden layers. Must be >= 1. | 5 |
loss_fn | str | Callable | Loss function used for optimisation. | 'mse' |
optimizer_fn | str | Type[Optimizer] | Optimizer class or name. | 'sgd' |
lr | float | Learning rate. | 0.001 |
is_feature_incremental | bool | Whether to expand the input layer when new features appear. | False |
device | str | Torch device. | 'cpu' |
seed | int | Random seed. | 42 |
gradient_clip_value | float | None | Gradient norm clipping threshold. Disabled if None. | None |
**kwargs | | Forwarded to the underlying base estimator. | {} |
Notes
The use of sigmoid after each hidden layer can cause saturation; for deeper networks, consider replacing it with ReLU or GELU in a custom module, as sketched below.
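An illustrative plain-PyTorch module with ReLU activations, matching the layer layout described above; how such a custom module is wired into deep_river is outside the scope of this page, so treat this purely as a sketch of the suggested change:
>>> from torch import nn
>>> class ReLUMLP(nn.Module):   # sigmoid swapped for ReLU to avoid saturation
...     def __init__(self, n_features=10, n_width=5, n_layers=5):
...         super().__init__()
...         layers = [nn.Linear(n_features, n_width), nn.ReLU()]
...         for _ in range(n_layers - 1):            # remaining hidden layers
...             layers += [nn.Linear(n_width, n_width), nn.ReLU()]
...         layers.append(nn.Linear(n_width, 1))     # single regression output
...         self.net = nn.Sequential(*layers)
...     def forward(self, x):
...         return self.net(x)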
Examples:
Streaming regression on the Bikes dataset (only numeric features kept). The exact MAE value may vary depending on library version and hardware:
>>> import random, numpy as np, torch
>>> from torch import manual_seed
>>> from river import datasets, metrics
>>> from deep_river.regression.zoo import MultiLayerPerceptron
>>> _ = manual_seed(42); random.seed(42); np.random.seed(42)
>>> first_x, _ = next(iter(datasets.Bikes()))
>>> numeric_keys = sorted([k for k,v in first_x.items() if isinstance(v,(int,float))])
>>> reg = MultiLayerPerceptron(
... n_features=len(numeric_keys), n_width=8, n_layers=2,
... optimizer_fn='sgd', lr=1e-2, is_feature_incremental=True,
... )
>>> mae = metrics.MAE()
>>> for i, (x, y) in enumerate(datasets.Bikes().take(200)):
... x_num = {k: x[k] for k in numeric_keys}
... if i > 0:
... y_pred = reg.predict_one(x_num)
... mae.update(y, y_pred)
... reg.learn_one(x_num, y)
>>> assert 0.0 <= mae.get() < 20.0
>>> print(f"MAE: {mae.get():.4f}") # doctest: +ELLIPSIS
MAE: ...
Methods:
Name | Description |
---|---|
clone | Return a fresh estimator instance with (optionally) copied state. |
draw | Render a (partial) computational graph of the wrapped model. |
load | Load a previously saved estimator. |
predict_many | Predict target values for multiple instances (returns single-column DataFrame). |
predict_one | Predict target value for a single instance. |
save | Persist the estimator (architecture, weights, optimiser & runtime state). |
Source code in deep_river/regression/zoo.py
clone
Return a fresh estimator instance with (optionally) copied state.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
new_params | dict | None | Parameter overrides for the cloned instance. | None |
include_attributes | bool | If True, runtime state (observed features, buffers) is also copied. | False |
copy_weights | bool | If True, model weights are copied (otherwise the module is re-initialised). | False |
Source code in deep_river/base.py
draw
Render a (partial) computational graph of the wrapped model.
Imports graphviz and torchviz lazily. Raises an informative ImportError if the optional dependencies are not installed.
Source code in deep_river/base.py
load
classmethod
Load a previously saved estimator.
The method reconstructs the estimator class, its wrapped module, optimiser state and runtime information (feature names, buffers, etc.).
Source code in deep_river/base.py
predict_many
Predict target values for multiple instances (returns single-column DataFrame).
Source code in deep_river/regression/regressor.py
predict_one
Predict target value for a single instance.
Source code in deep_river/regression/regressor.py
save
Persist the estimator (architecture, weights, optimiser & runtime state).
Parameters:
Name | Type | Description | Default |
---|---|---|---|
filepath | str | Path | Destination file. Parent directories are created automatically. | required |
Source code in deep_river/base.py
RNNRegressor
RNNRegressor(
n_features: int = 10,
hidden_size: int = 32,
num_layers: int = 1,
nonlinearity: str = "tanh",
dropout: float = 0.0,
gradient_clip_value: float | None = 1.0,
loss_fn: Union[str, Callable] = "mse",
optimizer_fn: Union[str, Type[Optimizer]] = "adam",
lr: float = 0.001,
is_feature_incremental: bool = False,
device: str = "cpu",
seed: int = 42,
**kwargs
)
Bases: RollingRegressor
Rolling RNN regressor for sequential / time-series data.
Uses an nn.RNN backbone and a linear head to output a single continuous target. Leverages the rolling window maintained by RollingRegressor to feed the last window_size observations as a sequence.
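To illustrate the shapes involved, the sketch below turns a small window of feature dicts into the (seq_len, batch, n_features) tensor layout that nn.RNN expects with its default batch_first=False; the feature names are made up and this is not the library's exact preprocessing code:
>>> import torch
>>> window = [{"temp": 12.0, "humidity": 0.70},
...           {"temp": 13.5, "humidity": 0.65},
...           {"temp": 11.8, "humidity": 0.80}]   # last window_size observations, oldest first
>>> keys = sorted(window[0])
>>> seq = torch.tensor([[obs[k] for k in keys] for obs in window]).unsqueeze(1)
>>> seq.shape   # (window_size, batch=1, n_features)
torch.Size([3, 1, 2])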
Parameters:
Name | Type | Description | Default |
---|---|---|---|
n_features | int | Number of input features per timestep. | 10 |
hidden_size | int | Hidden state dimensionality of the RNN. | 32 |
num_layers | int | Number of stacked RNN layers. | 1 |
nonlinearity | str | Non-linearity used inside the RNN ('tanh' or 'relu'). | 'tanh' |
dropout | float | Dropout applied after extracting the last hidden state (no internal RNN dropout). | 0.0 |
gradient_clip_value | float | None | Gradient norm clipping threshold. | 1.0 |
loss_fn | str | Callable | Loss function used for optimisation. | 'mse' |
optimizer_fn | str | Type[Optimizer] | Optimizer class or name. | 'adam' |
lr | float | Learning rate. | 0.001 |
is_feature_incremental | bool | Whether to expand the input layer when new features appear. | False |
device | str | Torch device. | 'cpu' |
seed | int | Random seed. | 42 |
**kwargs | | Forwarded to the underlying base estimator. | {} |
Examples:
Streaming regression on the Bikes dataset (only numeric features kept). The exact MAE value may vary depending on library version and hardware:
>>> import random, numpy as np, torch
>>> from torch import manual_seed
>>> from river import datasets, metrics
>>> from deep_river.regression.zoo import RNNRegressor
>>> _ = manual_seed(42); random.seed(42); np.random.seed(42)
>>> first_x, _ = next(iter(datasets.Bikes()))
>>> numeric_keys = sorted([k for k,v in first_x.items() if isinstance(v,(int,float))])
>>> reg = RNNRegressor(
... n_features=len(numeric_keys), hidden_size=8, num_layers=1,
... optimizer_fn='sgd', lr=1e-2, is_feature_incremental=True,
... )
>>> mae = metrics.MAE()
>>> for i, (x, y) in enumerate(datasets.Bikes().take(200)):
... x_num = {k: x[k] for k in numeric_keys}
... if i > 0:
... y_pred = reg.predict_one(x_num)
... mae.update(y, y_pred)
... reg.learn_one(x_num, y)
>>> assert 0.0 <= mae.get() < 20.0
>>> print(f"MAE: {mae.get():.4f}") # doctest: +ELLIPSIS
MAE: ...
Methods:
Name | Description |
---|---|
clone | Return a fresh estimator instance with (optionally) copied state. |
draw | Render a (partial) computational graph of the wrapped model. |
learn_many | Batch update with multiple samples using the rolling window. |
learn_one | Update model using a single (x, y) and current rolling window. |
load | Load a previously saved estimator. |
predict_many | Predict targets for multiple samples (appends to a copy of the window). |
predict_one | Predict a single regression target using rolling context. |
save | Persist the estimator (architecture, weights, optimiser & runtime state). |
Source code in deep_river/regression/zoo.py
clone
Return a fresh estimator instance with (optionally) copied state.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
new_params | dict | None | Parameter overrides for the cloned instance. | None |
include_attributes | bool | If True, runtime state (observed features, buffers) is also copied. | False |
copy_weights | bool | If True, model weights are copied (otherwise the module is re-initialised). | False |
Source code in deep_river/base.py
draw
Render a (partial) computational graph of the wrapped model.
Imports graphviz and torchviz lazily. Raises an informative ImportError if the optional dependencies are not installed.
Source code in deep_river/base.py
learn_many
Batch update with multiple samples using the rolling window.
Only performs an optimisation step once the internal window has reached window_size length, to ensure a full sequence is available.
Source code in deep_river/regression/rolling_regressor.py
learn_one
Update model using a single (x, y) pair and the current rolling window.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
x | dict | Feature mapping. | required |
y | float | Target value. | required |
Source code in deep_river/regression/rolling_regressor.py
load
classmethod
Load a previously saved estimator.
The method reconstructs the estimator class, its wrapped module, optimiser state and runtime information (feature names, buffers, etc.).
Source code in deep_river/base.py
predict_many
Predict targets for multiple samples (appends to a copy of the window).
Returns a single-column DataFrame named 'y_pred'.
Source code in deep_river/regression/rolling_regressor.py
predict_one
Predict a single regression target using rolling context.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
x | dict | Feature mapping. | required |
Returns:
Type | Description |
---|---|
float | Predicted target value. |
Source code in deep_river/regression/rolling_regressor.py
save
Persist the estimator (architecture, weights, optimiser & runtime state).
Parameters:
Name | Type | Description | Default |
---|---|---|---|
filepath | str | Path | Destination file. Parent directories are created automatically. | required |