# base
Classes:

| Name | Description |
|---|---|
| `DeepEstimator` | Incremental wrapper around a PyTorch module with dynamic feature adaptation. |
| `RollingDeepEstimator` | Extension of `DeepEstimator` with a fixed-size rolling window. |
## DeepEstimator
DeepEstimator(
module: Module,
loss_fn: Union[str, Callable] = "mse",
optimizer_fn: Union[str, Callable] = "sgd",
lr: float = 0.001,
device: str = "cpu",
seed: int = 42,
is_feature_incremental: bool = False,
gradient_clip_value: float | None = None,
**kwargs
)
Bases: Estimator
Incremental wrapper around a PyTorch module with dynamic feature adaptation.
This class augments a regular `torch.nn.Module` with utilities that make it compatible with the `river` incremental learning API. Beyond standard online optimisation it optionally supports feature-incremental learning: whenever previously unseen input feature names appear, the first trainable layer (the input layer) can be expanded on-the-fly so that the model seamlessly accepts the enlarged feature space without re-initialisation.

The class also provides a persistence protocol (`save`/`load`/`clone`) that captures both the module weights and the runtime state (observed feature names, rolling buffers, etc.), allowing exact round-trips across Python sessions. Optimisers are transparently rebuilt after structural changes so any newly created parameters participate in subsequent optimisation steps.
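As a minimal sketch of that persistence round-trip (it assumes only the `save`/`load` signatures documented below; the file name and extension are arbitrary):

```python
import tempfile
from pathlib import Path

from torch import nn

from deep_river.base import DeepEstimator

est = DeepEstimator(module=nn.Linear(3, 1), loss_fn="mse", optimizer_fn="sgd")

with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "estimator.pt"
    est.save(path)                       # captures weights, optimiser and runtime state
    restored = DeepEstimator.load(path)  # classmethod: rebuilds module, optimiser and feature names
```

The restored object can continue learning where the original left off, which is the round-trip guarantee described above.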
Typical workflow:

- Instantiate with a vanilla PyTorch module (e.g. an `nn.Sequential` or a custom subclass).
- Feed samples via higher-level, task-specific subclasses (e.g. a classifier) that call `_learn` internally.
- (Optional) Enable `is_feature_incremental=True` for dynamic input growth.
- Persist with `save` and later restore with `load`.
Example

    >>> import torch
    >>> from torch import nn
    >>> from deep_river.base import DeepEstimator

    >>> class TinyNet(nn.Module):
    ...     def __init__(self, n_features=3):
    ...         super().__init__()
    ...         self.fc = nn.Linear(n_features, 2)
    ...     def forward(self, x):
    ...         return self.fc(x)

    >>> est = DeepEstimator(
    ...     module=TinyNet(3),
    ...     loss_fn='mse',
    ...     optimizer_fn='sgd',
    ...     is_feature_incremental=True,
    ... )
    >>> est._update_observed_features({'a': 1.0, 'b': 2.0, 'c': 3.0})  # internal bookkeeping
    True
Notes

- The class itself is task-agnostic. Task-specific behaviour (e.g. converting labels to one-hot encodings) lives in subclasses such as `Classifier` or `Regressor`.
- Only the first and last trainable leaf modules are treated as input and output layers. Non-parametric layers (e.g. `ReLU`) are skipped (illustrated in the sketch below).
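To make the second note concrete, here is a standalone sketch of how the first and last parametric leaf modules of a model can be identified; it illustrates the concept only and is not the library's actual selection code:

```python
from torch import nn

net = nn.Sequential(
    nn.Linear(4, 16),  # first trainable leaf -> would act as the input layer
    nn.ReLU(),         # non-parametric, skipped
    nn.Linear(16, 2),  # last trainable leaf -> would act as the output layer
)

# Leaf modules that own trainable parameters.
trainable_leaves = [
    m
    for m in net.modules()
    if not list(m.children()) and any(p.requires_grad for p in m.parameters())
]
input_layer, output_layer = trainable_leaves[0], trainable_leaves[-1]
print(input_layer)   # Linear(in_features=4, out_features=16, bias=True)
print(output_layer)  # Linear(in_features=16, out_features=2, bias=True)
```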
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `module` | `Module` | The PyTorch model whose parameters are to be updated incrementally. | *required* |
| `loss_fn` | `str \| Callable` | Loss identifier or a callable used as the training loss. | `'mse'` |
| `optimizer_fn` | `str \| Callable` | Optimiser identifier or optimiser class / factory. | `'sgd'` |
| `lr` | `float` | Learning rate. | `1e-3` |
| `device` | `str` | Device on which the module is run. | `'cpu'` |
| `seed` | `int` | Random seed for reproducibility. | `42` |
| `is_feature_incremental` | `bool` | If True, expands the input layer when new feature names are encountered. | `False` |
| `gradient_clip_value` | `float \| None` | If provided, gradient norm is clipped to this value each optimisation step. | `None` |
| `**kwargs` | `dict` | Additional custom arguments retained so the estimator can be reconstructed (e.g. by `clone`/`load`). | `{}` |
Attributes:

| Name | Type | Description |
|---|---|---|
| `module` | `Module` | The wrapped PyTorch module. |
| `loss_func` | `Callable` | Resolved loss function callable. |
| `optimizer` | `Optimizer` | Optimiser instance (rebuilt after structural changes). |
| `input_layer` | `Module \| None` | First trainable leaf module (may be `None`). |
| `output_layer` | `Module \| None` | Last trainable leaf module. |
| `observed_features` | `SortedSet[str]` | Ordered set of feature names seen so far. |
| `module_input_len` | `int \| None` | Cached original input size of the input layer (if identifiable). |
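The following rough illustration shows how some of these attributes relate once the estimator has seen its first features; it reuses the internal `_update_observed_features` call from the example above, and the exact string representations may differ between versions:

```python
from torch import nn

from deep_river.base import DeepEstimator

est = DeepEstimator(module=nn.Linear(3, 1), is_feature_incremental=True)
est._update_observed_features({"a": 1.0, "b": 2.0, "c": 3.0})

print(sorted(est.observed_features))  # ['a', 'b', 'c']
print(est.input_layer)                # Linear(in_features=3, out_features=1, bias=True)
print(est.output_layer)               # the same single Linear layer in this tiny model
```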
Methods:

| Name | Description |
|---|---|
| `clone` | Return a fresh estimator instance with (optionally) copied state. |
| `draw` | Render a (partial) computational graph of the wrapped model. |
| `load` | Load a previously saved estimator. |
| `save` | Persist the estimator (architecture, weights, optimiser & runtime state). |
Source code in deep_river/base.py
### clone
Return a fresh estimator instance with (optionally) copied state.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `new_params` | `dict \| None` | Parameter overrides for the cloned instance. | `None` |
| `include_attributes` | `bool` | If True, runtime state (observed features, buffers) is also copied. | `False` |
| `copy_weights` | `bool` | If True, model weights are copied (otherwise the module is re-initialised). | `False` |
Source code in deep_river/base.py
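A possible use of `clone`, following the parameters documented above (the override values are arbitrary):

```python
from torch import nn

from deep_river.base import DeepEstimator

est = DeepEstimator(module=nn.Linear(3, 1), lr=0.001)

# Fresh instance with a different learning rate and re-initialised weights.
fast_est = est.clone(new_params={"lr": 0.01})

# Copy that also carries over runtime state and the current weights.
snapshot = est.clone(include_attributes=True, copy_weights=True)
```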
### draw
Render a (partial) computational graph of the wrapped model.
Imports `graphviz` and `torchviz` lazily. Raises an informative `ImportError` if the optional dependencies are not installed.
Source code in deep_river/base.py
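Since the plotting dependencies are optional, a call to `draw` is best guarded; the sketch below assumes it is invoked without arguments and returns a renderable graph object:

```python
from torch import nn

from deep_river.base import DeepEstimator

est = DeepEstimator(module=nn.Linear(3, 1))

try:
    graph = est.draw()  # needs the optional `graphviz` and `torchviz` packages
except ImportError as err:
    print(f"Optional plotting dependencies missing: {err}")
```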
### load (classmethod)
Load a previously saved estimator.
The method reconstructs the estimator class, its wrapped module, optimiser state and runtime information (feature names, buffers, etc.).
Source code in deep_river/base.py
### save
Persist the estimator (architecture, weights, optimiser & runtime state).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `filepath` | `str \| Path` | Destination file. Parent directories are created automatically. | *required* |
Source code in deep_river/base.py
## RollingDeepEstimator
RollingDeepEstimator(
module: Module,
loss_fn: Union[str, Callable] = "mse",
optimizer_fn: Union[str, Callable] = "sgd",
lr: float = 0.001,
device: str = "cpu",
seed: int = 42,
window_size: int = 10,
append_predict: bool = False,
**kwargs
)
Bases: DeepEstimator
Extension of `DeepEstimator` with a fixed-size rolling window.

Maintains a `collections.deque` of the most recent `window_size` inputs, enabling models (e.g. sequence learners) to condition on a short history. Optionally, the model's own predictions can be appended to the window (via `append_predict`) to facilitate iterative forecasting.
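To illustrate the rolling-window mechanics independently of any model, the sketch below reproduces the deque behaviour described above, including the `append_predict`-style feedback of predictions; it is a conceptual illustration, not the class's internal code:

```python
from collections import deque

window_size = 4
append_predict = True
window: deque = deque(maxlen=window_size)  # oldest entries drop out automatically

# Incoming observations are appended as they arrive.
for x in ([1.0], [2.0], [3.0], [4.0], [5.0]):
    window.append(x)
print(list(window))  # [[2.0], [3.0], [4.0], [5.0]]

# With append_predict enabled, the model's own prediction is fed back into the
# window so the next forecast can condition on it (iterative forecasting).
prediction = [6.0]
if append_predict:
    window.append(prediction)
print(list(window))  # [[3.0], [4.0], [5.0], [6.0]]
```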
Methods:

| Name | Description |
|---|---|
| `clone` | Return a fresh estimator instance with (optionally) copied state. |
| `draw` | Render a (partial) computational graph of the wrapped model. |
| `load` | Load a previously saved estimator. |
| `save` | Persist the estimator (architecture, weights, optimiser & runtime state). |
Source code in deep_river/base.py
### clone
Return a fresh estimator instance with (optionally) copied state.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `new_params` | `dict \| None` | Parameter overrides for the cloned instance. | `None` |
| `include_attributes` | `bool` | If True, runtime state (observed features, buffers) is also copied. | `False` |
| `copy_weights` | `bool` | If True, model weights are copied (otherwise the module is re-initialised). | `False` |
Source code in deep_river/base.py
### draw
Render a (partial) computational graph of the wrapped model.
Imports `graphviz` and `torchviz` lazily. Raises an informative `ImportError` if the optional dependencies are not installed.
Source code in deep_river/base.py
### load (classmethod)
Load a previously saved estimator.
The method reconstructs the estimator class, its wrapped module, optimiser state and runtime information (feature names, buffers, etc.).
Source code in deep_river/base.py
### save
Persist the estimator (architecture, weights, optimiser & runtime state).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `filepath` | `str \| Path` | Destination file. Parent directories are created automatically. | *required* |