base
DeepEstimator(module, loss_fn='mse', optimizer_fn='sgd', lr=0.001, is_feature_incremental=False, device='cpu', seed=42, **kwargs)
Bases: Estimator
Abstract base class that implements the basic functionality of River-compatible PyTorch wrappers.
| PARAMETER | DESCRIPTION | TYPE | DEFAULT |
|---|---|---|---|
| `module` | Torch module that builds the model to be wrapped. The module should accept a parameter `n_features` so that its input layer can be sized from the first training example. | `torch.nn.Module` | |
| `loss_fn` | Loss function used for training the wrapped model. Can be a loss function provided by `torch.nn.functional` or one of the supported string shorthands (e.g. `'mse'`). | `Union[str, Callable]` | `'mse'` |
| `optimizer_fn` | Optimizer used for training the wrapped model. Can be an optimizer class provided by `torch.optim` or one of the supported string shorthands (e.g. `'sgd'`). | `Union[str, Callable]` | `'sgd'` |
| `lr` | Learning rate of the optimizer. | `float` | `0.001` |
| `is_feature_incremental` | Whether the model should adapt to features that first appear after training has started. | `bool` | `False` |
| `device` | Device to run the wrapped model on. Can be `"cpu"` or `"cuda"`. | `str` | `'cpu'` |
| `seed` | Random seed used for training the wrapped model. | `int` | `42` |
| `**kwargs` | Additional keyword arguments passed to the `module` when it is initialized. | | `{}` |
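`DeepEstimator` is abstract, so it is used through concrete wrappers. The sketch below assumes the `deep_river.classification.Classifier` wrapper (an assumption about the surrounding package; substitute whichever concrete subclass you use) and shows the expected module shape: a `torch.nn.Module` whose constructor takes `n_features`.

```python
from torch import nn

from deep_river.classification import Classifier  # assumed concrete wrapper; DeepEstimator itself is abstract


class MyModule(nn.Module):
    """Minimal module whose constructor accepts `n_features`, as the wrapper expects."""

    def __init__(self, n_features):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 5),
            nn.ReLU(),
            nn.Linear(5, 2),
            nn.Softmax(dim=-1),
        )

    def forward(self, x):
        return self.net(x)


model = Classifier(
    module=MyModule,                 # passed as a class; initialized lazily from the first example
    loss_fn="binary_cross_entropy",  # string shorthand or a callable from torch.nn.functional
    optimizer_fn="sgd",              # string shorthand or a torch.optim optimizer class
    lr=0.01,
    device="cpu",
    seed=42,
)
```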
clone(new_params=None, include_attributes=False)
Clones the estimator.
| PARAMETER | DESCRIPTION | TYPE | DEFAULT |
|---|---|---|---|
| `new_params` | New parameters to be passed to the cloned estimator. | `dict` | `None` |
| `include_attributes` | If `True`, the attributes of the estimator are copied to the cloned estimator. This is useful when the estimator is a transformer and the attributes are the learned parameters. | `bool` | `False` |

| RETURNS | DESCRIPTION |
|---|---|
| `DeepEstimator` | The cloned estimator. |
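`clone` follows River's `Estimator` API. A small usage sketch, assuming `model` is an instance of a concrete `DeepEstimator` subclass such as the one built above:

```python
# Fresh, untrained copy with the same hyperparameters.
fresh = model.clone()

# Copy with one hyperparameter overridden.
faster = model.clone(new_params={"lr": 0.01})

# Copy that also carries over learned attributes
# (useful when the estimator is a transformer).
snapshot = model.clone(include_attributes=True)
```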
draw()
Draws the wrapped model.
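A brief usage sketch; rendering typically requires Graphviz-based tooling to be installed, and the wrapped module must have been initialized (e.g. by learning on at least one example) before it can be drawn:

```python
x = {"x1": 0.5, "x2": -1.2}
y = True

model.learn_one(x, y)  # the first example initializes the wrapped module
graph = model.draw()   # assumed to return a renderable graph object
```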
initialize_module(x, **kwargs)
Initializes the wrapped module. The example `x` is used to infer the number of input features.

| PARAMETER | DESCRIPTION | TYPE | DEFAULT |
|---|---|---|---|
| `module` | The instance, class, or callable to be initialized, e.g. `self.module`. | | |
| `kwargs` | The keyword arguments used to initialize the instance or class. Can be an empty dict. | `dict` | |

| RETURNS | DESCRIPTION |
|---|---|
| `instance` | The initialized component. |
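This method supports the lazy-initialization pattern used throughout these wrappers: the module is only built once the first example arrives, so its input size can be derived from the number of features. Below is a minimal, self-contained sketch of that pattern in plain PyTorch; the helper name mirrors the method here but is illustrative, not the library's implementation.

```python
from torch import nn


def initialize_module(module, **kwargs):
    """Illustrative stand-in: instantiate `module` if it is a class or
    other callable, otherwise return the already-built instance."""
    if isinstance(module, nn.Module):
        return module
    return module(**kwargs)


class TinyNet(nn.Module):
    def __init__(self, n_features):
        super().__init__()
        self.linear = nn.Linear(n_features, 1)

    def forward(self, x):
        return self.linear(x)


# The first example determines n_features.
x = {"x1": 0.5, "x2": -1.2, "x3": 3.0}
net = initialize_module(TinyNet, n_features=len(x))
print(net)  # TinyNet with a 3-input linear layer
```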
RollingDeepEstimator(module, loss_fn='mse', optimizer_fn='sgd', lr=0.001, device='cpu', seed=42, window_size=10, append_predict=False, **kwargs)
Bases: DeepEstimator
Abstract base class that implements the basic functionality of River-compatible PyTorch wrappers, including a rolling window that allows the model to make predictions based on multiple previous examples.
| PARAMETER | DESCRIPTION | TYPE | DEFAULT |
|---|---|---|---|
| `module` | Torch module that builds the model to be wrapped. The module should accept a parameter `n_features` so that its input layer can be sized from the first training example. | `torch.nn.Module` | |
| `loss_fn` | Loss function used for training the wrapped model. Can be a loss function provided by `torch.nn.functional` or one of the supported string shorthands (e.g. `'mse'`). | `Union[str, Callable]` | `'mse'` |
| `optimizer_fn` | Optimizer used for training the wrapped model. Can be an optimizer class provided by `torch.optim` or one of the supported string shorthands (e.g. `'sgd'`). | `Union[str, Callable]` | `'sgd'` |
| `lr` | Learning rate of the optimizer. | `float` | `0.001` |
| `device` | Device to run the wrapped model on. Can be `"cpu"` or `"cuda"`. | `str` | `'cpu'` |
| `seed` | Random seed used for training the wrapped model. | `int` | `42` |
| `window_size` | Size of the rolling window used for storing previous examples. | `int` | `10` |
| `append_predict` | Whether to append inputs passed for prediction to the rolling window. | `bool` | `False` |
| `**kwargs` | Additional keyword arguments passed to the `module` when it is initialized. | | `{}` |
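A sketch of how a rolling wrapper is typically used, assuming the concrete subclass `deep_river.regression.RollingRegressor` (the class name and import path are assumptions; substitute the rolling wrapper you actually use). Because the module receives a window of the last `window_size` examples, recurrent architectures such as LSTMs are a natural fit.

```python
from torch import nn

from deep_river.regression import RollingRegressor  # assumed concrete rolling wrapper


class LSTMModule(nn.Module):
    """Accepts `n_features` and consumes a window of past examples."""

    def __init__(self, n_features, hidden_size=16):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden_size)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):
        # x holds the rolling window, shaped (window_size, n_features).
        output, (hn, cn) = self.lstm(x)
        return self.head(hn[-1])


model = RollingRegressor(
    module=LSTMModule,
    loss_fn="mse",
    optimizer_fn="adam",
    lr=0.001,
    window_size=10,        # number of past examples kept in the window
    append_predict=False,  # don't add prediction-time inputs to the window
)
```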