rolling_classifier
RollingClassifier(module, loss_fn='binary_cross_entropy', optimizer_fn='sgd', lr=0.001, output_is_logit=True, is_class_incremental=False, device='cpu', seed=42, window_size=10, append_predict=False, **kwargs)
Bases: Classifier, RollingDeepEstimator
Wrapper that feeds a sliding window of the most recent examples to the wrapped PyTorch classification model. The class also automatically handles increases in the number of classes by adding output neurons in case the number of observed classes exceeds the current number of output neurons.
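The sliding-window behaviour can be illustrated with a plain-Python sketch. This is a simplified stand-in for the wrapper's internal buffer, not deep_river's actual implementation; `window_size` and `append_predict` mirror the constructor parameters below:

```python
from collections import deque

class SlidingWindowSketch:
    """Toy model of the rolling buffer: keeps only the `window_size`
    most recent feature dicts, dropping the oldest automatically."""

    def __init__(self, window_size=10, append_predict=False):
        self.window = deque(maxlen=window_size)
        self.append_predict = append_predict

    def learn_one(self, x):
        # Training examples are always appended to the window.
        self.window.append(x)

    def predict_one(self, x):
        # Prediction inputs join the window only if append_predict=True.
        if self.append_predict:
            self.window.append(x)
        return list(self.window)  # this window would be fed to the model

buf = SlidingWindowSketch(window_size=3, append_predict=True)
for i in range(5):
    buf.learn_one({"feature": i})
# Only the 3 most recent examples remain in the window.
print([d["feature"] for d in buf.window])
```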
PARAMETER | DESCRIPTION | TYPE | DEFAULT
---|---|---|---
`module` | Torch module class that builds the classifier to be wrapped. The module should accept an `n_features` parameter so that its input shape can be set from the first training example. | `Type[torch.nn.Module]` | *required*
`loss_fn` | Loss function used to train the wrapped model. Can be a loss function provided by `torch.nn.functional` or the name of one as a string. | `Union[str, Callable]` | `'binary_cross_entropy'`
`optimizer_fn` | Optimizer used to train the wrapped model. Can be an optimizer class provided by `torch.optim` or the name of one as a string. | `Union[str, Callable]` | `'sgd'`
`lr` | Learning rate of the optimizer. | `float` | `0.001`
`output_is_logit` | Whether the module produces logits as output. If `True`, a softmax or sigmoid is applied to the outputs when predicting. | `bool` | `True`
`is_class_incremental` | Whether the classifier should adapt to the appearance of previously unobserved classes by adding a unit to the output layer of the network. This works only if the last trainable layer is an `nn.Linear` layer. Note also that output activation functions cannot be adapted, meaning that a binary classifier with a sigmoid output cannot be altered to perform multi-class predictions. | `bool` | `False`
`device` | Device to run the wrapped model on. Can be `"cpu"` or `"cuda"`. | `str` | `'cpu'`
`seed` | Random seed used for training the wrapped model. | `int` | `42`
`window_size` | Number of recent examples fed to the wrapped model at each step. | `int` | `10`
`append_predict` | Whether to append inputs passed for prediction to the rolling window. | `bool` | `False`
`**kwargs` | Additional parameters passed on to the wrapped module. | | `{}`
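To make the class-incremental behaviour concrete, here is a toy sketch of what expanding an output layer amounts to. It manipulates a plain weight matrix (a list of per-class rows) rather than an `nn.Linear`; the function name and zero initialisation are illustrative only, not deep_river's internal implementation:

```python
def expand_output_layer(weights, n_new_classes):
    """Toy sketch of class-incremental expansion: `weights` holds one
    weight row per output unit. When new classes appear, rows are
    appended so the layer gains one output unit per new class, while
    the existing rows (already-learned classes) stay untouched."""
    n_features = len(weights[0])
    for _ in range(n_new_classes):
        weights.append([0.0] * n_features)  # freshly initialised unit
    return weights

# A 2-class output layer over 3 features...
w = [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]
# ...observes a third class, so a third output unit is added.
w = expand_output_layer(w, n_new_classes=1)
print(len(w))  # 3 output units now
```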
Examples:
>>> from deep_river.classification import RollingClassifier
>>> from river import metrics, datasets, compose, preprocessing
>>> import torch
>>> class MyModule(torch.nn.Module):
...
... def __init__(self, n_features, hidden_size=1):
... super().__init__()
... self.n_features=n_features
... self.hidden_size = hidden_size
... self.lstm = torch.nn.LSTM(input_size=n_features,
... hidden_size=hidden_size,
... batch_first=False,
... num_layers=1,
... bias=False)
... self.softmax = torch.nn.Softmax(dim=-1)
...
... def forward(self, X, **kwargs):
... output, (hn, cn) = self.lstm(X)
... hn = hn.view(-1, self.lstm.hidden_size)
... return self.softmax(hn)
>>> dataset = datasets.Keystroke()
>>> metric = metrics.Accuracy()
>>> optimizer_fn = torch.optim.SGD
>>> model_pipeline = preprocessing.StandardScaler()
>>> model_pipeline |= RollingClassifier(
... module=MyModule,
... loss_fn="binary_cross_entropy",
... optimizer_fn=optimizer_fn,
... window_size=20,
... lr=1e-2,
... append_predict=True,
... is_class_incremental=True
... )
>>> for x, y in dataset.take(5000):
... y_pred = model_pipeline.predict_one(x) # make a prediction
... metric.update(y, y_pred) # update the metric
... model_pipeline.learn_one(x, y) # make the model learn
>>> print(f'Accuracy: {metric.get()}')
Accuracy: 0.4552
learn_many(X, y)
Performs one step of training with the most recent training examples stored in the sliding window.
PARAMETER | DESCRIPTION | TYPE
---|---|---
`X` | Input examples. | `pd.DataFrame`
`y` | Target values. | `pd.Series`

RETURNS | DESCRIPTION
---|---
`Classifier` | The classifier itself.
learn_one(x, y, **kwargs)
Performs one step of training with the most recent training examples stored in the sliding window.
PARAMETER | DESCRIPTION | TYPE
---|---|---
`x` | Input example. | `dict`
`y` | Target value. | `ClfTarget`

RETURNS | DESCRIPTION
---|---
`Classifier` | The classifier itself.
predict_proba_many(X)
Predict the probability of each label given the most recent examples stored in the sliding window.
PARAMETER | DESCRIPTION | TYPE
---|---|---
`X` | Input examples. | `pd.DataFrame`

RETURNS | DESCRIPTION
---|---
`DataFrame` | DataFrame of probabilities for each label.
predict_proba_one(x)
Predict the probability of each label given the most recent examples stored in the sliding window.
PARAMETER | DESCRIPTION | TYPE
---|---|---
`x` | Input example. | `dict`

RETURNS | DESCRIPTION
---|---
`Dict[ClfTarget, float]` | Dictionary of probabilities for each label.
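As a usage note, a hard label can be derived from the probability dictionary returned by `predict_proba_one` by picking the most probable class. The class names and probabilities below are made up purely for illustration:

```python
# Hypothetical output of predict_proba_one for a three-class problem.
proba = {"benign": 0.15, "probe": 0.25, "attack": 0.60}

# predict_one-style hard label: the key with the highest probability.
label = max(proba, key=proba.get)
print(label)  # attack
```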