classifier
Classifier(module, loss_fn='binary_cross_entropy_with_logits', optimizer_fn='sgd', lr=0.001, output_is_logit=True, is_class_incremental=False, is_feature_incremental=False, device='cpu', seed=42, **kwargs)
Bases: `DeepEstimator`, `MiniBatchClassifier`
Wrapper for PyTorch classification models that automatically handles increases in the number of classes by adding output neurons when the number of observed classes exceeds the current number of output neurons.
PARAMETER | DESCRIPTION | DEFAULT
---|---|---
`module` | Torch module class that builds the classifier to be wrapped. The module's constructor should accept an `n_features` parameter (see the example below). | *required*
`loss_fn` | Loss function to be used for training the wrapped model. Can be a loss function provided by `torch.nn.functional` or its name as a string. | `'binary_cross_entropy_with_logits'`
`optimizer_fn` | Optimizer to be used for training the wrapped model. Can be an optimizer class provided by `torch.optim` or its name as a string. | `'sgd'`
`lr` | Learning rate of the optimizer. | `0.001`
`output_is_logit` | Whether the module produces logits as output. If `True`, either softmax or sigmoid is applied to the outputs when predicting. | `True`
`is_class_incremental` | Whether the classifier should adapt to the appearance of previously unobserved classes by adding a unit to the output layer of the network. This works only if the last trainable layer is an `nn.Linear` layer. Note also that output activation functions cannot be adapted, meaning that a binary classifier with a sigmoid output cannot be altered to perform multi-class predictions. | `False`
`is_feature_incremental` | Whether the model should adapt to the appearance of previously unobserved features by adding units to the input layer of the network. | `False`
`device` | Device to run the wrapped model on. Can be `"cpu"` or `"cuda"`. | `'cpu'`
`seed` | Random seed to be used for training the wrapped model. | `42`
`**kwargs` | Additional parameters to be passed on to the wrapped module. | `{}`
Examples:
>>> from river import metrics, preprocessing, compose, datasets
>>> from deep_river import classification
>>> from torch import nn
>>> from torch import manual_seed
>>> _ = manual_seed(42)
>>> class MyModule(nn.Module):
... def __init__(self, n_features):
... super(MyModule, self).__init__()
... self.dense0 = nn.Linear(n_features,5)
... self.nlin = nn.ReLU()
... self.dense1 = nn.Linear(5, 2)
... self.softmax = nn.Softmax(dim=-1)
...
... def forward(self, x, **kwargs):
... x = self.nlin(self.dense0(x))
... x = self.nlin(self.dense1(x))
... x = self.softmax(x)
... return x
>>> model_pipeline = compose.Pipeline(
...     preprocessing.StandardScaler(),
...     classification.Classifier(module=MyModule,
...                                loss_fn="binary_cross_entropy",
...                                optimizer_fn='adam')
... )
>>> dataset = datasets.Phishing()
>>> metric = metrics.Accuracy()
>>> for x, y in dataset:
... y_pred = model_pipeline.predict_one(x) # make a prediction
... metric.update(y, y_pred) # update the metric
... model_pipeline.learn_one(x,y)
>>> print(f'Accuracy: {metric.get()}')
Accuracy: 0.7264
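
The following is a minimal sketch (not part of the original example) of how the class-incremental option can be enabled, reusing the imports from the example above. The module name `IncrementalNet`, the feature dictionaries, the string labels, and the use of `"cross_entropy"` as a loss name are illustrative assumptions; the key point is that the last trainable layer is an `nn.Linear`, so the wrapper can widen it when a previously unseen class label arrives.

>>> class IncrementalNet(nn.Module):  # hypothetical module name
...     def __init__(self, n_features):
...         super().__init__()
...         self.hidden = nn.Linear(n_features, 8)
...         self.act = nn.ReLU()
...         self.out = nn.Linear(8, 2)  # last trainable layer is nn.Linear, required for growth
...
...     def forward(self, x, **kwargs):
...         return self.out(self.act(self.hidden(x)))
>>> clf = classification.Classifier(
...     module=IncrementalNet,
...     loss_fn="cross_entropy",     # assumed loss name, resolved like torch.nn.functional losses
...     optimizer_fn="adam",
...     is_class_incremental=True,   # add an output unit when a new class label is observed
... )
>>> clf = clf.learn_one({"f1": 0.5, "f2": -1.2}, "a")  # two output units cover classes seen so far
>>> clf = clf.learn_one({"f1": 0.1, "f2": 0.3}, "b")
>>> clf = clf.learn_one({"f1": 0.9, "f2": 0.7}, "c")   # a third class appears; the output layer grows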
learn_many(x, y)
Performs one step of training with a batch of examples.
PARAMETER | TYPE | DESCRIPTION
---|---|---
`x` | `pandas.DataFrame` | Input examples.
`y` | `pandas.Series` | Target values.

RETURNS | DESCRIPTION
---|---
`Classifier` | The classifier itself.
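
As a hedged illustration, continuing with the hypothetical `clf` from the class-incremental sketch above (feature names and values are made up), a mini-batch is passed as a pandas DataFrame of examples and a Series of targets, and the classifier itself is returned:

>>> import pandas as pd
>>> X_batch = pd.DataFrame(
...     [{"f1": 0.2, "f2": 1.3}, {"f1": -0.7, "f2": 0.4}]
... )
>>> y_batch = pd.Series(["a", "b"])
>>> clf = clf.learn_many(X_batch, y_batch)  # one training step on the whole mini-batch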
learn_one(x, y, **kwargs)
Performs one step of training with a single example.
PARAMETER | TYPE | DESCRIPTION
---|---|---
`x` | `dict` | Input example.
`y` | `ClfTarget` | Target value.

RETURNS | DESCRIPTION
---|---
`Classifier` | The classifier itself.
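
A brief sketch of a single online update, again continuing with the hypothetical `clf` from above; the feature dictionary and label are illustrative:

>>> x = {"f1": 0.2, "f2": 1.3}
>>> clf = clf.learn_one(x, "a")  # single-example update; returns the classifier itself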
predict_proba_many(x)
Predict the probability of each label given the input.
PARAMETER | TYPE | DESCRIPTION
---|---|---
`x` | `pandas.DataFrame` | Input examples.

RETURNS | DESCRIPTION
---|---
`DataFrame` | DataFrame of probabilities for each label.
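
A hedged sketch of batch prediction, reusing the hypothetical `X_batch` from the `learn_many` sketch above; the returned DataFrame has one row per example and one column per class label observed so far:

>>> proba_df = clf.predict_proba_many(X_batch)
>>> list(proba_df.columns)  # e.g. ['a', 'b', 'c'] -- one column per observed class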
predict_proba_one(x)
Predict the probability of each label given the input.
PARAMETER | TYPE | DESCRIPTION
---|---|---
`x` | `dict` | Input example.

RETURNS | DESCRIPTION
---|---
`Dict[ClfTarget, float]` | Dictionary of probabilities for each label.
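
And a sketch of single-example prediction with a made-up feature dictionary; the result maps each observed class label to its probability:

>>> proba = clf.predict_proba_one({"f1": 0.2, "f2": 1.3})
>>> # e.g. {'a': 0.31, 'b': 0.45, 'c': 0.24} -- exact values depend on training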