# zoo
## LogisticRegression

`LogisticRegression(loss_fn='binary_cross_entropy_with_logits', optimizer_fn='sgd', lr=0.001, output_is_logit=True, is_class_incremental=False, is_feature_incremental=False, device='cpu', seed=42, **kwargs)`

Bases: `Classifier`
This class implements a logistic regression model in PyTorch.
| Parameter | Type | Default | Description |
|---|---|---|---|
| `loss_fn` | `str \| Callable` | `'binary_cross_entropy_with_logits'` | Loss function to be used for training the wrapped model. Can be a loss function provided by `torch.nn.functional` or its name as a string. |
| `optimizer_fn` | `str \| Callable` | `'sgd'` | Optimizer to be used for training the wrapped model. Can be an optimizer class provided by `torch.optim` or its name as a string. |
| `lr` | `float` | `0.001` | Learning rate of the optimizer. |
| `output_is_logit` | `bool` | `True` | Whether the module produces logits as output. If true, either softmax or sigmoid is applied to the outputs when predicting. |
| `is_class_incremental` | `bool` | `False` | Whether the classifier should adapt to the appearance of previously unobserved classes by adding a unit to the output layer of the network. This works only if the last trainable layer is an `nn.Linear` layer. Note also that output activation functions cannot be adapted, meaning that a binary classifier with a sigmoid output cannot be altered to perform multi-class predictions. |
| `is_feature_incremental` | `bool` | `False` | Whether the model should adapt to the appearance of previously unobserved features by adding units to the input layer of the network (see the sketch after this table). |
| `device` | `str` | `'cpu'` | Device to run the wrapped model on. Can be `"cpu"` or `"cuda"`. |
| `seed` | `int` | `42` | Random seed to be used for training the wrapped model. |
| `**kwargs` | | `{}` | Additional parameters to be passed to the wrapped module. |
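To make the feature-incremental behavior concrete, here is a minimal sketch using only the constructor, `learn_one`, and `predict_one` from this page. The toy stream and its feature names (`x1`, `x2`, `x3`) are made up for illustration; a third feature appears mid-stream, and the input layer is expected to grow by one unit.

```python
from deep_river.classification import LogisticRegression

# A classifier that grows its input layer when a new feature appears.
model = LogisticRegression(is_feature_incremental=True)

# Toy stream: the feature "x3" is absent from the first two samples.
stream = [
    ({"x1": 1.0, "x2": 0.0}, True),
    ({"x1": 0.0, "x2": 1.0}, False),
    ({"x1": 1.0, "x2": 1.0, "x3": 0.5}, True),  # previously unobserved feature
]

for x, y in stream:
    model.predict_one(x)   # prediction with the features seen so far
    model.learn_one(x, y)  # input layer gains a unit when "x3" arrives
```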
Examples:
>>> from deep_river.classification import LogisticRegression
>>> from river import metrics, preprocessing, compose, datasets
>>> from torch import nn, manual_seed
>>> _ = manual_seed(42)
>>> model_pipeline = compose.Pipeline(
... preprocessing.StandardScaler(),
... LogisticRegression()
... )
>>> dataset = datasets.Phishing()
>>> metric = metrics.Accuracy()
>>> for x, y in dataset:
... y_pred = model_pipeline.predict_one(x) # make a prediction
... metric.update(y, y_pred) # update the metric
... model_pipeline.learn_one(x, y) # update the model
>>> print(f"Accuracy: {metric.get():.2f}")
Accuracy: 0.56
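Because `LogisticRegression` follows river's `Classifier` API, per-class probabilities are also available. A minimal sketch, assuming the standard `predict_proba_one` method from river:

```python
from deep_river.classification import LogisticRegression
from river import compose, datasets, preprocessing

model_pipeline = compose.Pipeline(
    preprocessing.StandardScaler(),
    LogisticRegression(),
)

# Train on the first 100 samples of the Phishing stream.
for i, (x, y) in enumerate(datasets.Phishing()):
    model_pipeline.learn_one(x, y)
    if i >= 99:
        break

# Probabilities for one sample: a dict mapping each class to a probability.
x, _ = next(iter(datasets.Phishing()))
print(model_pipeline.predict_proba_one(x))
```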
## MultiLayerPerceptron

`MultiLayerPerceptron(n_width=5, n_layers=5, loss_fn='binary_cross_entropy_with_logits', optimizer_fn='sgd', lr=0.001, output_is_logit=True, is_class_incremental=False, is_feature_incremental=False, device='cpu', seed=42, **kwargs)`

Bases: `Classifier`

This class implements a multi-layer perceptron model in PyTorch.
| Parameter | Type | Default | Description |
|---|---|---|---|
| `n_width` | `int` | `5` | Number of units in each hidden layer. |
| `n_layers` | `int` | `5` | Number of hidden layers. |
| `loss_fn` | `str \| Callable` | `'binary_cross_entropy_with_logits'` | Loss function to be used for training the wrapped model. Can be a loss function provided by `torch.nn.functional` or its name as a string. |
| `optimizer_fn` | `str \| Callable` | `'sgd'` | Optimizer to be used for training the wrapped model. Can be an optimizer class provided by `torch.optim` or its name as a string. |
| `lr` | `float` | `0.001` | Learning rate of the optimizer. |
| `output_is_logit` | `bool` | `True` | Whether the module produces logits as output. If true, either softmax or sigmoid is applied to the outputs when predicting. |
| `is_class_incremental` | `bool` | `False` | Whether the classifier should adapt to the appearance of previously unobserved classes by adding a unit to the output layer of the network. This works only if the last trainable layer is an `nn.Linear` layer. Note also that output activation functions cannot be adapted, meaning that a binary classifier with a sigmoid output cannot be altered to perform multi-class predictions. |
| `is_feature_incremental` | `bool` | `False` | Whether the model should adapt to the appearance of previously unobserved features by adding units to the input layer of the network. |
| `device` | `str` | `'cpu'` | Device to run the wrapped model on. Can be `"cpu"` or `"cuda"` (see the sketch after this table). |
| `seed` | `int` | `42` | Random seed to be used for training the wrapped model. |
| `**kwargs` | | `{}` | Additional parameters to be passed to the wrapped module. |
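The `device` and `seed` parameters behave the same as for `LogisticRegression`. A minimal sketch of opting into GPU training when one is available, falling back to the CPU otherwise:

```python
import torch
from deep_river.classification import MultiLayerPerceptron

# Use CUDA only when a GPU is actually present.
device = "cuda" if torch.cuda.is_available() else "cpu"

model = MultiLayerPerceptron(
    n_width=5,
    n_layers=5,
    device=device,  # where the wrapped model and its tensors live
    seed=0,         # fixed seed for reproducible weight initialization
)
```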
Examples:
>>> from deep_river.classification import MultiLayerPerceptron
>>> from river import metrics, preprocessing, compose, datasets
>>> from torch import nn, manual_seed
>>> _ = manual_seed(42)
>>> model_pipeline = compose.Pipeline(
... preprocessing.StandardScaler(),
...     MultiLayerPerceptron(n_width=5, n_layers=5)
... )
>>> dataset = datasets.Phishing()
>>> metric = metrics.Accuracy()
>>> for x, y in dataset:
... y_pred = model_pipeline.predict_one(x) # make a prediction
... metric.update(y, y_pred) # update the metric
... model_pipeline.learn_one(x, y) # update the model
>>> print(f"Accuracy: {metric.get():.2f}")
Accuracy: 0.44
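Instead of a hand-written loop, the same evaluation can be run with river's progressive validation helper. A minimal sketch, assuming `evaluate.progressive_val_score` from river (it interleaves `predict_one` and `learn_one` over the stream and reports the metric):

```python
from deep_river.classification import MultiLayerPerceptron
from river import compose, datasets, evaluate, metrics, preprocessing
from torch import manual_seed

_ = manual_seed(42)

model_pipeline = compose.Pipeline(
    preprocessing.StandardScaler(),
    MultiLayerPerceptron(n_width=5, n_layers=5),
)

# Test-then-train evaluation over the whole Phishing stream.
score = evaluate.progressive_val_score(
    dataset=datasets.Phishing(),
    model=model_pipeline,
    metric=metrics.Accuracy(),
)
print(score)
```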