zoo

LogisticRegression(loss_fn='binary_cross_entropy_with_logits', optimizer_fn='sgd', lr=0.001, output_is_logit=True, is_class_incremental=False, is_feature_incremental=False, device='cpu', seed=42, **kwargs)

Bases: Classifier

This class implements a logistic regression model in PyTorch.

PARAMETER DESCRIPTION
loss_fn

Loss function to be used for training the wrapped model. Can be a loss function provided by torch.nn.functional or one of the following strings: 'mse', 'l1', 'cross_entropy', 'binary_cross_entropy_with_logits', 'binary_crossentropy', 'smooth_l1', 'kl_div'. Passing callables for loss_fn and optimizer_fn is sketched after the examples below.

TYPE: Union[str, Callable] DEFAULT: 'binary_cross_entropy_with_logits'

optimizer_fn

Optimizer to be used for training the wrapped model. Can be an optimizer class provided by torch.optim or one of the following: "adam", "adam_w", "sgd", "rmsprop", "lbfgs".

TYPE: Union[str, Callable] DEFAULT: 'sgd'

lr

Learning rate of the optimizer.

TYPE: float DEFAULT: 0.001

output_is_logit

Whether the module produces logits as output. If true, either softmax or sigmoid is applied to the outputs when predicting.

TYPE: bool DEFAULT: True

is_class_incremental

Whether the classifier should adapt to the appearance of previously unobserved classes by adding a unit to the output layer of the network. This works only if the last trainable layer is an nn.Linear layer. Note also that output activation functions cannot be adapted, meaning that a binary classifier with a sigmoid output cannot be altered to perform multi-class predictions.

TYPE: bool DEFAULT: False

is_feature_incremental

Whether the model should adapt to the appearance of previously unobserved features by adding units to the input layer of the network (see the sketch after the examples below).

TYPE: bool DEFAULT: False

device

Device to run the wrapped model on. Can be "cpu" or "cuda".

TYPE: str DEFAULT: 'cpu'

seed

Random seed to be used for training the wrapped model.

TYPE: int DEFAULT: 42

**kwargs

Parameters to be passed to the build_fn function aside from n_features.

DEFAULT: {}

Examples:

>>> from deep_river.classification import LogisticRegression
>>> from river import metrics, preprocessing, compose, datasets
>>> from torch import nn, manual_seed
>>> _ = manual_seed(42)
>>> model_pipeline = compose.Pipeline(
...     preprocessing.StandardScaler(),
...     LogisticRegression()
... )
>>> dataset = datasets.Phishing()
>>> metric = metrics.Accuracy()
>>> for x, y in dataset:
...     y_pred = model_pipeline.predict_one(x) # make a prediction
...     metric.update(y, y_pred) # update the metric
...     model_pipeline.learn_one(x, y) # update the model
>>> print(f"Accuracy: {metric.get():.2f}")
Accuracy: 0.56
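
As noted above, loss_fn and optimizer_fn also accept callables, and is_feature_incremental lets the input layer grow as new feature keys appear. The following is a minimal sketch, assuming the constructor accepts a torch.nn.functional callable and a torch.optim class as documented, and that the model follows river's Classifier API (learn_one, predict_proba_one); the feature values are made up for illustration:

>>> from deep_river.classification import LogisticRegression
>>> from torch import optim
>>> from torch.nn import functional as F
>>> model = LogisticRegression(
...     loss_fn=F.binary_cross_entropy_with_logits,  # callable instead of the string shortcut
...     optimizer_fn=optim.Adam,  # torch.optim class instead of 'adam'
...     lr=0.01,
...     is_feature_incremental=True,  # grow the input layer when new feature keys appear
... )
>>> for x, y in [({'x1': 1.0, 'x2': 0.5}, True),
...              ({'x1': 0.2, 'x2': 0.9, 'x3': 1.5}, False)]:  # 'x3' is new; an input unit is added
...     model.learn_one(x, y)
>>> proba = model.predict_proba_one({'x1': 1.0, 'x2': 0.5, 'x3': 0.0})  # sigmoid is applied, since output_is_logit=True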

MultiLayerPerceptron(n_width=5, n_layers=5, loss_fn='binary_cross_entropy_with_logits', optimizer_fn='sgd', lr=0.001, output_is_logit=True, is_class_incremental=False, is_feature_incremental=False, device='cpu', seed=42, **kwargs)

Bases: Classifier

This class implements a multi-layer perceptron model in PyTorch.

PARAMETER DESCRIPTION
n_width

Number of units in each hidden layer.

TYPE: int DEFAULT: 5

n_layers

Number of hidden layers.

TYPE: int DEFAULT: 5

loss_fn

Loss function to be used for training the wrapped model. Can be a loss function provided by torch.nn.functional or one of the following strings: 'mse', 'l1', 'cross_entropy', 'binary_cross_entropy_with_logits', 'binary_crossentropy', 'smooth_l1', 'kl_div'.

TYPE: Union[str, Callable] DEFAULT: 'binary_cross_entropy_with_logits'

optimizer_fn

Optimizer to be used for training the wrapped model. Can be an optimizer class provided by torch.optim or one of the following: "adam", "adam_w", "sgd", "rmsprop", "lbfgs".

TYPE: Union[str, Callable] DEFAULT: 'sgd'

lr

Learning rate of the optimizer.

TYPE: float DEFAULT: 0.001

output_is_logit

Whether the module produces logits as output. If true, either softmax or sigmoid is applied to the outputs when predicting.

TYPE: bool DEFAULT: True

is_class_incremental

Whether the classifier should adapt to the appearance of previously unobserved classes by adding a unit to the output layer of the network. This works only if the last trainable layer is an nn.Linear layer. Note also that output activation functions cannot be adapted, meaning that a binary classifier with a sigmoid output cannot be altered to perform multi-class predictions. A usage sketch for this flag follows the examples below.

TYPE: bool DEFAULT: False

is_feature_incremental

Whether the model should adapt to the appearance of previously unobserved features by adding units to the input layer of the network.

TYPE: bool DEFAULT: False

device

Device to run the wrapped model on. Can be "cpu" or "cuda".

TYPE: str DEFAULT: 'cpu'

seed

Random seed to be used for training the wrapped model.

TYPE: int DEFAULT: 42

**kwargs

Parameters to be passed to the build_fn function aside from n_features.

DEFAULT: {}

Examples:

>>> from deep_river.classification import MultiLayerPerceptron
>>> from river import metrics, preprocessing, compose, datasets
>>> from torch import nn, manual_seed
>>> _ = manual_seed(42)
>>> model_pipeline = compose.Pipeline(
...     preprocessing.StandardScaler(),
...     MultiLayerPerceptron(n_width=5, n_layers=5)
... )
>>> dataset = datasets.Phishing()
>>> metric = metrics.Accuracy()
>>> for x, y in dataset:
...     y_pred = model_pipeline.predict_one(x) # make a prediction
...     metric.update(y, y_pred) # update the metric
...     model_pipeline.learn_one(x, y) # update the model
>>> print(f"Accuracy: {metric.get():.2f}")
Accuracy: 0.44
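
As described under is_class_incremental, the classifier can grow its output layer when a previously unseen label arrives, provided the last trainable layer is an nn.Linear layer. A minimal sketch, assuming string labels and the 'cross_entropy' loss listed above; the feature values and labels are made up for illustration, not a benchmarked configuration:

>>> from deep_river.classification import MultiLayerPerceptron
>>> from torch import manual_seed
>>> _ = manual_seed(42)
>>> clf = MultiLayerPerceptron(
...     n_width=10,
...     n_layers=2,
...     loss_fn='cross_entropy',  # multi-class loss from the list above
...     is_class_incremental=True,  # add an output unit when an unseen label appears
... )
>>> for x, y in [({'x1': 1.0, 'x2': 0.5}, 'a'),
...              ({'x1': 0.2, 'x2': 0.9}, 'b'),  # 'b' is new; the output layer grows
...              ({'x1': 0.7, 'x2': 0.1}, 'c')]:  # 'c' is new; it grows again
...     clf.learn_one(x, y)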