PatchTSModel#
- class PatchTSModel(decoder_length: int, encoder_length: int, patch_len: int = 4, stride: int = 1, num_layers: int = 3, hidden_size: int = 128, feedforward_size: int = 256, nhead: int = 16, lr: float = 0.001, loss: Module | None = None, train_batch_size: int = 128, test_batch_size: int = 128, optimizer_params: dict | None = None, trainer_params: dict | None = None, train_dataloader_params: dict | None = None, test_dataloader_params: dict | None = None, val_dataloader_params: dict | None = None, split_params: dict | None = None)[source]#
Bases: DeepBaseModel
PatchTS model using PyTorch layers.
Note
This model requires the torch extension to be installed. Read more about this on the installation page.
Init PatchTS model.
- Parameters:
encoder_length (int) – encoder length
decoder_length (int) – decoder length
patch_len (int) – size of a patch
stride (int) – step between patches
num_layers (int) – number of layers
hidden_size (int) – size of the hidden state
feedforward_size (int) – size of the feedforward layers in the transformer
nhead (int) – number of transformer heads
lr (float) – learning rate
loss (torch.nn.Module | None) – loss function, MSELoss by default
train_batch_size (int) – batch size for training
test_batch_size (int) – batch size for testing
optimizer_params (dict | None) – parameters for the Adam optimizer (api reference torch.optim.Adam)
trainer_params (dict | None) – PyTorch Lightning trainer parameters (api reference pytorch_lightning.trainer.trainer.Trainer)
train_dataloader_params (dict | None) – parameters for the train dataloader, for example a sampler (api reference torch.utils.data.DataLoader)
test_dataloader_params (dict | None) – parameters for the test dataloader
val_dataloader_params (dict | None) – parameters for the validation dataloader
split_params (dict | None) – dictionary with parameters for torch.utils.data.random_split() for train-test splitting:
  - train_size (float) – value from 0 to 1, the fraction of samples to use for training
  - generator (Optional[torch.Generator]) – generator for reproducible train-test splitting
  - torch_dataset_size (Optional[int]) – number of samples in the dataset, for datasets that do not implement __len__
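A minimal usage sketch (illustrative, not part of the original reference): it assumes etna is installed with the torch extras and that df is a pandas dataframe in ETNA's long format with "timestamp", "segment" and "target" columns; the file name and all hyperparameter values below are placeholders.

import pandas as pd
from etna.datasets import TSDataset
from etna.models.nn import PatchTSModel
from etna.pipeline import Pipeline

# Build a TSDataset from a long-format dataframe (placeholder file name)
df = pd.read_csv("data.csv", parse_dates=["timestamp"])
ts = TSDataset(TSDataset.to_dataset(df), freq="D")

model = PatchTSModel(
    encoder_length=64,                 # context window fed to the encoder
    decoder_length=16,                 # points produced per forward pass
    patch_len=4,                       # each patch covers 4 timestamps
    stride=1,                          # patches shift by one step
    trainer_params={"max_epochs": 5},  # forwarded to the Lightning Trainer
    split_params={"train_size": 0.8},  # 80% of samples used for training
)
pipeline = Pipeline(model=model, horizon=16)
pipeline.fit(ts)
forecast = pipeline.forecast()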
Methods
- fit(ts) – Fit model.
- forecast(ts, prediction_size[, ...]) – Make predictions.
- get_model() – Get model.
- load(path) – Load an object.
- params_to_tune() – Get default grid for tuning hyperparameters.
- predict(ts, prediction_size[, return_components]) – Make predictions.
- raw_fit(torch_dataset) – Fit model on torch like Dataset.
- raw_predict(torch_dataset) – Make inference on torch like Dataset.
- save(path) – Save the object.
- set_params(**params) – Return new object instance with modified parameters.
- to_dict() – Collect all information about etna object in dict.
Attributes
This class stores its __init__ parameters as attributes.
- context_size – Context size of the model.
- fit(ts: TSDataset) DeepBaseModel [source]#
Fit model.
- Parameters:
ts (TSDataset) – TSDataset with features
- Returns:
Model after fit
- Return type:
DeepBaseModel
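A short sketch of fitting the model directly, outside of a Pipeline; ts is assumed to be a prepared TSDataset and the hyperparameter values are placeholders.

model = PatchTSModel(encoder_length=64, decoder_length=16)
model.fit(ts)  # returns the fitted model, so calls can be chained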
- forecast(ts: TSDataset, prediction_size: int, return_components: bool = False) TSDataset [source]#
Make predictions.
This method will make autoregressive predictions.
- Parameters:
ts (TSDataset) – dataset with features
prediction_size (int) – number of last timestamps to leave after making prediction; previous timestamps are used as context
return_components (bool) – if True, additionally returns forecast components
- Returns:
Dataset with predictions
- Return type:
TSDataset
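A hedged sketch of the raw forecasting interface, assuming a fitted model and a TSDataset ts; the horizon value is a placeholder. make_future extends the dataset with future timestamps while keeping model.context_size points of history as context.

horizon = 16
future_ts = ts.make_future(future_steps=horizon, tail_steps=model.context_size)
forecast_ts = model.forecast(future_ts, prediction_size=horizon)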
- classmethod load(path: Path) Self [source]#
Load an object.
- Parameters:
path (Path) – Path to load object from.
- Returns:
Loaded object.
- Return type:
Self
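A round-trip sketch with save and load; the archive name is a placeholder.

from pathlib import Path

model.save(Path("patchts_model.zip"))
restored = PatchTSModel.load(Path("patchts_model.zip"))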
- params_to_tune() Dict[str, BaseDistribution] [source]#
Get default grid for tuning hyperparameters.
This grid tunes the parameters num_layers, hidden_size, lr, and encoder_length. Other parameters are expected to be set by the user.
- Returns:
Grid to tune.
- Return type:
Dict[str, BaseDistribution]
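A short sketch of inspecting the default grid before handing it to a tuner:

grid = model.params_to_tune()
for name, distribution in grid.items():
    print(name, distribution)  # e.g. "num_layers" mapped to an integer distribution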
- predict(ts: TSDataset, prediction_size: int, return_components: bool = False) TSDataset [source]#
Make predictions.
This method will make predictions using true values instead of predicted on a previous step. It can be useful for making in-sample forecasts.
- Parameters:
ts (TSDataset) – dataset with features
prediction_size (int) – number of last timestamps to leave after making prediction; previous timestamps are used as context
return_components (bool) – if True, additionally returns prediction components
- Returns:
Dataset with predictions
- Return type:
TSDataset
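An in-sample sketch, assuming a fitted model; the prediction_size value is a placeholder and must leave enough earlier points in ts to serve as context.

in_sample_ts = model.predict(ts, prediction_size=30)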
- raw_fit(torch_dataset: Dataset) DeepBaseModel [source]#
Fit model on torch like Dataset.
- Parameters:
torch_dataset (Dataset) – Torch like dataset for model fit
- Returns:
Model after fit
- Return type:
DeepBaseModel
- raw_predict(torch_dataset: Dataset) Dict[Tuple[str, str], ndarray] [source]#
Make inference on torch like Dataset.
- Parameters:
torch_dataset (Dataset) – Torch like dataset for model inference
- Returns:
Dictionary with predictions
- Return type:
Dict[Tuple[str, str], ndarray]
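A low-level sketch, assuming torch_dataset already yields samples in the format the model builds internally from a TSDataset (this format is an assumption here; normally fit and forecast handle the conversion for you).

model.raw_fit(torch_dataset)
raw_predictions = model.raw_predict(torch_dataset)  # keyed by (segment, column) tuples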
- set_params(**params: dict) Self [source]#
Return new object instance with modified parameters.
The method also allows changing parameters of nested objects within the current object. For example, it is possible to change parameters of a model inside a Pipeline.
Nested parameters are expected to be in the <component_1>.<...>.<parameter> form, where components are separated by a dot.
- Parameters:
**params (dict) – Estimator parameters
- Returns:
New instance with changed parameters
- Return type:
Self
Examples
>>> from etna.pipeline import Pipeline
>>> from etna.models import NaiveModel
>>> from etna.transforms import AddConstTransform
>>> model = NaiveModel(lag=1)
>>> transforms = [AddConstTransform(in_column="target", value=1)]
>>> pipeline = Pipeline(model, transforms=transforms, horizon=3)
>>> pipeline.set_params(**{"model.lag": 3, "transforms.0.value": 2})
Pipeline(model = NaiveModel(lag = 3, ), transforms = [AddConstTransform(in_column = 'target', value = 2, inplace = True, out_column = None, )], horizon = 3, )