View the Jupyter notebook on GitHub.
Deep learning examples#
This notebook contains examples of using neural network models.
Table of contents
Loading dataset
Architecture
Testing models
Baseline
DeepAR
TFT
RNN
Deep State Model
N-BEATS Model
PatchTS Model
[1]:
!pip install "etna[torch]" -q
[2]:
import warnings
warnings.filterwarnings("ignore")
[3]:
import random
import numpy as np
import pandas as pd
import torch
from etna.analysis import plot_backtest
from etna.datasets.tsdataset import TSDataset
from etna.metrics import MAE
from etna.metrics import MAPE
from etna.metrics import SMAPE
from etna.models import SeasonalMovingAverageModel
from etna.pipeline import Pipeline
from etna.transforms import DateFlagsTransform
from etna.transforms import LagTransform
from etna.transforms import LinearTrendTransform
[4]:
def set_seed(seed: int = 42):
"""Set random seed for reproducibility."""
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
1. Loading dataset#
We are going to use a toy dataset. Let’s load it and take a look.
[5]:
original_df = pd.read_csv("data/example_dataset.csv")
original_df.head()
[5]:
  | timestamp | segment | target |
---|---|---|---|
0 | 2019-01-01 | segment_a | 170 |
1 | 2019-01-02 | segment_a | 243 |
2 | 2019-01-03 | segment_a | 267 |
3 | 2019-01-04 | segment_a | 287 |
4 | 2019-01-05 | segment_a | 279 |
Our library works with the special data structure TSDataset. Let’s create it as it was done in the “Get started” notebook.
[6]:
df = TSDataset.to_dataset(original_df)
ts = TSDataset(df, freq="D")
ts.head(5)
[6]:
segment | segment_a | segment_b | segment_c | segment_d |
---|---|---|---|---|
feature | target | target | target | target |
timestamp | ||||
2019-01-01 | 170 | 102 | 92 | 238 |
2019-01-02 | 243 | 123 | 107 | 358 |
2019-01-03 | 267 | 130 | 103 | 366 |
2019-01-04 | 287 | 138 | 103 | 385 |
2019-01-05 | 279 | 137 | 104 | 384 |
2. Architecture#
Our library has two types of models:
Models from PyTorch Forecasting
Native models.
First, let’s describe the pytorch-forecasting models, because they require special handling. There are two ways to use these models: the default one, and via PytorchForecastingDatasetBuilder for using extra features.
To include extra features we use the PytorchForecastingDatasetBuilder class.
Let’s take a closer look at it.
[7]:
from etna.models.nn.utils import PytorchForecastingDatasetBuilder
[8]:
?PytorchForecastingDatasetBuilder
We can see a pretty scary signature, but don’t panic, we will look at the most important parameters:
time_varying_known_reals — known real values that change over time (real regressors); for now it is necessary to add the “time_idx” variable to this list;
time_varying_unknown_reals — our real-valued target, set it to ["target"];
max_prediction_length — our forecasting horizon;
max_encoder_length — the length of past context to use;
static_categoricals — static categorical values; for example, if we use multiple segments, these can be segment characteristics, including the identifier “segment”;
time_varying_known_categoricals — known categorical values that change over time (categorical regressors);
target_normalizer — a class for normalizing the target across different segments.
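As a minimal sketch of how these parameters fit together (the encoder length and feature set here are illustrative, not taken from this notebook):

from pytorch_forecasting.data import GroupNormalizer

dataset_builder_example = PytorchForecastingDatasetBuilder(
    max_encoder_length=14,  # length of past context
    max_prediction_length=7,  # forecasting horizon
    time_varying_known_reals=["time_idx"],  # known real regressors
    time_varying_unknown_reals=["target"],  # the target itself
    static_categoricals=["segment"],  # per-series identifier
    target_normalizer=GroupNormalizer(groups=["segment"]),  # per-segment target normalization
)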
Our library currently supports these pytorch-forecasting models: DeepARModel and TFTModel (both are shown below).
As for the native neural network models, they are simpler to use, because they don’t require PytorchForecastingDatasetBuilder. We will see how to use them in the examples below.
3. Testing models#
In this section we will test our models on the example dataset.
[9]:
HORIZON = 7
metrics = [SMAPE(), MAPE(), MAE()]
3.1 Baseline#
For comparison, let’s train a simple model as a baseline.
[10]:
model_sma = SeasonalMovingAverageModel(window=5, seasonality=7)
linear_trend_transform = LinearTrendTransform(in_column="target")
pipeline_sma = Pipeline(model=model_sma, horizon=HORIZON, transforms=[linear_trend_transform])
[11]:
metrics_sma, forecast_sma, fold_info_sma = pipeline_sma.backtest(ts, metrics=metrics, n_folds=3, n_jobs=1)
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.1s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 2 out of 2 | elapsed: 0.2s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 0.3s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 0.3s finished
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.1s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 2 out of 2 | elapsed: 0.1s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 0.2s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 0.2s finished
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.0s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 2 out of 2 | elapsed: 0.1s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 0.1s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 0.1s finished
[12]:
metrics_sma
[12]:
  | segment | SMAPE | MAPE | MAE | fold_number |
---|---|---|---|---|---|
3 | segment_a | 6.343943 | 6.124296 | 33.196532 | 0 |
3 | segment_a | 5.346946 | 5.192455 | 27.938101 | 1 |
3 | segment_a | 7.510347 | 7.189999 | 40.028565 | 2 |
2 | segment_b | 7.178822 | 6.920176 | 17.818102 | 0 |
2 | segment_b | 5.672504 | 5.554555 | 13.719200 | 1 |
2 | segment_b | 3.327846 | 3.359712 | 7.680919 | 2 |
0 | segment_c | 6.430429 | 6.200580 | 10.877718 | 0 |
0 | segment_c | 5.947090 | 5.727531 | 10.701336 | 1 |
0 | segment_c | 6.186545 | 5.943679 | 11.359563 | 2 |
1 | segment_d | 4.707899 | 4.644170 | 39.918646 | 0 |
1 | segment_d | 5.403426 | 5.600978 | 43.047332 | 1 |
1 | segment_d | 2.505279 | 2.543719 | 19.347565 | 2 |
[13]:
score = metrics_sma["SMAPE"].mean()
print(f"Average SMAPE for Seasonal MA: {score:.3f}")
Average SMAPE for Seasonal MA: 5.547
[14]:
plot_backtest(forecast_sma, ts, history_len=20)
3.2 DeepAR#
[15]:
from etna.models.nn import DeepARModel
Before training, let’s fix the random seeds for reproducibility.
[16]:
set_seed()
Default way#
[17]:
model_deepar = DeepARModel(
encoder_length=HORIZON,
decoder_length=HORIZON,
trainer_params=dict(max_epochs=150, gpus=0, gradient_clip_val=0.1),
lr=0.01,
train_batch_size=64,
)
metrics = [SMAPE(), MAPE(), MAE()]
pipeline_deepar = Pipeline(model=model_deepar, horizon=HORIZON)
[18]:
metrics_deepar, forecast_deepar, fold_info_deepar = pipeline_deepar.backtest(ts, metrics=metrics, n_folds=3, n_jobs=1)
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
/Users/d.a.binin/Documents/tasks/etna-github/.venv/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/accelerator_connector.py:478: LightningDeprecationWarning: Setting `Trainer(gpus=0)` is deprecated in v1.7 and will be removed in v2.0. Please use `Trainer(accelerator='gpu', devices=0)` instead.
rank_zero_deprecation(
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
| Name | Type | Params
------------------------------------------------------------------
0 | loss | NormalDistributionLoss | 0
1 | logging_metrics | ModuleList | 0
2 | embeddings | MultiEmbedding | 0
3 | rnn | LSTM | 1.6 K
4 | distribution_projector | Linear | 22
------------------------------------------------------------------
1.6 K Trainable params
0 Non-trainable params
1.6 K Total params
0.006 Total estimated model params size (MB)
`Trainer.fit` stopped: `max_epochs=150` reached.
[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 2.4min remaining: 0.0s
/Users/d.a.binin/Documents/tasks/etna-github/.venv/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/accelerator_connector.py:478: LightningDeprecationWarning: Setting `Trainer(gpus=0)` is deprecated in v1.7 and will be removed in v2.0. Please use `Trainer(accelerator='gpu', devices=0)` instead.
rank_zero_deprecation(
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
| Name | Type | Params
------------------------------------------------------------------
0 | loss | NormalDistributionLoss | 0
1 | logging_metrics | ModuleList | 0
2 | embeddings | MultiEmbedding | 0
3 | rnn | LSTM | 1.6 K
4 | distribution_projector | Linear | 22
------------------------------------------------------------------
1.6 K Trainable params
0 Non-trainable params
1.6 K Total params
0.006 Total estimated model params size (MB)
`Trainer.fit` stopped: `max_epochs=150` reached.
[Parallel(n_jobs=1)]: Done 2 out of 2 | elapsed: 4.9min remaining: 0.0s
/Users/d.a.binin/Documents/tasks/etna-github/.venv/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/accelerator_connector.py:478: LightningDeprecationWarning: Setting `Trainer(gpus=0)` is deprecated in v1.7 and will be removed in v2.0. Please use `Trainer(accelerator='gpu', devices=0)` instead.
rank_zero_deprecation(
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
| Name | Type | Params
------------------------------------------------------------------
0 | loss | NormalDistributionLoss | 0
1 | logging_metrics | ModuleList | 0
2 | embeddings | MultiEmbedding | 0
3 | rnn | LSTM | 1.6 K
4 | distribution_projector | Linear | 22
------------------------------------------------------------------
1.6 K Trainable params
0 Non-trainable params
1.6 K Total params
0.006 Total estimated model params size (MB)
`Trainer.fit` stopped: `max_epochs=150` reached.
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 7.3min remaining: 0.0s
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 7.3min finished
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 2.8s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 2 out of 2 | elapsed: 6.1s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 9.1s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 9.1s finished
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.0s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 2 out of 2 | elapsed: 0.1s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 0.1s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 0.1s finished
[19]:
metrics_deepar
[19]:
  | segment | SMAPE | MAPE | MAE | fold_number |
---|---|---|---|---|---|
3 | segment_a | 7.436000 | 7.181761 | 38.649353 | 0 |
3 | segment_a | 4.475632 | 4.612645 | 22.656799 | 1 |
3 | segment_a | 10.447339 | 9.920780 | 54.777339 | 2 |
2 | segment_b | 7.934781 | 7.638997 | 19.769865 | 0 |
2 | segment_b | 5.531223 | 5.611013 | 13.110147 | 1 |
2 | segment_b | 4.139600 | 4.343228 | 9.261771 | 2 |
0 | segment_c | 3.846271 | 3.858167 | 6.489190 | 0 |
0 | segment_c | 5.991873 | 5.917594 | 10.582672 | 1 |
0 | segment_c | 6.220782 | 6.153251 | 11.090365 | 2 |
1 | segment_d | 7.151575 | 7.089758 | 60.376517 | 0 |
1 | segment_d | 4.633763 | 4.750410 | 37.190622 | 1 |
1 | segment_d | 3.839424 | 3.753969 | 32.364301 | 2 |
To summarize the results, we take the mean of the SMAPE metric because it is scale-tolerant.
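As a reminder, SMAPE is computed (up to implementation details) as

$$\text{SMAPE} = \frac{100\%}{n}\sum_{t=1}^{n}\frac{2\,|y_t - \hat{y}_t|}{|y_t| + |\hat{y}_t|},$$

which keeps the metric comparable across segments of different scales.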
[20]:
score = metrics_deepar["SMAPE"].mean()
print(f"Average SMAPE for DeepAR: {score:.3f}")
Average SMAPE for DeepAR: 5.971
Dataset Builder: creating a dataset for DeepAR with extra features#
[21]:
from pytorch_forecasting.data import GroupNormalizer
set_seed()
transform_date = DateFlagsTransform(day_number_in_week=True, day_number_in_month=False, out_column="dateflag")
num_lags = 10
transform_lag = LagTransform(
in_column="target",
lags=[HORIZON + i for i in range(num_lags)],
out_column="target_lag",
)
lag_columns = [f"target_lag_{HORIZON+i}" for i in range(num_lags)]
dataset_builder_deepar = PytorchForecastingDatasetBuilder(
max_encoder_length=HORIZON,
max_prediction_length=HORIZON,
time_varying_known_reals=["time_idx"] + lag_columns,
time_varying_unknown_reals=["target"],
time_varying_known_categoricals=["dateflag_day_number_in_week"],
target_normalizer=GroupNormalizer(groups=["segment"]),
)
Now we are going to run the backtest.
[22]:
model_deepar = DeepARModel(
dataset_builder=dataset_builder_deepar,
trainer_params=dict(max_epochs=150, gpus=0, gradient_clip_val=0.1),
lr=0.01,
train_batch_size=64,
)
pipeline_deepar = Pipeline(
model=model_deepar,
horizon=HORIZON,
transforms=[transform_lag, transform_date],
)
[23]:
metrics_deepar, forecast_deepar, fold_info_deepar = pipeline_deepar.backtest(ts, metrics=metrics, n_folds=3, n_jobs=1)
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
/Users/d.a.binin/Documents/tasks/etna-github/.venv/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/accelerator_connector.py:478: LightningDeprecationWarning: Setting `Trainer(gpus=0)` is deprecated in v1.7 and will be removed in v2.0. Please use `Trainer(accelerator='gpu', devices=0)` instead.
rank_zero_deprecation(
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
| Name | Type | Params
------------------------------------------------------------------
0 | loss | NormalDistributionLoss | 0
1 | logging_metrics | ModuleList | 0
2 | embeddings | MultiEmbedding | 35
3 | rnn | LSTM | 2.2 K
4 | distribution_projector | Linear | 22
------------------------------------------------------------------
2.3 K Trainable params
0 Non-trainable params
2.3 K Total params
0.009 Total estimated model params size (MB)
`Trainer.fit` stopped: `max_epochs=150` reached.
[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 2.9min remaining: 0.0s
/Users/d.a.binin/Documents/tasks/etna-github/.venv/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/accelerator_connector.py:478: LightningDeprecationWarning: Setting `Trainer(gpus=0)` is deprecated in v1.7 and will be removed in v2.0. Please use `Trainer(accelerator='gpu', devices=0)` instead.
rank_zero_deprecation(
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
| Name | Type | Params
------------------------------------------------------------------
0 | loss | NormalDistributionLoss | 0
1 | logging_metrics | ModuleList | 0
2 | embeddings | MultiEmbedding | 35
3 | rnn | LSTM | 2.2 K
4 | distribution_projector | Linear | 22
------------------------------------------------------------------
2.3 K Trainable params
0 Non-trainable params
2.3 K Total params
0.009 Total estimated model params size (MB)
`Trainer.fit` stopped: `max_epochs=150` reached.
[Parallel(n_jobs=1)]: Done 2 out of 2 | elapsed: 5.5min remaining: 0.0s
/Users/d.a.binin/Documents/tasks/etna-github/.venv/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/accelerator_connector.py:478: LightningDeprecationWarning: Setting `Trainer(gpus=0)` is deprecated in v1.7 and will be removed in v2.0. Please use `Trainer(accelerator='gpu', devices=0)` instead.
rank_zero_deprecation(
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
| Name | Type | Params
------------------------------------------------------------------
0 | loss | NormalDistributionLoss | 0
1 | logging_metrics | ModuleList | 0
2 | embeddings | MultiEmbedding | 35
3 | rnn | LSTM | 2.2 K
4 | distribution_projector | Linear | 22
------------------------------------------------------------------
2.3 K Trainable params
0 Non-trainable params
2.3 K Total params
0.009 Total estimated model params size (MB)
`Trainer.fit` stopped: `max_epochs=150` reached.
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 8.4min remaining: 0.0s
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 8.4min finished
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 2.8s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 2 out of 2 | elapsed: 5.7s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 8.6s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 8.6s finished
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.0s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 2 out of 2 | elapsed: 0.1s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 0.2s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 0.2s finished
Let’s compare results across different segments.
[24]:
metrics_deepar
[24]:
  | segment | SMAPE | MAPE | MAE | fold_number |
---|---|---|---|---|---|
3 | segment_a | 4.256332 | 4.134898 | 22.178040 | 0 |
3 | segment_a | 3.425756 | 3.413402 | 17.290174 | 1 |
3 | segment_a | 4.173554 | 4.065541 | 22.463553 | 2 |
2 | segment_b | 5.513409 | 5.351334 | 13.707116 | 0 |
2 | segment_b | 3.552119 | 3.501611 | 8.782763 | 1 |
2 | segment_b | 2.885803 | 2.932560 | 6.565127 | 2 |
0 | segment_c | 4.073923 | 4.029869 | 6.958703 | 0 |
0 | segment_c | 4.915982 | 4.773671 | 8.839678 | 1 |
0 | segment_c | 3.596198 | 3.579249 | 6.488384 | 2 |
1 | segment_d | 5.151372 | 5.005077 | 43.551644 | 0 |
1 | segment_d | 4.609116 | 4.768092 | 36.859846 | 1 |
1 | segment_d | 2.910666 | 2.884649 | 24.493966 | 2 |
To summarize the results, we take the mean of the SMAPE metric because it is scale-tolerant.
[25]:
score = metrics_deepar["SMAPE"].mean()
print(f"Average SMAPE for DeepAR: {score:.3f}")
Average SMAPE for DeepAR: 4.089
Let’s visualize the results.
[26]:
plot_backtest(forecast_deepar, ts, history_len=20)
3.3 TFT#
Let’s move to the next model.
[27]:
from etna.models.nn import TFTModel
[28]:
set_seed()
Default way#
[29]:
model_tft = TFTModel(
encoder_length=HORIZON,
decoder_length=HORIZON,
trainer_params=dict(max_epochs=200, gpus=0, gradient_clip_val=0.1),
lr=0.01,
train_batch_size=64,
)
pipeline_tft = Pipeline(
model=model_tft,
horizon=HORIZON,
)
[30]:
metrics_tft, forecast_tft, fold_info_tft = pipeline_tft.backtest(ts, metrics=metrics, n_folds=3, n_jobs=1)
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
/Users/d.a.binin/Documents/tasks/etna-github/.venv/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/accelerator_connector.py:478: LightningDeprecationWarning: Setting `Trainer(gpus=0)` is deprecated in v1.7 and will be removed in v2.0. Please use `Trainer(accelerator='gpu', devices=0)` instead.
rank_zero_deprecation(
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
| Name | Type | Params
----------------------------------------------------------------------------------------
0 | loss | QuantileLoss | 0
1 | logging_metrics | ModuleList | 0
2 | input_embeddings | MultiEmbedding | 0
3 | prescalers | ModuleDict | 96
4 | static_variable_selection | VariableSelectionNetwork | 1.7 K
5 | encoder_variable_selection | VariableSelectionNetwork | 1.8 K
6 | decoder_variable_selection | VariableSelectionNetwork | 1.2 K
7 | static_context_variable_selection | GatedResidualNetwork | 1.1 K
8 | static_context_initial_hidden_lstm | GatedResidualNetwork | 1.1 K
9 | static_context_initial_cell_lstm | GatedResidualNetwork | 1.1 K
10 | static_context_enrichment | GatedResidualNetwork | 1.1 K
11 | lstm_encoder | LSTM | 2.2 K
12 | lstm_decoder | LSTM | 2.2 K
13 | post_lstm_gate_encoder | GatedLinearUnit | 544
14 | post_lstm_add_norm_encoder | AddNorm | 32
15 | static_enrichment | GatedResidualNetwork | 1.4 K
16 | multihead_attn | InterpretableMultiHeadAttention | 676
17 | post_attn_gate_norm | GateAddNorm | 576
18 | pos_wise_ff | GatedResidualNetwork | 1.1 K
19 | pre_output_gate_norm | GateAddNorm | 576
20 | output_layer | Linear | 119
----------------------------------------------------------------------------------------
18.4 K Trainable params
0 Non-trainable params
18.4 K Total params
0.074 Total estimated model params size (MB)
`Trainer.fit` stopped: `max_epochs=200` reached.
[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 4.5min remaining: 0.0s
/Users/d.a.binin/Documents/tasks/etna-github/.venv/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/accelerator_connector.py:478: LightningDeprecationWarning: Setting `Trainer(gpus=0)` is deprecated in v1.7 and will be removed in v2.0. Please use `Trainer(accelerator='gpu', devices=0)` instead.
rank_zero_deprecation(
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
| Name | Type | Params
----------------------------------------------------------------------------------------
0 | loss | QuantileLoss | 0
1 | logging_metrics | ModuleList | 0
2 | input_embeddings | MultiEmbedding | 0
3 | prescalers | ModuleDict | 96
4 | static_variable_selection | VariableSelectionNetwork | 1.7 K
5 | encoder_variable_selection | VariableSelectionNetwork | 1.8 K
6 | decoder_variable_selection | VariableSelectionNetwork | 1.2 K
7 | static_context_variable_selection | GatedResidualNetwork | 1.1 K
8 | static_context_initial_hidden_lstm | GatedResidualNetwork | 1.1 K
9 | static_context_initial_cell_lstm | GatedResidualNetwork | 1.1 K
10 | static_context_enrichment | GatedResidualNetwork | 1.1 K
11 | lstm_encoder | LSTM | 2.2 K
12 | lstm_decoder | LSTM | 2.2 K
13 | post_lstm_gate_encoder | GatedLinearUnit | 544
14 | post_lstm_add_norm_encoder | AddNorm | 32
15 | static_enrichment | GatedResidualNetwork | 1.4 K
16 | multihead_attn | InterpretableMultiHeadAttention | 676
17 | post_attn_gate_norm | GateAddNorm | 576
18 | pos_wise_ff | GatedResidualNetwork | 1.1 K
19 | pre_output_gate_norm | GateAddNorm | 576
20 | output_layer | Linear | 119
----------------------------------------------------------------------------------------
18.4 K Trainable params
0 Non-trainable params
18.4 K Total params
0.074 Total estimated model params size (MB)
`Trainer.fit` stopped: `max_epochs=200` reached.
[Parallel(n_jobs=1)]: Done 2 out of 2 | elapsed: 9.3min remaining: 0.0s
/Users/d.a.binin/Documents/tasks/etna-github/.venv/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/accelerator_connector.py:478: LightningDeprecationWarning: Setting `Trainer(gpus=0)` is deprecated in v1.7 and will be removed in v2.0. Please use `Trainer(accelerator='gpu', devices=0)` instead.
rank_zero_deprecation(
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
| Name | Type | Params
----------------------------------------------------------------------------------------
0 | loss | QuantileLoss | 0
1 | logging_metrics | ModuleList | 0
2 | input_embeddings | MultiEmbedding | 0
3 | prescalers | ModuleDict | 96
4 | static_variable_selection | VariableSelectionNetwork | 1.7 K
5 | encoder_variable_selection | VariableSelectionNetwork | 1.8 K
6 | decoder_variable_selection | VariableSelectionNetwork | 1.2 K
7 | static_context_variable_selection | GatedResidualNetwork | 1.1 K
8 | static_context_initial_hidden_lstm | GatedResidualNetwork | 1.1 K
9 | static_context_initial_cell_lstm | GatedResidualNetwork | 1.1 K
10 | static_context_enrichment | GatedResidualNetwork | 1.1 K
11 | lstm_encoder | LSTM | 2.2 K
12 | lstm_decoder | LSTM | 2.2 K
13 | post_lstm_gate_encoder | GatedLinearUnit | 544
14 | post_lstm_add_norm_encoder | AddNorm | 32
15 | static_enrichment | GatedResidualNetwork | 1.4 K
16 | multihead_attn | InterpretableMultiHeadAttention | 676
17 | post_attn_gate_norm | GateAddNorm | 576
18 | pos_wise_ff | GatedResidualNetwork | 1.1 K
19 | pre_output_gate_norm | GateAddNorm | 576
20 | output_layer | Linear | 119
----------------------------------------------------------------------------------------
18.4 K Trainable params
0 Non-trainable params
18.4 K Total params
0.074 Total estimated model params size (MB)
`Trainer.fit` stopped: `max_epochs=200` reached.
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 14.0min remaining: 0.0s
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 14.0min finished
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 2.6s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 2 out of 2 | elapsed: 5.2s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 8.1s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 8.1s finished
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.0s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 2 out of 2 | elapsed: 0.1s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 0.1s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 0.1s finished
[31]:
metrics_tft
[31]:
  | segment | SMAPE | MAPE | MAE | fold_number |
---|---|---|---|---|---|
3 | segment_a | 40.039759 | 32.982658 | 179.232117 | 0 |
3 | segment_a | 39.083006 | 32.326048 | 174.062077 | 1 |
3 | segment_a | 7.144988 | 7.064285 | 37.769597 | 2 |
2 | segment_b | 33.445114 | 41.429850 | 98.910727 | 0 |
2 | segment_b | 35.803055 | 44.765669 | 105.080654 | 1 |
2 | segment_b | 10.085875 | 11.185378 | 23.583431 | 2 |
0 | segment_c | 68.193480 | 104.467792 | 177.625017 | 0 |
0 | segment_c | 64.955858 | 97.350015 | 171.223432 | 1 |
0 | segment_c | 8.823127 | 8.804263 | 15.938734 | 2 |
1 | segment_d | 82.632731 | 58.126766 | 507.660675 | 0 |
1 | segment_d | 77.984466 | 55.828027 | 458.204899 | 1 |
1 | segment_d | 24.253525 | 21.291908 | 189.809352 | 2 |
[32]:
score = metrics_tft["SMAPE"].mean()
print(f"Average SMAPE for TFT: {score:.3f}")
Average SMAPE for TFT: 41.037
Dataset Builder#
[33]:
set_seed()
transform_date = DateFlagsTransform(day_number_in_week=True, day_number_in_month=False, out_column="dateflag")
num_lags = 10
transform_lag = LagTransform(
in_column="target",
lags=[HORIZON + i for i in range(num_lags)],
out_column="target_lag",
)
lag_columns = [f"target_lag_{HORIZON+i}" for i in range(num_lags)]
dataset_builder_tft = PytorchForecastingDatasetBuilder(
max_encoder_length=HORIZON,
max_prediction_length=HORIZON,
time_varying_known_reals=["time_idx"],
time_varying_unknown_reals=["target"],
time_varying_known_categoricals=["dateflag_day_number_in_week"],
static_categoricals=["segment"],
target_normalizer=GroupNormalizer(groups=["segment"]),
)
[34]:
model_tft = TFTModel(
dataset_builder=dataset_builder_tft,
trainer_params=dict(max_epochs=200, gpus=0, gradient_clip_val=0.1),
lr=0.01,
train_batch_size=64,
)
pipeline_tft = Pipeline(
model=model_tft,
horizon=HORIZON,
transforms=[transform_lag, transform_date],
)
[35]:
metrics_tft, forecast_tft, fold_info_tft = pipeline_tft.backtest(ts, metrics=metrics, n_folds=3, n_jobs=1)
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
/Users/d.a.binin/Documents/tasks/etna-github/.venv/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/accelerator_connector.py:478: LightningDeprecationWarning: Setting `Trainer(gpus=0)` is deprecated in v1.7 and will be removed in v2.0. Please use `Trainer(accelerator='gpu', devices=0)` instead.
rank_zero_deprecation(
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
| Name | Type | Params
----------------------------------------------------------------------------------------
0 | loss | QuantileLoss | 0
1 | logging_metrics | ModuleList | 0
2 | input_embeddings | MultiEmbedding | 47
3 | prescalers | ModuleDict | 96
4 | static_variable_selection | VariableSelectionNetwork | 1.8 K
5 | encoder_variable_selection | VariableSelectionNetwork | 1.9 K
6 | decoder_variable_selection | VariableSelectionNetwork | 1.3 K
7 | static_context_variable_selection | GatedResidualNetwork | 1.1 K
8 | static_context_initial_hidden_lstm | GatedResidualNetwork | 1.1 K
9 | static_context_initial_cell_lstm | GatedResidualNetwork | 1.1 K
10 | static_context_enrichment | GatedResidualNetwork | 1.1 K
11 | lstm_encoder | LSTM | 2.2 K
12 | lstm_decoder | LSTM | 2.2 K
13 | post_lstm_gate_encoder | GatedLinearUnit | 544
14 | post_lstm_add_norm_encoder | AddNorm | 32
15 | static_enrichment | GatedResidualNetwork | 1.4 K
16 | multihead_attn | InterpretableMultiHeadAttention | 676
17 | post_attn_gate_norm | GateAddNorm | 576
18 | pos_wise_ff | GatedResidualNetwork | 1.1 K
19 | pre_output_gate_norm | GateAddNorm | 576
20 | output_layer | Linear | 119
----------------------------------------------------------------------------------------
18.9 K Trainable params
0 Non-trainable params
18.9 K Total params
0.075 Total estimated model params size (MB)
`Trainer.fit` stopped: `max_epochs=200` reached.
[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 5.0min remaining: 0.0s
/Users/d.a.binin/Documents/tasks/etna-github/.venv/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/accelerator_connector.py:478: LightningDeprecationWarning: Setting `Trainer(gpus=0)` is deprecated in v1.7 and will be removed in v2.0. Please use `Trainer(accelerator='gpu', devices=0)` instead.
rank_zero_deprecation(
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
| Name | Type | Params
----------------------------------------------------------------------------------------
0 | loss | QuantileLoss | 0
1 | logging_metrics | ModuleList | 0
2 | input_embeddings | MultiEmbedding | 47
3 | prescalers | ModuleDict | 96
4 | static_variable_selection | VariableSelectionNetwork | 1.8 K
5 | encoder_variable_selection | VariableSelectionNetwork | 1.9 K
6 | decoder_variable_selection | VariableSelectionNetwork | 1.3 K
7 | static_context_variable_selection | GatedResidualNetwork | 1.1 K
8 | static_context_initial_hidden_lstm | GatedResidualNetwork | 1.1 K
9 | static_context_initial_cell_lstm | GatedResidualNetwork | 1.1 K
10 | static_context_enrichment | GatedResidualNetwork | 1.1 K
11 | lstm_encoder | LSTM | 2.2 K
12 | lstm_decoder | LSTM | 2.2 K
13 | post_lstm_gate_encoder | GatedLinearUnit | 544
14 | post_lstm_add_norm_encoder | AddNorm | 32
15 | static_enrichment | GatedResidualNetwork | 1.4 K
16 | multihead_attn | InterpretableMultiHeadAttention | 676
17 | post_attn_gate_norm | GateAddNorm | 576
18 | pos_wise_ff | GatedResidualNetwork | 1.1 K
19 | pre_output_gate_norm | GateAddNorm | 576
20 | output_layer | Linear | 119
----------------------------------------------------------------------------------------
18.9 K Trainable params
0 Non-trainable params
18.9 K Total params
0.075 Total estimated model params size (MB)
`Trainer.fit` stopped: `max_epochs=200` reached.
[Parallel(n_jobs=1)]: Done 2 out of 2 | elapsed: 10.2min remaining: 0.0s
/Users/d.a.binin/Documents/tasks/etna-github/.venv/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/accelerator_connector.py:478: LightningDeprecationWarning: Setting `Trainer(gpus=0)` is deprecated in v1.7 and will be removed in v2.0. Please use `Trainer(accelerator='gpu', devices=0)` instead.
rank_zero_deprecation(
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
| Name | Type | Params
----------------------------------------------------------------------------------------
0 | loss | QuantileLoss | 0
1 | logging_metrics | ModuleList | 0
2 | input_embeddings | MultiEmbedding | 47
3 | prescalers | ModuleDict | 96
4 | static_variable_selection | VariableSelectionNetwork | 1.8 K
5 | encoder_variable_selection | VariableSelectionNetwork | 1.9 K
6 | decoder_variable_selection | VariableSelectionNetwork | 1.3 K
7 | static_context_variable_selection | GatedResidualNetwork | 1.1 K
8 | static_context_initial_hidden_lstm | GatedResidualNetwork | 1.1 K
9 | static_context_initial_cell_lstm | GatedResidualNetwork | 1.1 K
10 | static_context_enrichment | GatedResidualNetwork | 1.1 K
11 | lstm_encoder | LSTM | 2.2 K
12 | lstm_decoder | LSTM | 2.2 K
13 | post_lstm_gate_encoder | GatedLinearUnit | 544
14 | post_lstm_add_norm_encoder | AddNorm | 32
15 | static_enrichment | GatedResidualNetwork | 1.4 K
16 | multihead_attn | InterpretableMultiHeadAttention | 676
17 | post_attn_gate_norm | GateAddNorm | 576
18 | pos_wise_ff | GatedResidualNetwork | 1.1 K
19 | pre_output_gate_norm | GateAddNorm | 576
20 | output_layer | Linear | 119
----------------------------------------------------------------------------------------
18.9 K Trainable params
0 Non-trainable params
18.9 K Total params
0.075 Total estimated model params size (MB)
`Trainer.fit` stopped: `max_epochs=200` reached.
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 15.3min remaining: 0.0s
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 15.3min finished
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 2.5s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 2 out of 2 | elapsed: 5.0s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 7.5s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 7.5s finished
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.0s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 2 out of 2 | elapsed: 0.1s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 0.1s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 0.1s finished
[36]:
metrics_tft
[36]:
  | segment | SMAPE | MAPE | MAE | fold_number |
---|---|---|---|---|---|
3 | segment_a | 3.793087 | 3.677836 | 19.948386 | 0 |
3 | segment_a | 7.637957 | 7.524893 | 39.495466 | 1 |
3 | segment_a | 4.114276 | 3.988228 | 22.739297 | 2 |
2 | segment_b | 6.176802 | 5.891657 | 15.709222 | 0 |
2 | segment_b | 5.770399 | 5.818135 | 14.060697 | 1 |
2 | segment_b | 4.449299 | 4.579005 | 10.232365 | 2 |
0 | segment_c | 4.990803 | 4.917120 | 8.564157 | 0 |
0 | segment_c | 5.137094 | 4.971328 | 9.320369 | 1 |
0 | segment_c | 7.509126 | 7.280333 | 13.691023 | 2 |
1 | segment_d | 9.761393 | 9.563648 | 82.511100 | 0 |
1 | segment_d | 5.638815 | 5.902947 | 47.316467 | 1 |
1 | segment_d | 5.462926 | 5.019810 | 43.853978 | 2 |
[37]:
score = metrics_tft["SMAPE"].mean()
print(f"Average SMAPE for TFT: {score:.3f}")
Average SMAPE for TFT: 5.870
[38]:
plot_backtest(forecast_tft, ts, history_len=20)
3.4 RNN#
We’ll use an RNN model based on an LSTM cell.
[39]:
from etna.models.nn import RNNModel
from etna.transforms import StandardScalerTransform
[40]:
model_rnn = RNNModel(
decoder_length=HORIZON,
encoder_length=2 * HORIZON,
    input_size=11,  # target plus the 10 lag features created by transform_lag
trainer_params=dict(max_epochs=5),
lr=1e-3,
)
pipeline_rnn = Pipeline(
model=model_rnn,
horizon=HORIZON,
transforms=[StandardScalerTransform(in_column="target"), transform_lag],
)
[41]:
metrics_rnn, forecast_rnn, fold_info_rnn = pipeline_rnn.backtest(ts, metrics=metrics, n_folds=3, n_jobs=1)
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
| Name | Type | Params
---------------------------------------
0 | loss | MSELoss | 0
1 | rnn | LSTM | 4.0 K
2 | projection | Linear | 17
---------------------------------------
4.0 K Trainable params
0 Non-trainable params
4.0 K Total params
0.016 Total estimated model params size (MB)
`Trainer.fit` stopped: `max_epochs=5` reached.
[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 3.8s remaining: 0.0s
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
| Name | Type | Params
---------------------------------------
0 | loss | MSELoss | 0
1 | rnn | LSTM | 4.0 K
2 | projection | Linear | 17
---------------------------------------
4.0 K Trainable params
0 Non-trainable params
4.0 K Total params
0.016 Total estimated model params size (MB)
`Trainer.fit` stopped: `max_epochs=5` reached.
[Parallel(n_jobs=1)]: Done 2 out of 2 | elapsed: 7.5s remaining: 0.0s
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
| Name | Type | Params
---------------------------------------
0 | loss | MSELoss | 0
1 | rnn | LSTM | 4.0 K
2 | projection | Linear | 17
---------------------------------------
4.0 K Trainable params
0 Non-trainable params
4.0 K Total params
0.016 Total estimated model params size (MB)
`Trainer.fit` stopped: `max_epochs=5` reached.
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 11.5s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 11.5s finished
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.1s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 2 out of 2 | elapsed: 0.2s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 0.2s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 0.2s finished
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.0s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 2 out of 2 | elapsed: 0.1s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 0.1s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 0.1s finished
[42]:
score = metrics_rnn["SMAPE"].mean()
print(f"Average SMAPE for LSTM: {score:.3f}")
Average SMAPE for LSTM: 5.643
[43]:
plot_backtest(forecast_rnn, ts, history_len=20)
3.5 Deep State Model#
Deep State Model works well with multiple similar time series: it infers shared patterns from them.
We have to determine the type of seasonality in the data (based on its granularity); the SeasonalitySSM class is responsible for this. In this example we have daily data, so we use day-of-week (7 seasons) and day-of-month (31 seasons) seasonality models. We also add a trend component using the LevelTrendSSM class. In addition, the model uses time-based features like day-of-week and day-of-month, and a time-independent feature representing the segment of the time series.
[44]:
from etna.models.nn import DeepStateModel
from etna.models.nn.deepstate import CompositeSSM
from etna.models.nn.deepstate import LevelTrendSSM
from etna.models.nn.deepstate import SeasonalitySSM
from etna.transforms import DateFlagsTransform
from etna.transforms import SegmentEncoderTransform
from etna.transforms import StandardScalerTransform
[45]:
transforms = [
SegmentEncoderTransform(),
StandardScalerTransform(in_column="target"),
DateFlagsTransform(
day_number_in_week=True,
day_number_in_month=True,
week_number_in_month=False,
week_number_in_year=False,
month_number_in_year=False,
year_number=False,
is_weekend=False,
out_column="df",
),
]
[46]:
monthly_smm = SeasonalitySSM(num_seasons=31, timestamp_transform=lambda x: x.day - 1)  # day-of-month season index: 0..30
weekly_smm = SeasonalitySSM(num_seasons=7, timestamp_transform=lambda x: x.weekday())  # day-of-week season index: 0..6
[47]:
model_dsm = DeepStateModel(
ssm=CompositeSSM(seasonal_ssms=[weekly_smm, monthly_smm], nonseasonal_ssm=LevelTrendSSM()),
decoder_length=HORIZON,
encoder_length=2 * HORIZON,
    input_size=3,  # segment code + day-of-week flag + day-of-month flag
trainer_params=dict(max_epochs=5),
lr=1e-3,
)
pipeline_dsm = Pipeline(
model=model_dsm,
horizon=HORIZON,
transforms=transforms,
)
[48]:
metrics_dsm, forecast_dsm, fold_info_dsm = pipeline_dsm.backtest(ts, metrics=metrics, n_folds=3, n_jobs=1)
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
| Name | Type | Params
------------------------------------------
0 | RNN | LSTM | 7.2 K
1 | projectors | ModuleDict | 5.0 K
------------------------------------------
12.2 K Trainable params
0 Non-trainable params
12.2 K Total params
0.049 Total estimated model params size (MB)
`Trainer.fit` stopped: `max_epochs=5` reached.
[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 13.8s remaining: 0.0s
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
| Name | Type | Params
------------------------------------------
0 | RNN | LSTM | 7.2 K
1 | projectors | ModuleDict | 5.0 K
------------------------------------------
12.2 K Trainable params
0 Non-trainable params
12.2 K Total params
0.049 Total estimated model params size (MB)
`Trainer.fit` stopped: `max_epochs=5` reached.
[Parallel(n_jobs=1)]: Done 2 out of 2 | elapsed: 28.1s remaining: 0.0s
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
| Name | Type | Params
------------------------------------------
0 | RNN | LSTM | 7.2 K
1 | projectors | ModuleDict | 5.0 K
------------------------------------------
12.2 K Trainable params
0 Non-trainable params
12.2 K Total params
0.049 Total estimated model params size (MB)
`Trainer.fit` stopped: `max_epochs=5` reached.
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 42.1s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 42.1s finished
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.1s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 2 out of 2 | elapsed: 0.2s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 0.3s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 0.3s finished
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.0s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 2 out of 2 | elapsed: 0.1s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 0.1s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 0.1s finished
[49]:
score = metrics_dsm["SMAPE"].mean()
print(f"Average SMAPE for DeepStateModel: {score:.3f}")
Average SMAPE for DeepStateModel: 5.523
[50]:
plot_backtest(forecast_dsm, ts, history_len=20)
3.6 N-BEATS Model#
This architecture is based on backward and forward residual links and a deep stack of fully connected layers.
There are two variants of this model in the library: NBeatsGenericModel implements the generic deep learning version, while NBeatsInterpretableModel is augmented with inductive biases (trend and seasonality) to make it interpretable.
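In a nutshell (notation follows the N-BEATS paper): each block $\ell$ produces a backcast $\hat{x}_\ell$ and a partial forecast $\hat{y}_\ell$; the next block receives the residual of its input, and the partial forecasts are summed into the final forecast:

$$x_\ell = x_{\ell-1} - \hat{x}_{\ell-1}, \qquad \hat{y} = \sum_\ell \hat{y}_\ell.$$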
[51]:
from etna.models.nn import NBeatsGenericModel
from etna.models.nn import NBeatsInterpretableModel
[52]:
model_nbeats_generic = NBeatsGenericModel(
input_size=2 * HORIZON,
output_size=HORIZON,
loss="smape",
stacks=30,
layers=4,
layer_size=256,
trainer_params=dict(max_epochs=1000),
lr=1e-3,
)
pipeline_nbeats_generic = Pipeline(
model=model_nbeats_generic,
horizon=HORIZON,
transforms=[],
)
[53]:
metrics_nbeats_generic, forecast_nbeats_generic, _ = pipeline_nbeats_generic.backtest(
ts, metrics=metrics, n_folds=3, n_jobs=1
)
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
| Name | Type | Params
--------------------------------------
0 | model | NBeats | 206 K
1 | loss | NBeatsSMAPE | 0
--------------------------------------
206 K Trainable params
0 Non-trainable params
206 K Total params
0.826 Total estimated model params size (MB)
`Trainer.fit` stopped: `max_epochs=1000` reached.
[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 55.9s remaining: 0.0s
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
| Name | Type | Params
--------------------------------------
0 | model | NBeats | 206 K
1 | loss | NBeatsSMAPE | 0
--------------------------------------
206 K Trainable params
0 Non-trainable params
206 K Total params
0.826 Total estimated model params size (MB)
`Trainer.fit` stopped: `max_epochs=1000` reached.
[Parallel(n_jobs=1)]: Done 2 out of 2 | elapsed: 1.8min remaining: 0.0s
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
| Name | Type | Params
--------------------------------------
0 | model | NBeats | 206 K
1 | loss | NBeatsSMAPE | 0
--------------------------------------
206 K Trainable params
0 Non-trainable params
206 K Total params
0.826 Total estimated model params size (MB)
`Trainer.fit` stopped: `max_epochs=1000` reached.
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 2.7min remaining: 0.0s
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 2.7min finished
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.0s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 2 out of 2 | elapsed: 0.1s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 0.1s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 0.1s finished
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.0s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 2 out of 2 | elapsed: 0.1s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 0.1s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 0.1s finished
[54]:
score = metrics_nbeats_generic["SMAPE"].mean()
print(f"Average SMAPE for N-BEATS Generic: {score:.3f}")
Average SMAPE for N-BEATS Generic: 5.026
[55]:
plot_backtest(forecast_nbeats_generic, ts, history_len=20)
[56]:
model_nbeats_interp = NBeatsInterpretableModel(
input_size=4 * HORIZON,
output_size=HORIZON,
loss="smape",
trend_layer_size=64,
seasonality_layer_size=256,
trainer_params=dict(max_epochs=2000),
lr=1e-3,
)
pipeline_nbeats_interp = Pipeline(
model=model_nbeats_interp,
horizon=HORIZON,
transforms=[],
)
[57]:
metrics_nbeats_interp, forecast_nbeats_interp, _ = pipeline_nbeats_interp.backtest(
ts, metrics=metrics, n_folds=3, n_jobs=1
)
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
| Name | Type | Params
--------------------------------------
0 | model | NBeats | 224 K
1 | loss | NBeatsSMAPE | 0
--------------------------------------
223 K Trainable params
385 Non-trainable params
224 K Total params
0.896 Total estimated model params size (MB)
`Trainer.fit` stopped: `max_epochs=2000` reached.
[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 1.0min remaining: 0.0s
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
| Name | Type | Params
--------------------------------------
0 | model | NBeats | 224 K
1 | loss | NBeatsSMAPE | 0
--------------------------------------
223 K Trainable params
385 Non-trainable params
224 K Total params
0.896 Total estimated model params size (MB)
`Trainer.fit` stopped: `max_epochs=2000` reached.
[Parallel(n_jobs=1)]: Done 2 out of 2 | elapsed: 2.0min remaining: 0.0s
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
| Name | Type | Params
--------------------------------------
0 | model | NBeats | 224 K
1 | loss | NBeatsSMAPE | 0
--------------------------------------
223 K Trainable params
385 Non-trainable params
224 K Total params
0.896 Total estimated model params size (MB)
`Trainer.fit` stopped: `max_epochs=2000` reached.
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 3.1min remaining: 0.0s
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 3.1min finished
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.0s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 2 out of 2 | elapsed: 0.1s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 0.1s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 0.1s finished
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.0s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 2 out of 2 | elapsed: 0.1s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 0.1s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 0.1s finished
[58]:
score = metrics_nbeats_interp["SMAPE"].mean()
print(f"Average SMAPE for N-BEATS Interpretable: {score:.3f}")
Average SMAPE for N-BEATS Interpretable: 5.218
[59]:
plot_backtest(forecast_nbeats_interp, ts, history_len=20)
3.7 PatchTS Model#
PatchTS is a model with a transformer encoder that uses patches of the time series as input tokens, and a linear decoder.
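Following the PatchTST paper, a context window of length $L$ split into patches of length $P$ with stride $S$ yields $\lfloor (L - P) / S \rfloor + 1$ input tokens. In the configuration below patch_len=1, so every time step becomes its own token.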
[60]:
from etna.models.nn import PatchTSModel
model_patchts = PatchTSModel(
decoder_length=HORIZON,
encoder_length=2 * HORIZON,
patch_len=1,
trainer_params=dict(max_epochs=100),
lr=1e-3,
)
pipeline_patchts = Pipeline(
model=model_patchts, horizon=HORIZON, transforms=[StandardScalerTransform(in_column="target")]
)
metrics_patchts, forecast_patchts, fold_info_patchs = pipeline_patchts.backtest(
ts, metrics=metrics, n_folds=3, n_jobs=1
)
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
| Name | Type | Params
------------------------------------------
0 | loss | MSELoss | 0
1 | model | Sequential | 397 K
2 | projection | Sequential | 1.8 K
------------------------------------------
399 K Trainable params
0 Non-trainable params
399 K Total params
1.598 Total estimated model params size (MB)
`Trainer.fit` stopped: `max_epochs=100` reached.
[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 12.5min remaining: 0.0s
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
| Name | Type | Params
------------------------------------------
0 | loss | MSELoss | 0
1 | model | Sequential | 397 K
2 | projection | Sequential | 1.8 K
------------------------------------------
399 K Trainable params
0 Non-trainable params
399 K Total params
1.598 Total estimated model params size (MB)
`Trainer.fit` stopped: `max_epochs=100` reached.
[Parallel(n_jobs=1)]: Done 2 out of 2 | elapsed: 25.3min remaining: 0.0s
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
| Name | Type | Params
------------------------------------------
0 | loss | MSELoss | 0
1 | model | Sequential | 397 K
2 | projection | Sequential | 1.8 K
------------------------------------------
399 K Trainable params
0 Non-trainable params
399 K Total params
1.598 Total estimated model params size (MB)
`Trainer.fit` stopped: `max_epochs=100` reached.
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 38.4min remaining: 0.0s
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 38.4min finished
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.1s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 2 out of 2 | elapsed: 0.1s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 0.2s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 0.2s finished
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.0s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 2 out of 2 | elapsed: 0.1s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 0.1s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 0.1s finished
[61]:
score = metrics_patchts["SMAPE"].mean()
print(f"Average SMAPE for PatchTS: {score:.3f}")
Average SMAPE for PatchTS: 6.376
[62]:
plot_backtest(forecast_patchts, ts, history_len=20)
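Finally, a small sketch that collects the average SMAPE of every pipeline tested above into a single table. It assumes the metrics_* DataFrames from the previous cells are still in memory; note that metrics_deepar and metrics_tft refer to the dataset-builder runs, which overwrote the default ones.

summary = pd.DataFrame(
    {
        "model": [
            "Seasonal MA",
            "DeepAR (builder)",
            "TFT (builder)",
            "RNN",
            "Deep State",
            "N-BEATS Generic",
            "N-BEATS Interpretable",
            "PatchTS",
        ],
        "SMAPE": [
            metrics_sma["SMAPE"].mean(),
            metrics_deepar["SMAPE"].mean(),
            metrics_tft["SMAPE"].mean(),
            metrics_rnn["SMAPE"].mean(),
            metrics_dsm["SMAPE"].mean(),
            metrics_nbeats_generic["SMAPE"].mean(),
            metrics_nbeats_interp["SMAPE"].mean(),
            metrics_patchts["SMAPE"].mean(),
        ],
    }
).sort_values("SMAPE")  # best model first
print(summary)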