Daily Temperature Data Set - Time Series Prediction

You can get the dataset from here: https://github.com/jbrownlee/Datasets/blob/master/daily-min-temperatures.csv

In [1]:
import csv
import pickle
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from dataclasses import dataclass

Begin by looking at the structure of the CSV file that contains the data:

In [2]:
TEMPERATURES_CSV = './data/daily-min-temperatures.csv'

with open(TEMPERATURES_CSV, 'r') as csvfile:
    print(f"Header looks like this:\n\n{csvfile.readline()}")    
    print(f"First data point looks like this:\n\n{csvfile.readline()}")
    print(f"Second data point looks like this:\n\n{csvfile.readline()}")
Header looks like this:

"Date","Temp"

First data point looks like this:

"1981-01-01",20.7

Second data point looks like this:

"1981-01-02",17.9

As you can see, each data point is composed of the date and the recorded minimum temperature for that date.

Run the next cell to load a helper function to plot the time series.

In [3]:
def plot_series(time, series, format="-", start=0, end=None):
    plt.plot(time[start:end], series[start:end], format)
    plt.xlabel("Time")
    plt.ylabel("Value")
    plt.grid(True)

Parsing the raw data

A couple of things to note:

  • Omit the first line, since the file contains a header row.
  • You can accumulate the data points in regular Python lists and convert them to numpy arrays at the end, which is what the rest of the notebook expects.
  • The times list should contain every time step (starting at zero): an ordered sequence of integers with the same length as the temperatures list.
  • The temperature values should be of float type. You can use Python's built-in float function to ensure this.
In [9]:
def parse_data_from_file(filename):
    
    times = []
    temperatures = []

    with open(filename) as csvfile:

        reader = csv.reader(csvfile, delimiter=',')
        # Skip the header row
        next(reader)
        for time_step, row in enumerate(reader):
            times.append(time_step)
            temperatures.append(float(row[1]))

    # Convert the lists to numpy arrays
    times = np.array(times)
    temperatures = np.array(temperatures)

    return times, temperatures

The next cell will use your function to compute the times and temperatures and will save these as numpy arrays within the G dataclass. This cell will also plot the time series:

In [10]:
# Test your function and save all "global" variables within the G class (G stands for global)
@dataclass
class G:
    TEMPERATURES_CSV = './data/daily-min-temperatures.csv'
    times, temperatures = parse_data_from_file(TEMPERATURES_CSV)
    TIME = np.array(times)
    SERIES = np.array(temperatures)
    SPLIT_TIME = 2500
    WINDOW_SIZE = 64
    BATCH_SIZE = 256
    SHUFFLE_BUFFER_SIZE = 1000


plt.figure(figsize=(10, 6))
plot_series(G.TIME, G.SERIES)
plt.show()

Processing the data

With the data loaded, split the series into a training and a validation set, then transform the training split into a windowed dataset: each window of WINDOW_SIZE consecutive values becomes the features, and the value immediately after it becomes the label.

In [1]:
def train_val_split(time, series, time_step=G.SPLIT_TIME):

    time_train = time[:time_step]
    series_train = series[:time_step]
    time_valid = time[time_step:]
    series_valid = series[time_step:]

    return time_train, series_train, time_valid, series_valid


# Split the dataset
time_train, series_train, time_valid, series_valid = train_val_split(G.TIME, G.SERIES)
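
As an optional sanity check (a small sketch, not part of the original assignment), you can confirm that the split happens at G.SPLIT_TIME:

# Optional sanity check: the training slice should contain exactly SPLIT_TIME points
print(f"Training points: {len(series_train)}, validation points: {len(series_valid)}")
assert len(series_train) == G.SPLIT_TIME
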
In [12]:
def windowed_dataset(series, window_size=G.WINDOW_SIZE, batch_size=G.BATCH_SIZE, shuffle_buffer=G.SHUFFLE_BUFFER_SIZE):
    ds = tf.data.Dataset.from_tensor_slices(series)
    # Slice the series into overlapping windows of window_size + 1 elements
    ds = ds.window(window_size + 1, shift=1, drop_remainder=True)
    ds = ds.flat_map(lambda w: w.batch(window_size + 1))
    ds = ds.shuffle(shuffle_buffer)
    # The first window_size values are the features, the last value is the label
    ds = ds.map(lambda w: (w[:-1], w[-1]))
    ds = ds.batch(batch_size).prefetch(1)
    return ds


# Apply the transformation to the training set
train_set = windowed_dataset(series_train, window_size=G.WINDOW_SIZE, batch_size=G.BATCH_SIZE, shuffle_buffer=G.SHUFFLE_BUFFER_SIZE)
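
To see what the transformation produces (an optional sketch), you can peek at a single batch: features should have shape (batch_size, window_size) and labels shape (batch_size,):

# Optional: inspect the shapes of one batch from the windowed dataset
for batch_features, batch_labels in train_set.take(1):
    print(f"Features: {batch_features.shape}, labels: {batch_labels.shape}")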

Defining the model architecture

The model combines a 1-D convolutional front end with two stacked LSTMs and a small dense head:

In [52]:
def create_uncompiled_model():

    model = tf.keras.models.Sequential([
        tf.keras.layers.Conv1D(filters=64, kernel_size=3,
                               strides=1,
                               activation="relu",
                               padding='causal',
                               input_shape=[None, 1]),
        tf.keras.layers.LSTM(64, return_sequences=True),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(30, activation="relu"),
        tf.keras.layers.Dense(10, activation="relu"),
        tf.keras.layers.Dense(1),
        # Scale the output up so the earlier layers can work with small values
        tf.keras.layers.Lambda(lambda x: x * 400)
    ])

    return model
In [53]:
# Test our uncompiled model
uncompiled_model = create_uncompiled_model()

try:
    uncompiled_model.predict(train_set)
except Exception:
    print("Your current architecture is incompatible with the windowed dataset, try adjusting it.")
else:
    print("Your current architecture is compatible with the windowed dataset! :)")
Your current architecture is compatible with the windowed dataset! :)
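
Optionally, since the model has now been built by the predict call, you can also inspect its layers and parameter counts:

# Optional: print layer output shapes and parameter counts
uncompiled_model.summary()
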
In [54]:
def adjust_learning_rate(dataset):
    
    model = create_uncompiled_model()
    
    # Increase the learning rate exponentially: multiply it by 10 every 20 epochs
    lr_schedule = tf.keras.callbacks.LearningRateScheduler(lambda epoch: 1e-4 * 10**(epoch / 20))

    # Select your optimizer
    optimizer = tf.keras.optimizers.SGD(momentum=0.99)
    
    # Compile the model passing in the appropriate loss
    model.compile(loss=tf.keras.losses.Huber(),
                  optimizer=optimizer, 
                  metrics=["mae"]) 

    history = model.fit(dataset, epochs=100, callbacks=[lr_schedule])
    
    return history
In [55]:
# Run the training with dynamic LR
lr_history = adjust_learning_rate(train_set)
Epoch 1/100
10/10 [==============================] - 6s 337ms/step - loss: 36.2400 - mae: 36.7373 - lr: 1.0000e-04
Epoch 2/100
10/10 [==============================] - 3s 323ms/step - loss: 16.6695 - mae: 17.1638 - lr: 1.1220e-04
Epoch 3/100
10/10 [==============================] - 3s 332ms/step - loss: 41.7609 - mae: 42.2609 - lr: 1.2589e-04
...
Epoch 50/100
10/10 [==============================] - 3s 325ms/step - loss: 16142.4053 - mae: 16142.9062 - lr: 0.0282
...
Epoch 100/100
10/10 [==============================] - 4s 355ms/step - loss: 725623.1250 - mae: 725623.6250 - lr: 8.9125
(log truncated: the loss grows by several orders of magnitude as the learning rate increases)

Plot the loss against the learning rate recorded during the sweep:

In [56]:
plt.semilogx(lr_history.history["lr"], lr_history.history["loss"])
plt.axis([1e-4, 10, 0, 10])
Out[56]:
(0.0001, 10.0, 0.0, 10.0)
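
Note that every loss recorded during this sweep is above the plot's y-limit of 10, so you may need to widen the axis to see the curve. You can also read the best value straight from the history object (a minimal sketch, not part of the original notebook):

# Minimal sketch: find the learning rate that produced the lowest loss
best_epoch = np.argmin(lr_history.history["loss"])
best_lr = lr_history.history["lr"][best_epoch]
print(f"Lowest loss at learning rate {best_lr:.2e}")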

Compiling the model

In [57]:
def create_model():    
    model = create_uncompiled_model()

    model.compile(loss=tf.keras.losses.Huber(),
                  optimizer=tf.keras.optimizers.Adam(),
                  metrics=["mae"])  

    return model
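
Note that Adam is used with its default learning rate (1e-3 in TF 2.x). If you wanted to use a value informed by the sweep above, you could pass it explicitly inside create_model (a hedged variant, not what this notebook ran):

# Hypothetical variant: set the learning rate explicitly instead of relying on the default
model.compile(loss=tf.keras.losses.Huber(),
              optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              metrics=["mae"])
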
In [58]:
# Save an instance of the model
model = create_model()

# Train it
history = model.fit(train_set, epochs=50)
Epoch 1/50
10/10 [==============================] - 7s 358ms/step - loss: 62.7167 - mae: 63.2124
Epoch 2/50
10/10 [==============================] - 4s 345ms/step - loss: 19.2919 - mae: 19.7909
Epoch 3/50
10/10 [==============================] - 4s 346ms/step - loss: 13.2027 - mae: 13.7003
...
Epoch 25/50
10/10 [==============================] - 3s 332ms/step - loss: 1.6248 - mae: 2.0709
...
Epoch 50/50
10/10 [==============================] - 4s 344ms/step - loss: 1.6634 - mae: 2.1109
(log truncated: the loss falls below 2 within the first ten epochs and then plateaus between roughly 1.5 and 2)
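
Before evaluating on the validation split, it can help to visualize how training progressed (an optional sketch using the history object returned by fit):

# Optional: plot training loss and MAE across epochs
plt.figure(figsize=(10, 6))
plt.plot(history.history["loss"], label="loss")
plt.plot(history.history["mae"], label="mae")
plt.xlabel("Epoch")
plt.legend()
plt.show()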

Evaluating the forecast

Now it is time to evaluate the performance of the forecast. For this, you can use the compute_metrics function defined below:

In [59]:
def compute_metrics(true_series, forecast):
    
    mse = tf.keras.metrics.mean_squared_error(true_series, forecast).numpy()
    mae = tf.keras.metrics.mean_absolute_error(true_series, forecast).numpy()

    return mse, mae

At this point the model that will perform the forecast is ready, but you still need to compute the forecast itself.

Faster model forecasts

Rather than predicting one window at a time, this faster approach feeds batches of windows to the model in a single call to predict.

The code to implement this is provided in the model_forecast function below. Notice that it is very similar to the windowed_dataset function, with a few differences:

  • The dataset is windowed using window_size rather than window_size + 1
  • No shuffling is needed
  • There is no need to split the data into features and labels
  • The model is used to predict batches of the dataset
In [60]:
def model_forecast(model, series, window_size):
    ds = tf.data.Dataset.from_tensor_slices(series)
    # Window the full series; here each window is a complete set of features
    ds = ds.window(window_size, shift=1, drop_remainder=True)
    ds = ds.flat_map(lambda w: w.batch(window_size))
    ds = ds.batch(32).prefetch(1)
    # Predict on all batches at once
    forecast = model.predict(ds)
    return forecast

Now compute the actual forecast:

In [61]:
# Compute the forecast for all the series
rnn_forecast = model_forecast(model, G.SERIES, G.WINDOW_SIZE).squeeze()

# Slice the forecast to keep only the predictions for the validation set:
# the window starting at SPLIT_TIME - WINDOW_SIZE predicts the value at SPLIT_TIME
rnn_forecast = rnn_forecast[G.SPLIT_TIME - G.WINDOW_SIZE:-1]

# Plot the forecast
plt.figure(figsize=(10, 6))
plot_series(time_valid, series_valid)
plot_series(time_valid, rnn_forecast)
In [62]:
mse, mae = compute_metrics(series_valid, rnn_forecast)

print(f"mse: {mse:.2f}, mae: {mae:.2f} for forecast")
mse: 5.65, mae: 1.86 for forecast