Empirical forecasting tools are important in the energy industry and in understanding energy markets. Oftentimes, the main point is not necessarily to land on a prediction that one really believes is going to materialize. Such predictions come with large error margins in any case. Instead, both market participants and researchers are often interested in testing scenarios. They use these scenarios to inform investment decisions, as well as to stress-test their portfolio of both physical and financial assets to market changes.
This is a particularly volatile period in energy markets, which makes scenario analysis an especially valuable tool.
In this lab and the next, we will use some simple time series forecasting tools to make forecasts and scenarios. In this lab we will investigate various univariate forecasting tools: Basically, modeling the behavior of a time series we are interested in, and then propagating it forward. This will include modeling both the dynamics of the mean value of a time series, as well as modeling the volatility.
In the next lab (lab 6) we will extend this modeling to include the influence of other variables. In particular, we will focus on creating scenarios for the electricity price under different quota prices on the European carbon trading market (ETS).
library(tidyverse)
library(fpp3)
library(lubridate)
library(dynlm)
First, let's load in some data on power prices as well as prices on carbon permits in the ETS trading scheme.
I obtained carbon prices from the UK think tank EMBER. The emission allowances are traded on the EEX exchange, and in principle you should be able to get prices from them as well, but their data is not particularly user-friendly.
ets = read_csv("http://jmaurit.github.io/analytics/labs/data/eua-price.csv")
## Rows: 612 Columns: 2
## ── Column specification ────────────────────────────────────────────────────────
## Delimiter: ","
## dbl (1): Price
## dttm (1): Date
##
## ℹ Use `spec()` to retrieve the full column specification for this data.
## ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
colnames(ets) = c("date", "price")
Then we again look at our power data from Nord Pool:
elspot = read_csv("http://jmaurit.github.io/norwayeconomy/data_series/elspot.csv")
## Rows: 228 Columns: 22
## ── Column specification ────────────────────────────────────────────────────────
## Delimiter: ","
## dbl (20): Bergen, DK1, DK2, EE, ELE, FI, KT, Kristiansand, LT, LV, Molde, O...
## lgl (1): FRE
## date (1): date
##
## ℹ Use `spec()` to retrieve the full column specification for this data.
## ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
Right away we see that we have a bit of a problem. The power price data has a monthly frequency, while the carbon price data has a weekly frequency.
Let's aggregate the carbon price data to the monthly level by taking the average of all the values in a month:
ets["month"] = month(ets$date)
ets["year"] = year(ets$date)
ets_mon = ets %>% group_by(month, year) %>% summarise(
price = mean(price))
## `summarise()` has grouped output by 'month'. You can override using the
## `.groups` argument.
ets_mon["day"] = 1
ets_mon = ets_mon %>% arrange(year, month)
ets_mon = ets_mon %>% mutate(date = make_date(year, month, day))
Now we are ready to join:
power_df = elspot %>% inner_join(ets_mon[c("price", "date")], by="date")
We can save this data frame to disk to use later.
write_csv(power_df, "data/power_df.csv")
Now let's look at just Denmark for the moment:
DK_df = power_df %>% select(DK1, DK2, date, price)
colnames(DK_df) = c("DK1_price", "DK2_price", "date", "ets_price")
To get power prices on a similar scale as the ETS prices, we'll divide by 1000 (so that prices are per kWh rather than per MWh). Both power prices and carbon prices are in euros.
DK_df["DK1_price"] = DK_df$DK1_price/1000
DK_df["DK2_price"] = DK_df$DK2_price/1000
Now a plot:
DK_df %>% pivot_longer(-date, names_to="series", values_to = "prices") %>% ggplot(aes(x=date, y=prices, color=series)) +
geom_line()
Visually, we can already make out what looks like a correlation.
Now that we have our data, our goal is to build a forecasting model of power prices. The end goal is a forecasting model that takes into account various future ETS prices, but we will wait until lab 6 to do that.
In this lab, we will make use of some of the time series tools we learned in the previous lab, and then take a step further and turn those models into a forecasting tool. The forecasting tools we use here are in large part based on Forecasting: Principles and Practice. It can be particularly useful to have read through ch. 9 (on ARIMA models).
The AR model that we used in the previous lab is part of a larger class of models called ARIMA models: AR: Autoregressive, MA: Moving Average, and I: order of integration.
We have already seen an AR model, where a variable is modeled as a function of its own lagged values. A Moving Average model is also dynamic, but here we model the dynamics in the model’s error term (or the shocks of the model), not in the actual y values.
We can write a MA model of order 1 (MA(1)) as:
\(y_t = c + \epsilon_t + \theta_1 \epsilon_{t-1}\)
An MA model of order 2 (MA(2)) as:
\(y_t = c + \epsilon_t + \theta_1 \epsilon_{t-1} + \theta_2 \epsilon_{t-2}\)
The economic intuition for such a model is that an exogenous shock in one period can carry over to later periods. For example, the outbreak of the coronavirus in the first half of 2020 continued to reverberate in the oil price in the second half of the year, the first half of 2021, and so on, even if the actual event was (optimistically) limited to the first half of 2020.
Notice that a key difference between a shock whose dynamics are modeled as an MA process versus an AR process is that in the MA process the shock is finite. That is, a shock in an MA(2) process as above will last two periods and then completely exit the system. On the other hand, a shock with AR(1) dynamics (say \(y_t = .5 y_{t-1} + \epsilon_t\)) will in theory stay in the system indefinitely: a shock in period 0 will be 50 percent as big in period 1, then 50 percent as big again in period 2, and so on. In practice, of course, such a process, which undergoes exponential decay, will quickly die out.
An interesting and important fact is that any stationary AR model can also be written as an (infinite-order) MA model, and any invertible MA model can be written as an (infinite-order) AR model. As a practical matter, a time series with complex dynamics can often best be modeled with a combination of AR and MA terms.
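To make this concrete, the psi-weights from the base-R function ARMAtoMA() trace out how a one-unit shock propagates. For an AR(1) with coefficient 0.5 the weights decay geometrically (its infinite MA representation), while for an MA(2) they are exactly zero after two periods. The coefficient values below are purely illustrative:
ARMAtoMA(ar = 0.5, lag.max = 6)         #0.5, 0.25, 0.125, ... geometric decay
ARMAtoMA(ma = c(0.4, 0.3), lag.max = 6) #0.4, 0.3, then zeros: the shock exits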
Let's first put our power price series for DK1 in time series format. We will use the tsibble format.
DK_df$date = yearmonth(DK_df$date)
DK_ts = as_tsibble(DK_df, index=date)
Taking a look at our data, we might question whether it is stationary.
autoplot(DK_ts, DK1_price)
It does not look particularly stable over time. We can check formally:
library(tseries)
## Registered S3 method overwritten by 'quantmod':
## method from
## as.zoo.data.frame zoo
adf.test(DK_ts$DK1_price)
##
## Augmented Dickey-Fuller Test
##
## data: DK_ts$DK1_price
## Dickey-Fuller = -3.5051, Lag order = 5, p-value = 0.04469
## alternative hypothesis: stationary
As it turns out, our test only just rejects the null of a unit root at the 5 percent level. I might still take the difference of the data to be safe.
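As a complementary check, the feasts package (loaded with fpp3) has unit-root features we can compute directly on the tsibble; a quick sketch:
#KPSS test (null: stationarity) and the suggested number of first differences
DK_ts %>% features(DK1_price, unitroot_kpss)
DK_ts %>% features(DK1_price, unitroot_ndiffs)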
DK_ts = DK_ts%>% mutate(
d_DK1_price = difference(DK1_price)
)
DK_ts = DK_ts %>% dplyr::filter(!is.na(d_DK1_price))
Now let us use the ACF and pACF functions to try to get a picture of the dynamics, if any, left in the series:
DK_ts %>% fill_gaps() %>% gg_tsdisplay(d_DK1_price, plot_type='partial')
## Warning: Removed 1 rows containing missing values (geom_point).
It looks like there might be some (negative) autocorrelation at the first lag in the ACF. Looking at the pACF, I might choose an MA(1) model or potentially an AR(1) model.
Now let's build a model. We can do this easily with the ARIMA command from the fable package (which comes with fpp3). We want to model this series as composed of one differenced term (I(1)) and one AR term (AR(1)). Alternatively, we may want to try one differenced term (I(1)) and one MA term (MA(1)). We also run one model where we let the algorithm automatically choose the ARIMA specification:
fit1 = DK_ts %>% fill_gaps() %>%
model(
arima110 = ARIMA(DK1_price ~ pdq(1,1,0)), #p:AR, d:I, q: MA
arima011 = ARIMA(DK1_price ~ pdq(0,1,1)),
automatic = ARIMA(DK1_price)
)
Notice the order parameters. For example, the first model, pdq(1,1,0), corresponds to AR(1), I(1), MA(0), while the second, pdq(0,1,1), corresponds to AR(0), I(1), MA(1).
We see that our object fit1 consists of three models:
fit1
## # A mable: 1 x 3
## arima110 arima011 automatic
## <model> <model> <model>
## 1 <ARIMA(1,1,0)> <ARIMA(0,1,1)> <ARIMA(0,1,1)>
The advantage of the fable package is that we can access the model components much as we access data in a tibble. Below we arrange our models by model fit as calculated by AIC (read more here).
glance(fit1) %>% arrange(AICc)
## # A tibble: 3 × 8
## .model sigma2 log_lik AIC AICc BIC ar_roots ma_roots
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <list> <list>
## 1 arima011 0.271 -96.9 198. 198. 203. <cpl [0]> <cpl [1]>
## 2 automatic 0.271 -96.9 198. 198. 203. <cpl [0]> <cpl [1]>
## 3 arima110 0.272 -97.0 198. 198. 204. <cpl [1]> <cpl [0]>
Our ARIMA(0,1,1) model seems to perform the best (and it was also chosen by the automatic routine.)
To get the parameters of the model, we use report():
fit1 %>% select(arima011) %>% report()
## Series: DK1_price
## Model: ARIMA(0,1,1)
##
## Coefficients:
## ma1
## -0.3267
## s.e. 0.0779
##
## sigma^2 estimated as 0.2714: log likelihood=-96.88
## AIC=197.76 AICc=197.85 BIC=203.44
We can try to give the model an economic interpretation: a positive shock in one period tends to be followed by a movement in the opposite direction the following period (the MA coefficient is negative). This is typical of differenced series.
Let's go with the model with an MA term, ARIMA(0,1,1). Just to double-check that we don't have any remaining serial correlation in the residuals, we use the gg_tsresiduals() function (from the feasts package, loaded with fpp3), which combines some standard residual checks:
fit1 %>% select(arima011) %>%
gg_tsresiduals()
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 1 rows containing non-finite values (stat_bin).
It looks ok!
Now, since there was some doubt about whether our series really was stationary, we could also try modelling it without the I(1) term. Differencing doesn't come for free: you tend to throw away a lot of information, and this will also affect the forecast you end up making, leading to more uncertainty.
So first, looking at the acf and pacf of the original time series:
DK_ts %>% fill_gaps() %>% gg_tsdisplay(DK1_price, plot_type='partial')
## Warning: Removed 1 rows containing missing values (geom_point).
The exponentially decaying pattern in the ACF is typical of a fairly simple AR process. The pACF graph suggests we might try two AR terms.
fit2 = DK_ts %>% fill_gaps() %>%
model(
arima200 = ARIMA(DK1_price ~ pdq(2,0,0)) #p:AR, d:I, q: MA
)
fit2 %>% report()
## Series: DK1_price
## Model: ARIMA(2,0,0) w/ mean
##
## Coefficients:
## ar1 ar2 constant
## 0.6360 0.2849 0.3202
## s.e. 0.0866 0.0893 0.0422
##
## sigma^2 estimated as 0.2685: log likelihood=-96.76
## AIC=201.52 AICc=201.84 BIC=212.93
We might be tempted to compare these goodness-of-fit measures with those of our models of the differenced data. But generally we should only compare goodness-of-fit measures across models estimated on the same, un-transformed series. The intuition is that when we take a difference we re-scale our data; it is like comparing Celsius and Fahrenheit, and the goodness-of-fit comparison becomes meaningless.
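One way around this is to compare out-of-sample forecast accuracy instead, since forecast errors are measured on the original scale regardless of how the model was estimated. A minimal sketch (the 12-observation hold-out is arbitrary and just for illustration):
#hold out the last 12 months, refit both candidate models, compare accuracy
DK_full = DK_ts %>% fill_gaps()
train = DK_full %>% slice(1:(nrow(DK_full) - 12))
train %>%
  model(
    arima011 = ARIMA(DK1_price ~ pdq(0,1,1)),
    arima200 = ARIMA(DK1_price ~ pdq(2,0,0))
  ) %>%
  forecast(h = 12) %>%
  accuracy(DK_full) #RMSE, MAE, etc. on the held-out months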
We can again check our residuals:
fit2 %>% gg_tsresiduals()
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 1 rows containing non-finite values (stat_bin).
They look pretty good: no significant autocorrelations in the ACF plot.
Now let's take these two models and turn them into forecasts that we can compare.
First we create the forecast using the function appropriately called forecast, which comes in the fpp3 package. You can read about what is going on inside the function here.
But intuitively, all that is going on is that the algorithm is taking the fitted model, and then propagating it forward from the last few data points. If there are dynamics that we don’t have in our ARIMA model, then we won’t have them in our forecast either.
fit1 %>% select(arima011) %>%
forecast(h=10) %>%
autoplot(DK_ts)
fit2 %>%
forecast(h=10) %>%
autoplot(DK_ts)
Now we have two different forecasts based on slightly different ARIMA models. The variance in each forecast is probably just as important as the point forecast. It tells us, based on our modelling choices, what range of outcomes we should expect in the next few months: which bounds for the price are most likely.
One note of caution: the calculated uncertainty in these forecasts is almost always too small. The reason is that it is calculated from the known past of the series. The future, of course, holds many unknowns.
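One partial remedy, still based only on the observed history, is to compute prediction intervals by bootstrapping the residuals rather than assuming they are normally distributed. fable supports this directly in forecast(); a sketch:
#simulate future sample paths by resampling the model's own residuals
fit1 %>% select(arima011) %>%
  forecast(h = 10, bootstrap = TRUE) %>%
  autoplot(DK_ts)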
The dark blue bands represent the 80% prediction interval and the light blue bands the 95% prediction interval (the default levels in fable).
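If you want the interval limits as numbers rather than reading them off the plot, hilo() converts the forecast distributions into interval columns:
fit1 %>% select(arima011) %>%
  forecast(h = 10) %>%
  hilo(level = c(80, 95))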
But interestingly, the two forecasts move in slightly different directions. The ARIMA(0,1,1) simply propagates the point forecast forward after an initial jump down (due to the negative MA term), while the ARIMA(2,0,0) gives a downward trend over the forecast horizon. Why do the two forecasts give these different trends?
The key is to remember that an I(1) process – for example a random walk – has shocks that tend to be highly persistent or permanent. That means that the best forecast for the following period is often the previous period's value. Thus the jump we see toward the end of the time series is best assumed to be permanent.
On the other hand, a stationary time series with an AR process will tend to see reversion to the mean, with shocks dying out exponentially. Thus if we assume such a process, then our best forecast would be that the jump towards the end of the series will eventually die out – exactly as the forecast indicates.
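We can see the mean reversion in the numbers as well. In fable's parameterisation the reported value is the constant \(c = \mu(1 - \phi_1 - \phi_2)\), so the unconditional mean that the ARIMA(2,0,0) forecasts revert towards is:
#long-run mean implied by the reported ARIMA(2,0,0) coefficients
0.3202 / (1 - 0.6360 - 0.2849) #roughly 4.05, in the units of DK1_price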
Which forecast is best?
In this case, there probably is no perfect technical way of making a decision. It is hard to compare goodness-of-fit measures (AIC, BIC) between integrated and non-integrated models because the likelihoods used to compute them are on different scales. Moreover, our tests of stationarity were not conclusive.
We could do a couple of things to clear this up.
We could attempt to add more data. This could help us establish more firmly whether the series is stationary or not.
We could use some of our "domain" knowledge. That is, what we generally know about power markets.
Traditionally, power markets have tended to display I(1) behavior: shocks did not necessarily die out exponentially. This was in large part because pricing in these markets followed the pricing of commodity fuels – coal and oil – which tend to have price series that display I(1) behavior.
But power markets based on renewables tend to have more stationary tendencies. Shocks – weather events that affect production – tend to be temporary and have low persistence.
With the knowledge that the Nordic market as a whole is made up of a large share of renewables (hydro and wind), we could then decide to make use of the ARIMA(2,0,0) model.
A lot of time series, especially at high frequency, have seasonality. That is to say, they have recurring patterns at an hourly, daily, monthly or quarterly level.
Let's import some daily consumption data (or load data, as it is often called in electricity markets), which tends to show strong seasonality.
cons = read_csv2("http://jmaurit.github.io/analytics/labs/data/consumption-per-country_2019_daily.csv")
## ℹ Using "','" as decimal and "'.'" as grouping mark. Use `read_delim()` for more control.
## Rows: 365 Columns: 10
## ── Column specification ────────────────────────────────────────────────────────
## Delimiter: ";"
## chr (1): date
## dbl (9): NO, SE, FI, DK, Nordic, EE, LV, LT, Baltic
##
## ℹ Use `spec()` to retrieve the full column specification for this data.
## ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
cons["date"] = as.Date(cons$date, format="%d/%m/%Y")
Let's convert to tsibble format:
cons_ts = tsibble(cons, index=date)
Let's look at Norwegian consumption:
cons_ts %>% autoplot(NO)
We can see the weekly variation here. If we extend this data over several years, the yearly seasonal pattern would also become clear. Though we will not deal with the latter right now.
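To inspect the weekly pattern more directly, we can use gg_season() from the feasts package (loaded with fpp3) to overlay the individual weeks; a quick sketch:
#each line is one week of Norwegian consumption
cons_ts %>% gg_season(NO, period = "week")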
We want to deal with the weekly seasonality in our modeling and we have a few choices. The first is to try to remove the seasonal component, and then just forecast the seasonally adjusted data.
For this we could use several tools, though fpp3 recommends STL decomposition, which stands for Seasonal and Trend decomposition using Loess (loess is a local non-linear regression; we will see more of it in the last module, labs 7 and 8).
cons_comp = cons_ts %>% model(
STL(NO ~ trend(window=7) + season(window="periodic"))
) %>% components
cons_comp %>% autoplot()
Here we see the series split into 3 components: a smooth trend component, a regular seasonal component and an error or remainder component.
We extract a seasonally adjusted series, where the seasonal component is removed and we are left with a series composed of the trend and non-seasonal errors.
cons_ts["NO_sa"] = cons_comp$season_adjust
#cons_ts = tsibble(cons_ts, index=date)
cons_ts %>% autoplot(NO_sa)
We could then go on and create a forecast model from this data. But if we are interested in including the seasonality in our forecast as well as the trend, we might want to model the seasonality directly in our ARIMA model. You can read more about modeling seasonal ARIMA models here.
First we recognize that the series looks quite non-stationary. Since we have very pronounced seasonality, instead of taking a normal first difference we can take a seasonal difference. We can take a look at the differenced series along with its ACF and pACF:
cons_ts %>%
gg_tsdisplay(difference(NO, 7), plot_type='partial')
## Warning: Removed 7 row(s) containing missing values (geom_path).
## Warning: Removed 7 rows containing missing values (geom_point).
So we see a clear decay pattern in our seasonally differenced data, indicative of an AR type model. In our pACF we also clearly see strong dynamics at the first lag, as well as some seasonality still in the data.
After some experimentation (and comparison of AIC/BIC values), I try a seasonal ARIMA model with non-seasonal order (1,0,1) and seasonal order (0,1,1). I also let the algorithm choose a specification automatically:
sfit1 <- cons_ts %>%
model(
arima101011 = ARIMA(NO ~ 0 + pdq(1,0,1) + PDQ(0,1,1)),
auto = ARIMA(NO)
)
## Warning in sqrt(diag(best$var.coef)): NaNs produced
sfit1 %>% select(arima101011) %>% report()
## Series: NO
## Model: ARIMA(1,0,1)(0,1,1)[7]
##
## Coefficients:
## ar1 ma1 sma1
## 0.9889 0.1144 -0.9999
## s.e. 0.0093 0.0587 0.0777
##
## sigma^2 estimated as 114438260: log likelihood=-3841.36
## AIC=7690.71 AICc=7690.83 BIC=7706.24
sfit1 %>% select(auto) %>% report()
## Series: NO
## Model: ARIMA(2,0,1)(2,1,1)[7]
##
## Coefficients:
## ar1 ar2 ma1 sar1 sar2 sma1
## 0.9845 -0.0068 0.1076 -0.1096 -0.1560 -0.8898
## s.e. 0.0061 0.0066 NaN 0.0075 0.0007 0.0139
##
## sigma^2 estimated as 118086876: log likelihood=-3839.14
## AIC=7692.27 AICc=7692.59 BIC=7719.44
The automatic selection algorithm chose a slightly more complex model with an extra AR term and two extra seasonal AR terms.
Now we create a forecast from our ARIMA model. We will forecast 120 days ahead based on the automatically selected model:
sfit1 %>% select(auto) %>% forecast(h=120) %>% autoplot(cons_ts)
Does this forecast seem reasonable? How so? Is there anything that seems a bit wrong with this forecast? Can you explain what might be wrong or missing? How would you go about improving this forecast?
It is common for power market data at higher frequencies (hourly, daily) to have several patterns of seasonality. We will explore some more advanced tools for modeling multiple seasonal patterns in the next lab.
So far we have modeled the mean value of our time series. That is, we have modeled the behavior of the realized data point at each time. In many situations when analyzing power markets, we might instead be interested in the volatility of a time series, for example if we are using financial instruments to hedge movements in power markets.
In order to analyze the patterns of volatility of time series, we can use ARCH and the generalized “gARCH” models.
Unfortunately, ARCH and gARCH are not covered in the fpp3 book, so this section will be largely self-contained. However, treatments of ARCH and gARCH can be found in most texts on time series.
ARCH stands for Autoregressive Conditional Heteroskedasticity. This type of model takes account of a certain type of heteroskedasticity (non-constant variance) in the data. These models are most common in finance, but they are also useful for analyzing energy markets (as well as energy market finance). For example, we saw in the previous section that the consumption data seemed to show non-constant variance over time. This is something we could model with an ARCH or gARCH model.
If we have the following model:
\[y_t = \beta_0 + \beta_1 x_{t1} + \beta_2 x_{t2} + ... + \beta_k x_{tk} + u_t\]
Then we say that the error term follows an ARCH(1) process if:
\[E(u^2_t|u_{t-1}, u_{t-2},...) = \alpha_0 + \alpha_1 u^2_{t-1}\]
That is, the variance (or volatility) at time t is conditional on the variance at time t-1, and we can model this via OLS.
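To make the "via OLS" step concrete, here is a small self-contained sketch with simulated data (the parameter values are purely illustrative, and it does not use the lab data): we generate errors with ARCH(1) volatility and then regress the squared errors on their first lag.
#toy illustration: simulate errors with ARCH(1) volatility, then run the
#two-step OLS check (regress squared errors on their own first lag)
set.seed(123)
n = 500
u = numeric(n)
sig2 = numeric(n)
sig2[1] = 1
u[1] = rnorm(1)
for (t in 2:n) {
  sig2[t] = 0.5 + 0.5 * u[t - 1]^2   #alpha_0 = 0.5, alpha_1 = 0.5
  u[t] = rnorm(1, sd = sqrt(sig2[t]))
}
arch_df = tibble(u_sq = u^2) %>% mutate(u_sq_lag = dplyr::lag(u_sq))
summary(lm(u_sq ~ u_sq_lag, data = arch_df)) #a significant lag coefficient indicates ARCH effects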
As an example, let's first take the log of our consumption data and then take the seasonal (7-day) difference. We can then interpret the series as approximate week-over-week percentage changes, or "returns" (even though this is a bit of a misnomer, since we are looking at consumption data).
cons_ts = cons_ts %>% mutate(
returns = log(NO) %>% difference(7)
)
cons_ts %>% autoplot(returns)
## Warning: Removed 7 row(s) containing missing values (geom_path).
After some checking, we land on an AR model with a first lag and a seasonal (7th) lag.
cons_ts = cons_ts %>% filter(!is.na(returns))
arch_mod1 = cons_ts %>%
model(
arima100100 = ARIMA(returns ~ 0 + pdq(1,0,0) + PDQ(1,0,0))
)
arch_mod1 %>% report()
## Series: returns
## Model: ARIMA(1,0,0)(1,0,0)[7]
##
## Coefficients:
## ar1 sar1
## 0.8922 -0.4406
## s.e. 0.0243 0.0501
##
## sigma^2 estimated as 0.00118: log likelihood=698.5
## AIC=-1391 AICc=-1390.93 BIC=-1379.36
Then we square the residuals from this model and estimate the ARCH(1) model for the squared residuals:
#extract the residuals from the mean model
resids = arch_mod1 %>% residuals()
resids = resids %>% mutate(
res_sq = .resid^2
)
#model for the squared residuals
arch_reg = resids %>%
model(
arima100 = ARIMA(res_sq ~ 0 + pdq(1,0,0) + PDQ(0,0,0))
)
arch_reg %>% report()
## Series: res_sq
## Model: ARIMA(1,0,0)
##
## Coefficients:
## ar1
## 0.4141
## s.e. 0.0480
##
## sigma^2 estimated as 5.799e-06: log likelihood=1650.78
## AIC=-3297.56 AICc=-3297.53 BIC=-3289.8
Here we see that the lag of the squared-residual is significant. That is to say that the variance, as measured by our squared residuals, of this model is conditional on earlier variance.
A few important things we can say about this:
One of the conditions for correctly using ordinary regression with time series data is that the variance is constant (homoskedasticity). Here we show that this is not the case: the variance changes over time. To get correct inference, we either need to model this changing variance directly (as we do with an ARCH model), or we have to calculate standard errors that are robust to non-constant variance (heteroskedasticity); see the sketch after these points.
That the series displays ARCH effects is interesting in itself. It tells us that volatility in our consumption series can be persistent. This is also important from a forecasting perspective: without considering the ARCH component of a model, we may severely under-estimate the risks involved and the potential consumption swings in turbulent times.
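As mentioned in the first point, an alternative to modeling the variance is to keep the OLS point estimates and use heteroskedasticity-robust (White) standard errors. A minimal sketch, assuming the lmtest and sandwich packages are installed (they are not used elsewhere in this lab):
library(lmtest)
library(sandwich)
#a simple OLS autoregression of the returns series on its first lag
ols_mod = lm(returns ~ dplyr::lag(returns), data = cons_ts)
#robust (HC1) standard errors in place of the usual OLS ones
coeftest(ols_mod, vcov. = vcovHC(ols_mod, type = "HC1"))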
A more general model of conditional volatility is called generalized ARCH, or gARCH, where we allow the variance term to be modeled with both AR and MA terms. For example, consider a model where we model the mean value as an AR(1) process and then model the volatility of the error term as a gARCH(1,1), that is, the variance term has both an AR(1) and an MA(1) component. We could write the model as follows:
\(y_t = \delta y_{t-1} + u_t\)
\(u_t | (u_{t-1}, u_{t-2},...) \sim N(0, \sigma^2_t)\)
\(\sigma_t^2 = \alpha_0 + \alpha_1 u^2_{t-1} + \beta_1 \sigma^2_{t-1}\)
Here we model the conditional variance term as something that looks like an ARMA(1,1) process, and in general as an ARMA(p,q); this is what is known as a gARCH(p,q) model. That is, we say that the error terms of our initial mean model come from a distribution with variance \(\sigma^2_t\). We then model this variance dynamically with both lags of itself and the squared error terms from the mean model.
Intuitively, we can say that we are modeling the variance component as autoregressive in the sense that a period of high variance will tend to be followed by periods of high variance. And that shocks (the “MA”) to the system, (large residual values, \(u_t\)), can affect the pattern of volatility as well.
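As a tiny worked translation of the variance equation above into code (the function name and all input values are purely illustrative): given the previous period's shock and conditional variance, the gARCH(1,1) recursion produces this period's conditional variance, and iterating it forward is how volatility forecasts are built up.
#conditional variance recursion: sigma^2_t = omega + alpha1*u^2_{t-1} + beta1*sigma^2_{t-1}
garch11_var = function(u_prev, sig2_prev, omega, alpha1, beta1) {
  omega + alpha1 * u_prev^2 + beta1 * sig2_prev
}
garch11_var(u_prev = 0.02, sig2_prev = 4e-04, omega = 1e-05, alpha1 = 0.1, beta1 = 0.85)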
To fit an ARMA-gARCH model simultaneously in R (by maximum likelihood) we can use the fGarch package. (There are several other packages that provide more functionality, but I find them to be a bit less user-friendly.)
#install.packages("fGarch")
library(fGarch)
## NOTE: Packages 'fBasics', 'timeDate', and 'timeSeries' are no longer
## attached to the search() path when 'fGarch' is attached.
##
## If needed attach them yourself in your R script by e.g.,
## require("timeSeries")
After some experimenting, I fit our "returns" data to an ARMA(1,1) + gARCH(1,1) model.
cons_ts = cons_ts %>% dplyr::filter(!is.na(returns))
#cons_ts = as_tsibble(cons_ts, index=date)
garchMod1 = garchFit(~arma(1,1) + garch(1,1), data = cons_ts["returns"], trace = F)
summary(garchMod1)
##
## Title:
## GARCH Modelling
##
## Call:
## garchFit(formula = ~arma(1, 1) + garch(1, 1), data = cons_ts["returns"],
## trace = F)
##
## Mean and Variance Equation:
## data ~ arma(1, 1) + garch(1, 1)
## <environment: 0x126232ab8>
## [data = cons_ts["returns"]]
##
## Conditional Distribution:
## norm
##
## Coefficient(s):
## mu ar1 ma1 omega alpha1 beta1
## 5.6878e-04 8.0549e-01 1.2982e-01 1.2751e-05 1.0479e-01 8.9503e-01
##
## Std. Errors:
## based on Hessian
##
## Error Analysis:
## Estimate Std. Error t value Pr(>|t|)
## mu 5.688e-04 1.824e-03 0.312 0.75522
## ar1 8.055e-01 3.665e-02 21.980 < 2e-16 ***
## ma1 1.298e-01 6.179e-02 2.101 0.03564 *
## omega 1.275e-05 1.109e-05 1.150 0.25032
## alpha1 1.048e-01 2.709e-02 3.868 0.00011 ***
## beta1 8.950e-01 2.337e-02 38.306 < 2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Log Likelihood:
## 692.1901 normalized: 1.933492
##
## Description:
## Mon Mar 27 09:42:23 2023 by user:
##
##
## Standardised Residuals Tests:
## Statistic p-Value
## Jarque-Bera Test R Chi^2 2.173664 0.3372832
## Shapiro-Wilk Test R W 0.996615 0.6560559
## Ljung-Box Test R Q(10) 72.58583 1.400935e-11
## Ljung-Box Test R Q(15) 84.53334 1.025935e-11
## Ljung-Box Test R Q(20) 85.92283 3.792975e-10
## Ljung-Box Test R^2 Q(10) 17.8347 0.05781694
## Ljung-Box Test R^2 Q(15) 20.9812 0.1374311
## Ljung-Box Test R^2 Q(20) 26.43738 0.1518475
## LM Arch Test R TR^2 19.39954 0.0793319
##
## Information Criterion Statistics:
## AIC BIC SIC HQIC
## -3.833464 -3.768428 -3.834014 -3.807599
Now let's put some of these calculated values back into our data frame to make it easier to plot them together with other data series.
We will also extract the estimated conditional volatility (referred to as h.t in the fitted object), that is, a measure of the conditional variance \(\sigma^2_t\) term in our model.
cons_ts["volatility"] =garchMod1@h.t
We can then plot our conditional variance measure:
ggplot(cons_ts, aes(y = volatility, x = date)) + geom_line(col = '#ff9933') + ylab('Conditional Variance') + xlab('Date')
From the figure, we see that the volatility of consumption changes seems to be quite high in the spring (April-May), as well as towards the end of the year.
Can you explain these changes?
Audio: Walk-through of our garch model
We can also use our ARCH models to create a forecast. The forecast function in the fpp3 package does not cover ARCH and gARCH models, so we will instead use the predict method, which works with the fGarch package.
predict1 = predict(garchMod1, 120)
Note that this prediction comes from the ARMA (mean) part of the model.
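The object returned by predict() is a small data frame: its meanForecast column is the point forecast from the ARMA part, and its standardDeviation column is the forecast conditional standard deviation from the gARCH part (these are the columns used in the plot below). A quick look:
head(predict1)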
We then combine our prediction data with our consumption data frame.
predict1["date"] = seq(as.Date("2020-01-01"), as.Date("2020-01-01")+119, by="days")
cons_fcst = cons_ts %>% full_join(predict1, by="date")
cons_fcst["date"] = as.Date(cons_fcst$date)
Then we plot our time series along with our prediction and prediction interval. Notice the use of geom_ribbon to create the prediction interval bands:
cons_fcst %>%
ggplot() +
geom_line(aes(x=date, y=returns)) +
geom_line(aes(x=date, y=meanForecast, color="red")) +
geom_ribbon(aes(x=date, ymin=(meanForecast-2*standardDeviation), ymax=(meanForecast+2*standardDeviation) ), fill="blue", alpha=.2)
## Warning: Removed 120 row(s) containing missing values (geom_path).
## Warning: Removed 358 row(s) containing missing values (geom_path).
Does the forecast, and forecast bands seem reasonable? Why or why not?
gARCH-type modeling can quickly get complex and technical. As a beginner, keep it simple. If you start reading on the internet, you will hear about all sorts of fancy versions (EGARCH, IGARCH, etc.), and you will encounter all sorts of fancy and confusing notation.
But in a lot of applications, some simple modeling of conditional volatility goes a long way.
Remember KISS: Keep It Simple Stupid!
Open-ended question: From ENTSO-E you can download daily data on power prices (you will need to register for a free account). Choose prices for a certain country for one year. Model the dynamics of power prices, including checking for and modeling conditional variance. Create a forecast for 30 days. Explain both the strengths and weaknesses of the forecast.
Conceptual question: The models we have been investigating have been very simple. Presumably, we could construct a model that tries to include all the factors that determine, for example, consumption: temperature, the state of the economy, factors associated with large industrial consumers, and so on. If we used such a model, should we expect to get a better forecast? Why or why not? (You could refer to Ch. 7 of Forecasting.)
1.)
DKNO_ts = cons_ts %>% select(date, NO, DK)
DKNO_ts %>% pivot_longer(cols=c("NO", "DK"), names_to="Country", values_to="Consumption") %>% ggplot(aes(x=date, y=Consumption, color=Country)) +
geom_line()
Just visually we can see a few things. One is that Norway uses a lot more electricity than Denmark, between 4 and 5 times as much depending on the season, even though the two countries have roughly the same number of inhabitants. There are a few reasons for this. One is that Norway to a much greater degree uses electricity for heating, while Denmark relies much more heavily on district heating schemes. There is also more heavy industry in Norway that uses electricity as a major input, notably aluminium production.
We can also see a much more pronounced yearly seasonal pattern in the Norwegian data; again, the fact that Norway uses electricity for heating explains this to a large degree. It also appears that the Danish weekly seasonality is considerably more regular than the Norwegian seasonality.
DKfit1 = cons_ts %>%
model(
DK_mod = ARIMA(DK)
)
DKfit1 %>% report()
## Series: DK
## Model: ARIMA(3,0,2)(0,1,1)[7]
##
## Coefficients:
## ar1 ar2 ar3 ma1 ma2 sma1
## -0.4567 0.4349 0.7675 1.3936 0.7992 -0.9052
## s.e. 0.1506 0.1598 0.1315 0.1751 0.1200 0.0744
##
## sigma^2 estimated as 7854179: log likelihood=-3287.85
## AIC=6589.71 AICc=6590.03 BIC=6616.73
Here I choose the simplest approach of letting the automatic procedure choose the parameters. The algorithm chose a model with one seasonal difference and one seasonal MA term (weekly seasonality), in addition to 3 AR terms and 2 MA terms. We should probably model some yearly seasonality here as well, but we will wait until the next lab to look at this.
Then we create our 30-day forecast:
DKfit1 %>% forecast(h=30) %>% autoplot(cons_ts)
It doesn’t look too bad - this simple model seems to fit better for the more regular Danish data. As mentioned above, we probably would want to explicitly model the yearly seasonality as well, but then we would need more data.
2.)
I will access data that I have previously cleaned:
wtDF = read_csv("https://jmaurit.github.io/analytics/labs/data/wtDF.csv")
## Rows: 440893 Columns: 4
## ── Column specification ────────────────────────────────────────────────────────
## Delimiter: ","
## chr (2): variable, area
## dbl (1): value
## dttm (1): time
##
## ℹ Use `spec()` to retrieve the full column specification for this data.
## ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
And I will pull out prices in DK2:
pricesDK2 = wtDF %>% dplyr::filter(area=="DK2" & variable=="Prices")
pricesDK2 = pricesDK2 %>% select(time, value)
colnames(pricesDK2)[2] = "DK2_price"
pricesDK2 = pricesDK2 %>% dplyr::filter(!is.na(time))
pricesDK2_ts = as_tsibble(pricesDK2, index=time)
First, I'll use the automatic algorithm to find the optimal ARIMA specification:
DK2_arima= pricesDK2_ts %>% fill_gaps() %>%
model(
arimaMod = ARIMA(DK2_price)
)
## Warning in sqrt(diag(best$var.coef)): NaNs produced
DK2_arima %>% report()
## Series: DK2_price
## Model: ARIMA(4,0,0)(2,1,0)[24]
##
## Coefficients:
## ar1 ar2 ar3 ar4 sar1 sar2
## 1.0529 -0.1555 -0.0195 0.0374 -0.5647 -0.2985
## s.e. NaN 0.0122 0.0122 0.0093 0.0030 NaN
##
## sigma^2 estimated as 25.54: log likelihood=-74611.47
## AIC=149237 AICc=149237 BIC=149293.7
Here we see an AR(4) model with a seasonal difference and two seasonal AR terms (24-hour period). For simplicity I will not model the seasonal component in the gARCH step, but in a full analysis this should be dealt with (by decomposition, for example).
I used some trial and error to find a gARCH specification with a good fit:
garchModDK2 = garchFit(~arma(4,0) + garch(1,6), data = pricesDK2_ts["DK2_price"], trace = F)
summary(garchModDK2)
##
## Title:
## GARCH Modelling
##
## Call:
## garchFit(formula = ~arma(4, 0) + garch(1, 6), data = pricesDK2_ts["DK2_price"],
## trace = F)
##
## Mean and Variance Equation:
## data ~ arma(4, 0) + garch(1, 6)
## <environment: 0x1248dbb80>
## [data = pricesDK2_ts["DK2_price"]]
##
## Conditional Distribution:
## norm
##
## Coefficient(s):
## mu ar1 ar2 ar3 ar4 omega alpha1
## 2.433117 1.000000 0.073878 -0.229348 0.087693 2.135251 0.494900
## beta1 beta2 beta3 beta4 beta5 beta6
## 0.160995 0.018643 0.019761 0.022703 0.152013 0.153861
##
## Std. Errors:
## based on Hessian
##
## Error Analysis:
## Estimate Std. Error t value Pr(>|t|)
## mu 2.433117 0.083240 29.230 < 2e-16 ***
## ar1 1.000000 0.009969 100.315 < 2e-16 ***
## ar2 0.073878 0.014337 5.153 2.56e-07 ***
## ar3 -0.229348 0.011598 -19.775 < 2e-16 ***
## ar4 0.087693 0.006751 12.989 < 2e-16 ***
## omega 2.135251 0.093712 22.785 < 2e-16 ***
## alpha1 0.494900 0.016136 30.671 < 2e-16 ***
## beta1 0.160995 0.012982 12.401 < 2e-16 ***
## beta2 0.018643 0.007503 2.485 0.01297 *
## beta3 0.019761 0.006812 2.901 0.00372 **
## beta4 0.022703 0.010576 2.147 0.03182 *
## beta5 0.152013 0.022138 6.867 6.57e-12 ***
## beta6 0.153861 0.014916 10.315 < 2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Log Likelihood:
## -71065.38 normalized: -2.892011
##
## Description:
## Mon Mar 27 09:45:35 2023 by user:
##
##
## Standardised Residuals Tests:
## Statistic p-Value
## Jarque-Bera Test R Chi^2 286362.6 0
## Shapiro-Wilk Test R W NA NA
## Ljung-Box Test R Q(10) 3113.436 0
## Ljung-Box Test R Q(15) 5290.165 0
## Ljung-Box Test R Q(20) 6416.284 0
## Ljung-Box Test R^2 Q(10) 22.81381 0.0114551
## Ljung-Box Test R^2 Q(15) 28.15469 0.02062055
## Ljung-Box Test R^2 Q(20) 48.39295 0.0003745876
## LM Arch Test R TR^2 23.70189 0.02232481
##
## Information Criterion Statistics:
## AIC BIC SIC HQIC
## 5.785079 5.789369 5.785079 5.786469
pricesDK2_ts["volatility"] =garchModDK2@h.t
DK2_long = pricesDK2_ts %>% pivot_longer(cols=c("DK2_price", "volatility"), names_to="variable")
DK2_long %>% ggplot(aes(x=time, y=value)) + geom_line() + facet_wrap(~variable, scales="free", ncol=1)
3.) The type of time series modeling and forecasting we are doing here was established by the statisticians George Box and Gwilym Jenkins. Their insight at the time was that these simple models of a series' own dynamics could often outperform much more detailed models that tried to take into account many more factors.
We might already have some intuition about why this is. In order to forecast with a model that contains many explanatory variables, we also need to forecast those variables, which can introduce a lot of error. We could also invoke the concept of "over-fitting": we might be able to construct a model that fits our current data very closely, but that then performs poorly out of sample.