International Journal of Forecasting 33 (2017) 801–816
The predictive power of Google searches in forecasting US unemployment

Francesco D'Amuri a,b, Juri Marcucci a,*
a Bank of Italy
b IZA

Keywords: Google econometrics; Forecast comparison; Keyword search; US unemployment; Time series models

Abstract
We assess the performance of an index of Google job-search intensity as a leading indicator
for predicting the monthly US unemployment rate. We carry out a deep out-of-sample
forecasting comparison of models that adopt the Google Index, the more standard initial
claims, or alternative indicators based on economic policy uncertainty and consumers’
and employers’ surveys. The Google-based models outperform most of the others, with
their relative performances improving with the forecast horizon. Only models that use
employers’ expectations on a longer sample do better at short horizons. Furthermore,
quarterly predictions constructed using Google-based models provide forecasts that are
more accurate than those from the Survey of Professional Forecasters, models based on
labor force flows, or standard nonlinear models. Google-based models seem to predict
particularly well at the turning point that takes place at the beginning of the Great
Recession, while their relative predictive abilities stabilize afterwards.
© 2017 International Institute of Forecasters. Published by Elsevier B.V. All rights reserved.
1. Introduction

The provision of accurate predictions of labor market dynamics has always been a core activity for both investors and policy makers. The task has become even more important since the beginning of the Great Recession and the related uncertainty regarding the impact of the slowdown in economic activity on the labor market. It then became crucial in December 2012, when the Federal Reserve announced a shift in the way in which monetary policy is communicated to the public. The Fed moved from indicating monetary actions based on time to explicit employment and inflation guideposts (the so-called Evans rule). For the employment guidepost, the Fed explicitly formulated an unemployment rate threshold of 6.5%, above which a federal funds rate hike is unlikely.

Against this background, we assess whether US monthly unemployment rate predictions can be improved using the Google index (GI), a leading indicator that is based on internet job-related searches performed through Google.1
* Correspondence to: Bank of Italy, Directorate General for Economics, Statistics and Research, Via Nazionale 91, 00184, Rome, Italy.
1 The US unemployment rate time series is certainly one of the most
commonly studied series in the literature. Proietti (2003) defines this
series as the ‘testbed’ or ‘case study’ for many (if not most) nonlinear time series models. In fact, many papers have documented its
asymmetric behavior. DeLong and Summers (1986), Neftci (1984) and
Rothman (1998) document a type of asymmetry called steepness, in
which unemployment rates rise faster than they decrease. Sichel (1993)
finds evidence of another type of asymmetry called deepness, in which
contractions are deeper than expansions. McQueen and Thorley (1993)
find sharpness, in which peaks tend to be sharp while troughs are usually
more rounded. In a recent paper, Barnichon and Nekarda (2012) develop
a model based on labor force flows for forecasting unemployment; their
results indicate that this approach can improve the forecast accuracy of
standard time series models considerably.
After having selected the best specifications at each forecast origin using the BIC, we test the predictive power of
this indicator by means of a deep out-of-sample comparison carried out along two different dimensions: (i) the
alternative exogenous variables adopted as leading indicators; and (ii) the length of the estimation sample. In particular, we estimate standard time series (AR) models and
augment them with different leading indicators such as
the initial claims, employers’ and consumers’ surveys on
employment dynamics, the economic policy uncertainty
index of Baker, Bloom, and Davis (2016), and the Google
index that is specific to this study. We also compare models estimated over samples of different lengths, since the
GI is only available since the first week of January 2004,
and an exercise comparing the forecasting performances
of models estimated on this sample would be of little practical relevance if models estimated on longer samples were
better at predicting the unemployment rate. We also compare the forecasts of Google-based models with those obtained using non-linear models and models based on labor
force flows data, as per Barnichon and Nekarda (2012).
We find that Google-based models, estimated using
data from 2004 onwards, outperform most of the competitors for predicting the US unemployment rate, irrespective of the length of the time series considered. Their
performance improves with the length of the forecast horizon, with the Diebold and Mariano (DM) (1995) test of
equal forecast accuracy always rejecting the null at horizons from 1 to 12 months ahead. The analysis of the cumulative sum of squared forecast error differences (CSSED),
as suggested by Welch and Goyal (2008), shows that
Google-based models perform particularly well during the
Great Recession, with their relative performance stabilizing thereafter relative to both the benchmark and the other
competing models. Of the various specifications tested,
only models using employers’ expectations estimated on
much longer time series improve on Google-based forecasts at one and two months ahead, though they are
outperformed at three to twelve months ahead. We investigate the reasons behind the success of Google-based
models for forecasting unemployment further by calculating the transition probabilities at one and twelve months
by labor market status and internet job search activity using the Computer and Internet Use supplement of the Current Population Survey. Such estimates suggest that the
predictive power of the Google-based indicator is due to
the fact that individuals start looking for employment long
before losing their job or being re-classified as unemployed
instead of inactive. This gives Google-based indicators an
advantage over other indicators that provide information
on job terminations (initial claims) or employment levels
(employers’ expectations), but do not cover other relevant
elements, such as employees anticipating job loss, unemployment duration, and transitions from inactivity to unemployment.
These results also hold after a number of robustness
checks, namely: (i) conducting the exercise with the
available real time data in the short sample, (ii) employing
two alternative, less popular and less relevant, job-search-related keywords, and (iii) conducting a placebo test with a false keyword which is unrelated to job searches but highly correlated with our target variable according to Google.
We also repeat the forecast horse race for each of the
50 US states plus the District of Columbia (DC) individually,
rather than at the federal level, and find that the correlation
between the unemployment rate and the Google index
is stronger in states in which the percentage of the
unemployed who use the internet for job searches is
higher. The results of the forecast comparison are less clear
cut at the state level than at the federal level, but still
point to the substantive forecasting power of Google-based
models, which outperform all of their competitors in 35%
of the states at one step ahead, and more as the forecast
horizon increases (up to a 53% success rate at 12 steps ahead).
Finally, we construct a group of quarterly forecasts of
the unemployment rate by combining the monthly forecasts of the best models from our horse race, and compare them with the quarterly predictions released by the
Survey of Professional Forecasters (SPF) conducted by the
Federal Reserve Bank of Philadelphia. For a given information set, models using the GI outperform the professionals’ forecasts, with RMSFEs that are more than 40% lower
for nowcasts of the current quarter unemployment rate,
and substantially lower for one to three quarters ahead as
well. The best Google model also outperforms state-of-the-art models based on labor force flows data, such as that of
Barnichon and Nekarda (2012), for three out of four forecast horizons. We also show that Google-based models are
quite stable in terms of predictive accuracy.
The innovative data source employed in this article has
already been used in epidemiology and in different fields of
economics (Edelman, 2012). To the best of our knowledge,
the first article using Google data (Ginsberg et al., 2009)
involved estimating the weekly ‘influenza’ activity in the US
using an index of health-seeking behavior that was equal to
the incidence of influenza-related internet queries. The use
of such data in economics started with a paper by Choi and
Varian (2012) that showed their relevance for predicting
consumer behavior and initial unemployment claims for
the US.2
To the best of our knowledge, this is the first paper to
use Google data to forecast the monthly unemployment
rate in the US.3 However, there has already been some
work done for other countries, in particular for Germany
2 Preis, Moat, and Stanley (2013) find evidence that changes in
finance-related Google query volumes anticipate stock market moves;
Da, Engelberg, and Gao (2011) show the relevance of Google data
as a direct and timely measure of investors’ attention for a sample of
Russell 3000 stocks; Baker and Fradkin (forthcoming) develop a job-search activity index in order to analyze the reaction of the job-search
intensity to changes in the unemployment benefit duration in the US;
Billari, D’Amuri, and Marcucci (forthcoming) use web-search data related
to fertility as a leading indicator of the US birth rate; Vosen and Schmidt
(2011) show that Google-based indicators improve upon standard ones
for forecasting private consumption. Einav and Levin (2013) argue that
‘‘big data’’ (as internet-related data are also called) will have a great
impact on economic policy and economic research; see Askitas and
Zimmermann (2015) for further discussion and applications.
3 That is, the first other than the previous version of this paper
(D’Amuri & Marcucci, 2009). Previously, Ettredge, Gerdes, and Karuga
(2005) showed the existence of a positive association between search
(Askitas & Zimmermann, 2009), Italy (D’Amuri, 2009),
Israel (Suhoy, 2009), and more recently France (Fondeur &
Karamé, 2013). Central banks are also starting to publish
reports on the suitability of Google data as a complement to
more standard economic indicators (see for example Artola
& Galan, 2012; for Spain; McLaren & Shanbhorge, 2011; for
the United Kingdom; and Troy, Perera, & Sunner, 2012; for
Antenucci, Cafarella, Levenstein, Re, and Shapiro (2014)
present another interesting application of web-based data to
the forecasting of labor market dynamics. They compute an
index of job loss, searching and posting using information
made available via Twitter accounts, and show that such
an indicator has the potential to improve predictions of
initial claims dynamics for the US. Twitter-based data have
the clear advantage that they make it possible to track
individuals over time and to correlate their activity with
self-declared personal characteristics. At the same time,
they suffer more from sample selection for two reasons:
(i) the use of Twitter is less widespread than Google
searches, and (ii) individuals are not always willing to
share personal information about their job market status
on social media, whereas Google searches are anonymous.
Based on our results for the unemployment rate, we
believe that there is the potential for further applications
of Google query data to other fields of economics.
The paper is organized as follows. Section 2 describes
the data used to predict the US unemployment rate,
with a particular emphasis on the GI. Section 3 discusses
the models that we employ for predicting the US
unemployment rate, while Section 4 compares their outof-sample performances. Section 5 conducts a number of
robustness checks. Section 6 compares our predictions
with those of the Survey of Professional Forecasters and
with those of models based on labor force flows, while
Section 7 concludes.
2. Data and descriptive statistics
The data used in this paper come from various different
sources. The seasonally adjusted monthly unemployment
rate4 is the one released by the Bureau of Labor Statistics
(BLS), and comes from the current employment statistics
and the local area unemployment statistics for the national
and state levels, respectively.5
We complement these data with a well-known
leading indicator for the unemployment rate (see, for example, Montgomery et al., 1998): the weekly seasonally adjusted initial claims (IC) released by the US Department of Labor.6
We also consider alternative leading indicators that
are used routinely by economic and financial observers
in forming their expectations on labor market prospects,
but whose power in predicting unemployment has never
been assessed before in the economic literature, to the best
of our knowledge. Specifically, our exogenous variables
are employment expectations for the manufacturing and
non-manufacturing sectors from the Institute for Supply
Management's (ISM) Report on Business (EEM_t and EENM_t,
respectively), the current and six-month-ahead consumer
expectations from the US Consumer Confidence Survey of
the Conference Board (CE_t and CE6M_t, respectively), and
the index of economic policy uncertainty proposed by
Baker et al. (2016).7
2.1. Google-based data
The exogenous variable that is specific to this study is the weekly GI that summarizes the job searches performed through the Google search engine website, collected through Google Trends. The GI represents the number of web searches that have been made for a particular keyword in a given geographical area (r) within a given time. The search share for a particular keyword on day d is given by the number of web searches containing that keyword (V_{d,r}), normalized by division by the total number of web searches performed through Google for the same day and area (T_{d,r}), i.e., S_{d,r} = V_{d,r} / T_{d,r}. The search share for week τ is given by the simple average S_{τ,r} = (1/7) Σ_{d∈τ} S_{d,r}. For privacy and anonymity reasons, no
5 Many papers in the literature impose the presence of a unit
root or induce stationarity with a particular transformation (see for
example Rothman, 1998). However, Montgomery, Zarnowitz, Tsay,
and Tiao (1998) model the level of the monthly unemployment rate,
arguing that it is hard to justify unit-root non-stationarity for the US
unemployment rate because it is a rate that varies within a limited range.
Similarly, Koop and Potter (1999) argue that the unemployment rate
cannot exhibit a global unit root behavior, since it is bounded by 0 and
1. Previous versions of this paper tested for the presence of a unit-root
formally, using tests that are robust to non-linearities and structural
breaks, and found opposite results for the short and long samples. Thus,
we have adopted the more agnostic approach of Koop and Potter (1999)
and Montgomery et al. (1998), deciding not to restrict our models to the
stationary regime explicitly, and presenting all of our forecasting results
using the levels of the monthly US unemployment rate.
3 (continued) engine keyword usage data extracted from WordTracker's Top 500 Keyword Report and the monthly number of unemployed for the interval between September 2001 and March 2003. More recently, Tuhkuri (2016) showed that data on Google searches can sometimes help to predict the US unemployment rate, especially at short horizons. Nevertheless, Tuhkuri's (2016) results are not comparable to ours for the following reasons: (i) the web-search-based leading indicator used only tracks the interest in unemployment benefits, but some of the unemployed are not eligible for unemployment insurance (for example, individuals who are looking for their first job), and some of those who are eligible do not claim it (e.g., the take-up rate in the US around the Great Recession was about 45% according to East & Kuka, 2015); (ii) the author adopts data that have not been adjusted for seasonality, using a seasonal factor for the target variable only; and (iii) he does not align the weeks used to compute the monthly Google index with the reference weeks used by the BLS to calculate the official unemployment rate (see Section 2.1).
4 The unemployment rate for month t refers to individuals who do not have a job, but are available for work, in the week including the 12th day of month t (i.e., the reference week), and who have looked for a job in the four weeks prior to the reference week. The monthly unemployment rate is released on the first Friday of month t + 1.
6 Since seasonally adjusted data are issued only at the national level, we have performed our own seasonal adjustment for the state-level data using the X-13 ARIMA-SEATS filtering of the US Census Bureau. Regarding the timing, IC data for the jth week of month t (w_j(t)) are released by the Department of Labor on Thursday of the (j + 1)th week.
7 We also considered the alternative uncertainty indices suggested by Jurado, Ludvigson, and Ng (2015) and Orlik and Veldkamp (2014), but they had rather low correlations with both the unemployment rate and the GI (Table A.3 of the Appendix), and thus we did not include them among the exogenous variables used in the horse race.
absolute values of the GI components are available publicly. Google also scales the index GI_{t,r} to 100 in the week in which it reaches the maximum level. Thus, the Google index for week τ is given by GI_{τ,r} = 100 · S_{τ,r} / max_τ (S_{τ,r}), and represents the likelihood of a random user from that area doing a Google search for that particular keyword during that
week. The data are gathered using IP addresses, and are
made available to the public if the number of searches exceeds a certain – undeclared – threshold. Repeated queries
from a single IP address within a short period of time are
eliminated. The data are updated weekly and are available almost in real time starting with the week ending January 10, 2004. Google Trends data are freely available from the Google Trends website. The user-friendly
interface permits users to download time series for up to
five keywords or combinations of keywords for a given
country, or the same keyword for up to five countries. Once
the data have been retrieved, they can be downloaded in
csv format provided that the user has a Google account.
Last but not least, the index is calculated based on a sample of IPs that changes with time (daily) and with the IP
address. As a consequence, the indices can vary according
to the day and the IPs of download. Throughout the paper,
we compute our indices as the simple average of 24 downloads carried out over 12 different days from two different IPs. Nevertheless, taking the raw data coming from the
single downloads would not alter the results much, since
the elementary time series are nearly identical, with crosscorrelations that are never below 0.99.8
Our preferred indicator summarizes the incidence of
queries that include the keyword ‘‘jobs’’ out of all queries
performed through Google in the relevant week (this index is labeled G1 henceforth).9 We choose to use the keyword ‘‘jobs’’ as the main indicator of job-search activities
for two main reasons. First, we found the keyword ‘‘jobs’’
to be the most popular among different job-search-related
keywords. No absolute search volumes are available, but it
is possible to identify the most popular keywords by looking at relative incidences. Figure A.1 of the online Appendix
plots the monthly average values of the GI for the keywords
‘‘facebook’’, ‘‘youtube’’, and ‘‘jobs’’. Of these keywords, ‘‘facebook’’ has the highest incidence, while the GI for ‘‘jobs’’ is
constantly around 10. This means that, when searches for
‘‘facebook’’ were at their peak, there was still one keyword
search for ‘‘jobs’’ for every 10 searches for ‘‘facebook’’. The
results are similar when conducting the comparison with
the keyword ‘‘youtube’’, another popular search, which
reaches a maximum level of above 40 in our sample.
In addition to its popularity, the second reason why we
chose the keyword ‘‘jobs’’ is that we believe that it is used
8 The online Appendix (see Appendix A) reports graphs (Figure A.7 and
B.1 to B.5), descriptive statistics (Table A.4) and correlations (Table A.5)
for the 24 raw time series. We have also computed the forecasting results
using all of the individual GIs downloaded at each IP address and each
day, which are summarized for the G1 monthly averages in Figure A.8 of
the online Appendix. The other results are very similar and are available
from the authors upon request.
9 We have adjusted both the weekly and monthly indicators for
seasonality using the X13 ARIMA-SEATS filter. Google data have a peculiar
seasonality: there are usually troughs in November and December when
the denominator of the index gets inflated by Christmas-related searches.
most widely across the broadest range of job seekers, and
therefore is less sensitive to the presence of demand or
supply shocks that are specific to subgroups of workers,
which could bias the value of the GI and its ability to predict
the overall unemployment rate. Finally, it has to be noted
that the numerator of the index contains all of the keyword
searches that include the word ‘‘jobs’’, such as ‘‘public jobs’’
or ‘‘California jobs’’, for example. As a consequence, the
index is based on a broader set of queries that include the
word ‘‘jobs’’, some of which might actually be unrelated to
job searches. Such a measurement error is unlikely to be
correlated with the monthly unemployment rate over time
and, if anything, should reduce the predictive power of our
leading indicator.10
The variable also has other limitations. For example, individuals who are looking for a job through the internet may well not be selected randomly among all job seekers. Moreover,
the indicator captures overall job-search activities; that
is, the sum of searches performed by unemployed and
employed people. This limitation is made more severe by
the fact that, while unemployed job searches are believed
to follow the anti-cyclical variation of job separation rates,
on-the-job searches are normally assumed to be cyclical.
We acknowledge that this could introduce some bias into
our GI; nevertheless, if anything, such a bias should reduce
the precision of our forecasts based on Google data.
We should also consider the representativeness of
internet data in general, and of Google data in particular.
According to the July supplement to the 2011 Current
Population Survey, 30.1% of those unemployed use the
internet to look for a job. Moreover, according to comScore,
Google persistently had a dominant share of the search
engine market between 2004 and 2014, going from 56% in
2004 to 67% in 2014. Thus, these data have the potential
to track social phenomena if there is a connection between
what people search for on the web and their later behavior.
Our empirical analysis aligns the GI and IC data with
the relevant weeks for the unemployment survey.11 Figs. 1
and 2 depict the unemployment rate, along with all
of the exogenous variables evaluated in this article.12
IC and the GI are highly correlated with the level of
the unemployment rate: 0.64 for IC and 0.8 for the
GI; consumers’ and employers’ expectations show lower
10 However, we do subtract from the numerator the keyword searches
for ‘‘Steve Jobs’’, a popular search that includes the word ‘‘jobs’’, in order to improve the precision of our GI.
11 When constructing the GI or the IC for month t, we take into account
the week that includes the 12th of the month and the three preceding
weeks, which is the exact same interval that is used to calculate the
unemployment rate for month t, reported in the official statistics (what
we may call the ‘survey time’). When there are more than four weeks
between the reference week of month t and the following one in month
t + 1, we do not use either the GI or the IC for the week that is not used by
the official statistics (the first week after the reference week of month t) to
calculate the unemployment rate (see Figure A.2 of the online Appendix
for a visual description of the alignment procedure and for the timing of
our variables).
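The alignment rule in footnote 11 can be sketched as follows; the Sunday-to-Saturday week convention is an assumption of this illustration, and `survey_weeks` is a hypothetical helper, not part of the authors' code.

```python
# Sketch of the 'survey time' alignment in footnote 11: for month t, keep
# the week containing the 12th (the reference week) and the three weeks
# preceding it. Weeks are assumed to start on Sunday here.
from datetime import date, timedelta

def survey_weeks(year: int, month: int):
    """Start dates of the four weeks aligned with month (year, month)."""
    twelfth = date(year, month, 12)
    # Step back to the Sunday that opens the reference week.
    ref_start = twelfth - timedelta(days=(twelfth.weekday() + 1) % 7)
    return [ref_start - timedelta(weeks=k) for k in (3, 2, 1, 0)]

ws = survey_weeks(2014, 2)
print(ws)  # the reference week starts on Sunday 2014-02-09
```

Weekly GI and IC observations falling outside these four weeks would simply be dropped, as the footnote describes for months with five weeks between consecutive reference weeks.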
12 Table A.1 of the online Appendix reports the descriptive statistics for
the monthly US unemployment rate and various leading indicators for the
interval 2004.1–2014.2.
Fig. 1. US unemployment rate and leading indicators: employers’ and consumers’ expectations. Notes: The employment expectations for the
manufacturing (EEM) and non-manufacturing (EENM) sectors are from the ISM Report on Business, while the current (CE) and six months in advance
(CE6M) consumer expectations are from the US Consumer Confidence Survey of the Conference Board.
Fig. 2. US unemployment rate and leading indicators: initial claims, EPU, and the Google index. Notes: The initial claims (IC) are monthly averages of the
weekly IC. The Google index (GI) is the monthly average of Google ‘jobs’ searches. EPU is the economic policy uncertainty indicator.
correlations (Table A.2 of the online Appendix). Thus, the
correlations of the GI for ‘‘jobs’’ with the unemployment
rate are higher than those of the IC, which is the most
widely accepted leading indicator in the literature.
3. Forecasting models and methods
Our paper focuses on the multi-step pseudo out-of-sample forecasting performances of a variety of linear models for nowcasting and forecasting the US unemployment rate. Following the literature on forecasting
economic variables (Stock & Watson, 2003), we adopt a
standard autoregressive model with explanatory variables
as follows:
y^h_{t+h} = β_0 + β_1(L) y_t + β_2(L) x_t + ε_{t+h},    t = 1, 2, ..., T,    (1)

where y^h_{t+h} is the h-period-ahead unemployment rate at time t; x_t is a possible explanatory variable or leading indicator in period t; y_t is the period-t unemployment rate; and ε_{t+h} is an error term. β_1(L) and β_2(L) are lag polynomials, such that β_1(L) y_t = Σ_{j=1}^{p} β_{1j} y_{t-j} and β_2(L) x_t = Σ_{j=1}^{q} β_{2j} x_{t-j}. The lag orders p and q are selected recursively and sequentially at each forecast origin using the BIC.13 We consider from one- to twelve-step-ahead direct forecasts of the US unemployment rate by setting the parameter h = 1, 2, ..., 12 months.
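As a concrete illustration of the direct h-step regression in Eq. (1), the sketch below fits it by OLS on simulated data and forecasts from the last origin. The lag orders are fixed rather than BIC-selected, and all series and names (e.g. `direct_forecast`) are hypothetical.

```python
# Minimal sketch of Eq. (1): regress y_{t+h} on a constant, p lags of y,
# and q lags of x, then forecast from the last origin. Data are simulated;
# in the paper p and q are re-chosen by BIC at every forecast origin.
import numpy as np

def direct_forecast(y, x, h, p, q):
    maxlag = max(p, q)
    rows, targets = [], []
    for t in range(maxlag, len(y) - h):
        row = [1.0]
        row += [y[t - j] for j in range(1, p + 1)]   # y_{t-1}, ..., y_{t-p}
        row += [x[t - j] for j in range(1, q + 1)]   # x_{t-1}, ..., x_{t-q}
        rows.append(row)
        targets.append(y[t + h])
    beta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    last = [1.0] + [y[-1 - j] for j in range(1, p + 1)] \
                 + [x[-1 - j] for j in range(1, q + 1)]
    return float(np.array(last) @ beta)

rng = np.random.default_rng(2)
x = rng.normal(size=300)                 # a stand-in leading indicator
y = np.zeros(300)
for t in range(1, 300):                  # persistent target driven by lagged x
    y[t] = 0.9 * y[t - 1] + 0.5 * x[t - 1] + 0.1 * rng.normal()

fc3 = direct_forecast(y, x, h=3, p=2, q=2)
print(fc3)
```

Direct (rather than iterated) projection means a separate regression is estimated for each horizon h, which is exactly why the paper reports results horizon by horizon.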
We compare bivariate models which differ only in
the additional explanatory variable xt that is adopted for
forecasting the target variable. As possible explanatory
variables, we adopt a series of leading indicators for the
unemployment rate. We begin with the most commonly
used indicator, initial claims (IC), and use either its
monthly average or its weekly value. In addition, we also
evaluate the economic policy uncertainty (EPU) index, the
employment expectations for the manufacturing (EEM)
and non-manufacturing (EENM) sectors, plus current and
six-month-ahead consumer expectations (CE and CE6M,
respectively). Finally, we consider the GI for the keyword
‘‘jobs’’, as was explained in the previous section. In this
case, we add as an explanatory variable either its monthly
average or its weekly values.
We compare the h-step-ahead pseudo out-of-sample
forecasting performances of these models with that of
a univariate autoregression (AR(p)), where the lag p is
selected recursively by the BIC. We refer to the latter model
as our benchmark model:
y^h_{t+h} = β_0 + β_1(L) y_t + ε_{t+h},    t = 1, 2, ..., T.    (2)
Eqs. (1) and (2) are both estimated by OLS, in rolling
samples of two different lengths. In the short sample, they
are estimated over a rolling window of 37 observations
(R = 37) starting from February 2004.14 We also use a
longer time interval starting in 1997:7, the first month
in which all of the non-Google-based exogenous variables
used in this article are available. In the latter case, Eqs.
(1) and (2) are estimated over a rolling window of 116
observations (R = 116).15 Accordingly, the first 1-month-ahead out-of-sample forecast is made for 2007:3, while the
13 First, the lag p is selected based on the in-sample estimation of the
benchmark model AR(p) in (2). After choosing the ‘‘best’’ benchmark
specification, the lag q is chosen based on the in-sample estimation of Eq.
(1). The maximum lag length considered in both cases is four. Table A.6
in the online Appendix shows the average in-sample BICs for all of the
competing models at each forecast horizon.
14 We start from February 2004 because data for G1_{W1,t} and G1_{W2,t} (the weekly indices) are not available for January 2004.
15 We restrict the analysis to the period 1997:7 onward due to the availability of employment expectations in the non-manufacturing sectors.
first 12-month-ahead out-of-sample forecast is made for 2008:2.
Finally, the exercise is carried out with both revised and
(for the short sample only) real time data.16
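The rolling-window scheme of this section can be sketched as follows; an AR(1) fit by OLS stands in for the BIC-selected specifications, and all series are simulated, with R = 37 borrowed from the short-sample setup described above.

```python
# Sketch of the rolling-window pseudo out-of-sample exercise: at each
# origin, re-estimate on the most recent R observations and store the
# h-step-ahead direct forecast. An AR(1) stands in for the paper's
# BIC-selected models; the data are simulated.
import numpy as np

def rolling_ar1_forecasts(y, R, h):
    forecasts = []
    for origin in range(R - 1, len(y) - h):
        window = y[origin - R + 1 : origin + 1]           # last R observations
        X = np.column_stack([np.ones(R - h), window[:-h]])
        beta, *_ = np.linalg.lstsq(X, window[h:], rcond=None)
        forecasts.append(beta[0] + beta[1] * window[-1])  # direct h-step forecast
    return np.array(forecasts)

rng = np.random.default_rng(3)
y = np.zeros(160)
for t in range(1, 160):
    y[t] = 0.95 * y[t - 1] + 0.2 * rng.normal()

fc = rolling_ar1_forecasts(y, R=37, h=1)   # R = 37 as in the short sample
print(fc.shape)
```

Rolling (rather than expanding) windows keep the number of estimation observations fixed, so forecast accuracy comparisons across origins are not confounded by growing sample sizes.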
4. Out-of-sample forecasting comparison
4.1. Methodology
We evaluate the out-of-sample performances of our
competing models relative to the benchmark by comparing
the root mean squared forecast error (RMSFE) of each
model with that of the benchmark, and we test for equal
forecast accuracy using a Diebold and Mariano (1995,
DM) test. Moreover, we also compare forecasts based on
the cumulative sum of squared forecast error differences
(CSSED), introduced by Welch and Goyal (2008), to check
for possible forecast instabilities.
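The three evaluation tools just listed can be sketched as follows. The forecast errors are simulated, and the Diebold-Mariano statistic below uses a plain variance estimate rather than the HAC correction that is appropriate at multi-step horizons.

```python
# Sketch of the Section 4.1 toolkit: RMSFE ratio, a simple Diebold-Mariano
# statistic for squared-error loss, and the Welch-Goyal CSSED. Errors are
# simulated; no HAC variance correction is applied here.
import math
import numpy as np

def rmsfe(e):
    return float(np.sqrt(np.mean(e ** 2)))

def dm_test(e_model, e_bench):
    d = e_bench ** 2 - e_model ** 2                 # positive favors the model
    dm = d.mean() / (d.std(ddof=1) / math.sqrt(len(d)))
    pval = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(dm) / math.sqrt(2.0))))
    return dm, pval

def cssed(e_model, e_bench):
    return np.cumsum(e_bench ** 2 - e_model ** 2)   # upward drift favors the model

rng = np.random.default_rng(4)
e_bench = rng.normal(0, 1.0, 85)   # benchmark AR(p) forecast errors
e_model = rng.normal(0, 0.8, 85)   # a more accurate competing model

ratio = rmsfe(e_model) / rmsfe(e_bench)   # < 1: the model beats the benchmark
dm, pval = dm_test(e_model, e_bench)
curve = cssed(e_model, e_bench)
print(round(ratio, 3), round(pval, 3))
```

Plotting the CSSED curve over time is what reveals the forecast instabilities discussed later: a sustained upward slope marks periods (such as the Great Recession) in which the competing model dominates the benchmark.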
4.2. RMSFE comparison
Panel A of Table 1 reports the results of the forecast
comparison over the short sample 2004.2–2014.2 (first in-sample 2004.2–2007.2), for forecast horizons of 1 to 12
steps ahead. The first row reports the root mean squared
forecast error (RMSFE) of the benchmark AR(p) model
estimated over the short sample. For all other rows, the
values reported are the ratios of the RMSFE of the model
using the leading indicator in each row to that of the
benchmark model. A value below one means that the
model in the row has a lower RMSFE than the benchmark
(and thus outperforms it), and vice versa. Most of the
competing models tend to outperform the benchmark at
short forecast horizons. The models using initial claims,
the standard leading indicator for the unemployment rate,
outperform the benchmark at short term horizons (up to
four months ahead), but do not perform particularly well
at longer horizons. Alternative models that include the
economic policy uncertainty (EPU) index of Baker et al.
(2016), employment or consumer expectations tend to
fare relatively better. This is true in particular for models
that include EENM (employment expectations in the non-manufacturing sectors) or CE (consumer expectations on
current labor market dynamics); these are able to reduce
the RMSFE compared to the benchmark at all forecast
horizons, with the results for EENM leading to a rejection
of the null of equal forecast accuracy (DM test) for
forecasts from one to seven steps ahead. Moving to Google-based models, those using the monthly means of the GI
achieve the best performances in terms of RMSFEs, with an
advantage over the benchmark that actually increases with
the forecast horizon (1