Daegu Haany University Hyangsan Library

Essays in High Dimensional Time Series Analysis

Detailed Information
Material type: Thesis (dissertation)
Title/Author: Essays in High Dimensional Time Series Analysis.
Personal author: Yousuf, Kashif.
Corporate author: Columbia University. Statistics.
Publication: [S.l.]: Columbia University, 2019.
Publication: Ann Arbor: ProQuest Dissertations & Theses, 2019.
Physical description: 227 p.
Host item entry: Dissertations Abstracts International 81-06B (Dissertation Abstracts International).
ISBN: 9781392688229
Dissertation note: Thesis (Ph.D.)--Columbia University, 2019.
General note: Source: Dissertations Abstracts International, Volume: 81-06, Section: B.
Advisor: Feng, Yang.
Use restriction: This item must not be sold to any third party vendors.
Summary: Due to rapid improvements in information technology, high dimensional time series datasets are frequently encountered in a variety of fields such as macroeconomics, finance, neuroscience, and meteorology. Some examples in economics and finance include forecasting low frequency macroeconomic indicators, such as GDP or the inflation rate, or financial asset returns, using a large number of macroeconomic and financial time series and their lags as possible covariates. In these settings, the number of candidate predictors ($p_T$) can be much larger than the number of samples ($T$), and accurate estimation and prediction are made possible by relying on some form of dimension reduction. Given this ubiquity of time series data, it is surprising that few works on high dimensional statistics discuss the time series setting, and even fewer have developed methods that utilize the unique features of time series data. This dissertation consists of three chapters, each of which is self-contained.

The first chapter deals with high dimensional predictive regressions, which are widely used in economics and finance. However, the theory and methodology are mainly developed assuming that the model is stationary with time invariant parameters. This is at odds with the prevalent evidence for parameter instability in economic time series. To remedy this, we present two $L_2$ boosting algorithms for estimating high dimensional models in which the coefficients are modeled as functions evolving smoothly over time and the predictors are locally stationary. The first method uses componentwise local constant estimators as the base learner, while the second relies on componentwise local linear estimators. We establish consistency of both methods and address the practical issues of choosing the bandwidth for the base learners and the number of boosting iterations. In an extensive application to macroeconomic forecasting with many potential predictors, we find that the benefits of modeling time variation are substantial and are present across a wide range of economic series. Furthermore, these benefits increase with the forecast horizon and with the length of the time series available for estimation. This chapter is jointly written with Serena Ng.

The second chapter deals with variable screening (targeting predictors) in high dimensional non-linear time series models. Rather than assume a specific parametric model a priori, this chapter introduces several model-free screening methods based on the partial distance correlation and developed specifically to deal with time dependent data. Methods are developed both for univariate models, such as nonlinear autoregressive models with exogenous predictors (NARX), and for multivariate models, such as linear or nonlinear VAR models. Sure screening properties are proved for our methods, which depend on the moment conditions and the strength of dependence in the response and covariate processes, amongst other factors. Finite sample performance of our methods is shown through extensive simulation studies, and we demonstrate the effectiveness of our algorithms at forecasting US market returns. This chapter is jointly written with Yang Feng.
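[Editor's note: the chapter's methods build on the partial distance correlation and are adapted to time dependent data; those procedures are not reproduced here. The following is only a minimal sketch of plain marginal distance correlation screening under i.i.d.-style assumptions, in the spirit of DC-SIS. The function names dcor and dc_screen, the cutoff d, and the use of the biased V-statistic estimator are illustrative choices, not the dissertation's.]

```python
import numpy as np

def dcor(x, y):
    """Empirical distance correlation between two univariate samples
    (biased V-statistic version; Szekely, Rizzo and Bakirov, 2007)."""
    x = np.asarray(x, float).reshape(-1, 1)
    y = np.asarray(y, float).reshape(-1, 1)
    a = np.abs(x - x.T)                                        # pairwise distances for x
    b = np.abs(y - y.T)                                        # pairwise distances for y
    A = a - a.mean(0) - a.mean(1, keepdims=True) + a.mean()    # double centering
    B = b - b.mean(0) - b.mean(1, keepdims=True) + b.mean()
    dcov2, dvarx, dvary = (A * B).mean(), (A * A).mean(), (B * B).mean()
    denom = np.sqrt(dvarx * dvary)
    return 0.0 if denom == 0 else np.sqrt(dcov2 / denom)

def dc_screen(X, y, d):
    """Rank the columns of X by marginal distance correlation with y
    and keep the indices of the top d covariates."""
    scores = np.array([dcor(X[:, j], y) for j in range(X.shape[1])])
    return np.argsort(scores)[::-1][:d]
```

For example, with T observations of p candidate predictors stacked in an array X of shape (T, p), dc_screen(X, y, d) returns the d columns most associated with the response; a parametric or nonparametric forecasting model can then be fit on this reduced set.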
The third chapter deals with variable selection for high dimensional linear stationary time series models. It analyzes the theoretical properties of Sure Independence Screening (SIS), and its two-stage combination with the adaptive Lasso, for high dimensional linear models with dependent and/or heavy tailed covariates and errors. We also introduce a generalized least squares screening (GLSS) procedure which utilizes the serial correlation present in the data. By utilizing this serial correlation when estimating the marginal effects, GLSS is shown to outperform SIS in many cases. For both procedures we prove two-stage variable selection consistency when combined with the adaptive Lasso.
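[Editor's note: GLSS and the theory under dependence and heavy tails are specific to the dissertation and are not reproduced here. Below is only a minimal sketch of the generic two-stage idea (marginal correlation screening followed by an adaptive Lasso on the retained covariates). The names sis_screen and sis_adaptive_lasso, the weight exponent gamma, and the use of scikit-learn's LassoCV for the initial and final fits are illustrative; cross-validated tuning ignores serial dependence and is used only to keep the sketch short.]

```python
import numpy as np
from sklearn.linear_model import LassoCV

def sis_screen(X, y, d):
    """Sure Independence Screening: rank covariates by absolute marginal
    sample correlation with the response and keep the top d indices."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    corr = Xc.T @ yc / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12)
    return np.argsort(np.abs(corr))[::-1][:d]

def sis_adaptive_lasso(X, y, d, gamma=1.0):
    """Two-stage estimator: SIS screening followed by an adaptive Lasso
    on the retained covariates (weights built from an initial Lasso fit)."""
    keep = sis_screen(X, y, d)
    Xs = X[:, keep]
    # Initial estimate used to build adaptive weights w_j = 1 / (|beta_j| + eps)^gamma.
    init = LassoCV(cv=5).fit(Xs, y).coef_
    w = 1.0 / (np.abs(init) + 1e-6) ** gamma
    # Standard reparameterization: scale columns by 1/w_j, fit an ordinary Lasso,
    # then rescale the coefficients back so the penalty equals sum_j w_j |beta_j|.
    Xw = Xs / w
    fit = LassoCV(cv=5).fit(Xw, y)
    beta = fit.coef_ / w
    return keep, beta
```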
Subject: Statistics.
Language: English
Link: URL : The full text of this item is provided by the Korea Education and Research Information Service (KERIS).
