# Introduction

Months ago, I asked a question to the community: how should I organize my R research projects? After writing that post, doing some reading, then putting a plan in practice, I now have my own answer.

Now here is a blog post that has been sitting on the shelf far longer than it should have. Over a year ago I wrote an article about problems I was having when estimating the parameters of a GARCH(1,1) model in R. I documented the behavior of the parameter estimates (with a focus on one parameter in particular) and what I perceived to be pathological behavior when those estimates are computed using **fGarch**. I called for help from the R community, including sending the blog post out over the R Finance mailing list.

I’m reblogging this article mostly for myself. If you’ve been following my blog, you’ll see that I recently published an article on organizing R code that mentioned using packages to organize that code. One of the advantages of doing so is that the work you’ve done is easily distributed. If the methods are novel in some way, you may even get a paper in *J. Stat. Soft.* or the *R Journal* that helps people learn how to use your software and exposes the methodology to a wider audience. Therefore we should know something about those journals. (I recently got a good reply on Reddit about the difference between these journals.)

When I was considering submitting my paper on psd to J. Stat. Soft. (JSS), I kept noticing that the time from “Submitted” to “Accepted” was nearly two years in many cases. I ultimately decided that was much too long a review process, no matter what the impact factor might be (*and in two years’ time, would I even care?*). Tonight I had the sudden urge to put together a dataset of times to publication.

Fortunately the JSS website is structured such that it only took a few minutes playing with XML scraping (*shudder*) to write the R code to reproduce the full dataset. I then ran a change point analysis using the **changepoint** package (itself published in JSS!) to see when shifts in mean time to publication have occurred. Here are the results:

*Top: The number of days for a paper to go from “Submitted” to “Accepted” as a function of the cumulative issue index (each paper is an “issue”)…*

My formal training in computer programming consists of two R programming labs required by my first statistics classes, and some JavaScript and database training. That’s about it. Most of my programming knowledge is self-taught.^{1} For a researcher who does a lot of programming but doesn’t consider programming to be *the* job, that’s fine… up to a point.

At the University of Utah I’ve taught MATH 1070 and MATH 3070. Both are introductory statistics classes, but I call MATH 1070 “Introductory Statistics for People Who Don’t Like Math” while MATH 3070 is “Introductory Statistics for People Who *Do* Like Math”, since the latter requires calculus and uses *far more* probability. In both classes, though, students need to learn what confidence intervals (CIs) say and don’t say, and I spend a lot of time debunking common misconceptions about what a confidence interval says.
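One way to make the correct interpretation concrete (this is a minimal sketch of my own, not material from either course) is to simulate repeated sampling and check how often the interval procedure covers the true mean. It is the *procedure* that has 95% coverage; any single realized interval either contains the true mean or it does not.

```r
# Simulate coverage of the usual t-based 95% CI for a normal mean.
set.seed(2017)
true_mean <- 10
covers <- replicate(10000, {
  x <- rnorm(30, mean = true_mean, sd = 2)       # one sample of size 30
  ci <- t.test(x, conf.level = 0.95)$conf.int    # realized interval
  ci[1] <= true_mean && true_mean <= ci[2]       # did it cover the truth?
})
mean(covers)   # long-run coverage rate, close to 0.95
```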

**UPDATE (11/2/17 3:00 PM MDT):** I got the following e-mail from Brian Peterson, a well-known R finance contributor, over R’s finance mailing list:

> I would strongly suggest looking at rugarch or rmgarch. The primary maintainer of the RMetrics suite of packages, Diethelm Wuertz, was killed in a car crash in 2016. That code is basically unmaintained.

*I will see if this solves the problem. Thanks Brian! I’m leaving this post up, though, as a warning to others to avoid fGarch in the future. This was news to me; books often refer to fGarch, so this post could serve as a resource explaining to those looking to work with GARCH models in R why not to use fGarch.*

**UPDATE (11/2/17 11:30 PM MDT):** I tried a quick experiment with **rugarch** and it appears to be plagued by this problem as well. Below is some quick code I ran. I may post a full study as soon as tomorrow.

```r
library(rugarch)

# Simulate 1000 observations from a GARCH(1,1) with known parameters
spec <- ugarchspec(variance.model = list(garchOrder = c(1, 1)),
                   mean.model = list(armaOrder = c(0, 0), include.mean = FALSE),
                   fixed.pars = list(alpha1 = 0.2, beta1 = 0.2, omega = 0.2))
x <- ugarchpath(spec = spec, n.sim = 1000, n.start = 1000)
srs <- x@path$seriesSim

# Fit the same model, parameters now free, to the full series and a short subset
spec1 <- ugarchspec(variance.model = list(garchOrder = c(1, 1)),
                    mean.model = list(armaOrder = c(0, 0), include.mean = FALSE))
ugarchfit(spec = spec1, data = srs)
ugarchfit(spec = spec1, data = srs[1:100])
```

These days my research focuses on change point detection methods. These are statistical tests and procedures to detect a structural change in a sequence of data. An early example, from quality control, is detecting whether a machine became uncalibrated when producing a widget. There may be some measurement of interest, such as the diameter of a ball bearing, that we observe. The machine produces these widgets in sequence. Under the null hypothesis, the ball bearings’ mean diameter does not change, while under the alternative, at some unknown point in the manufacturing process the machine became uncalibrated and the mean diameter of the ball bearings changed. The test then decides between these two hypotheses.
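The ball-bearing setting is easy to try out with the **changepoint** package mentioned above. The diameters, shift size, and choice of `cpt.mean` below are my own illustrative assumptions, not anything from a real production line:

```r
library(changepoint)

set.seed(1)
# Simulated diameters: the machine drifts out of calibration after observation 150
diam <- c(rnorm(150, mean = 5.00, sd = 0.02),
          rnorm(100, mean = 5.05, sd = 0.02))

# Test for at most one change in mean (AMOC = "at most one change")
fit <- cpt.mean(diam, method = "AMOC")
cpts(fit)   # estimated change point location, near observation 150
```

Under the null of no change, `cpts(fit)` would come back empty; here the shift is large relative to the noise, so the estimated change point lands close to the true one.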