The Kolmogorov distribution is as follows:
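The formula itself appears not to have survived; for reference, the standard CDF of the Kolmogorov distribution (the limiting distribution of the Kolmogorov–Smirnov statistic) is

\[
P(K \le x) \;=\; 1 - 2\sum_{k=1}^{\infty} (-1)^{k-1} e^{-2k^2 x^2}, \qquad x > 0.
\]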
I’m reblogging this article mostly for myself. If you’ve been following my blog, you’ll see that recently I published an article on organizing R code that mentioned using packages to organize that code. One of the advantages of doing so is that the work you’ve done is easily distributed. If the methods are novel in some way, you may even get a paper in J. Stat. Soft. or the R Journal that helps people learn how to use your software and exposes the methodology to a wider audience. Therefore we should know something about those journals. (I recently got a good reply on Reddit about the difference between these journals.)
When I was considering submitting my paper on psd to J. Stat. Soft. (JSS), I kept noticing that the time from “Submitted” to “Accepted” was nearly two years in many cases. I ultimately decided that was much too long a review process, no matter what the impact factor might be (and in two years’ time, would I even care?). Tonight I had the sudden urge to put together a dataset of times to publication.
Fortunately the JSS website is structured such that it only took a few minutes playing with XML scraping (*shudder*) to write the (R) code to reproduce the full dataset. I then ran an analysis with the changepoint package (itself published in JSS!) to see when shifts in the mean time to publication have occurred; the results are in the original post.
At the University of Utah I’ve taught MATH 1070 and MATH 3070. Both are introductory statistics classes, but I call MATH 1070 “Introductory Statistics for People Who Don’t Like Math” while MATH 3070 is “Introductory Statistics for People Who Do Like Math”, since the latter requires calculus and uses far more probability. In both classes, though, students need to learn what confidence intervals (CIs) do and don’t say, and I spend a lot of time debunking common misconceptions about them.
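One way to illustrate what a 95% CI actually guarantees — long-run coverage of the procedure, not a 95% probability statement about any single interval — is with a simulation. This is an illustrative base-R sketch (the seed, sample size, and distribution are mine, not from my course materials):

```r
# Repeat an experiment many times; each time, compute a 95% CI for the
# mean. Roughly 95% of the intervals should cover the true mean.
set.seed(112358)

true_mean <- 10
n <- 30        # sample size per experiment
reps <- 10000  # number of repeated experiments

covered <- replicate(reps, {
  x <- rnorm(n, mean = true_mean, sd = 2)
  ci <- t.test(x, conf.level = 0.95)$conf.int
  ci[1] <= true_mean && true_mean <= ci[2]
})

mean(covered)  # close to 0.95
```

The point of the simulation is that “95%” describes the procedure across repetitions; any one computed interval either covers the true mean or it doesn’t.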
UPDATE (11/2/17 3:00 PM MDT): I got the following e-mail from Brian Peterson, a well-known R finance contributor, over R’s finance mailing list:
I would strongly suggest looking at rugarch or rmgarch. The primary
maintainer of the RMetrics suite of packages, Diethelm Wuertz, was
killed in a car crash in 2016. That code is basically unmaintained.
I will see if this solves the problem. Thanks Brian! I’m leaving this post up, though, as a warning to others to avoid fGarch in the future. This was news to me; books often refer to fGarch, so this post can serve as a resource explaining to those working with GARCH models in R why not to use fGarch.
UPDATE (11/2/17 11:30 PM MDT): I tried a quick experiment with rugarch and it appears to be plagued by this problem as well. Below is some quick code I ran. I may post a full study as soon as tomorrow.
library(rugarch)

# Specify a GARCH(1, 1) process with fixed parameters for simulation
spec <- ugarchspec(variance.model = list(garchOrder = c(1, 1)),
                   mean.model = list(armaOrder = c(0, 0), include.mean = FALSE),
                   fixed.pars = list(alpha1 = 0.2, beta1 = 0.2, omega = 0.2))

# Simulate 1000 observations (after a 1000-observation burn-in)
x <- ugarchpath(spec = spec, n.sim = 1000, n.start = 1000)
srs <- x@path$seriesSim

# The same model, but with free parameters for fitting
spec1 <- ugarchspec(variance.model = list(garchOrder = c(1, 1)),
                    mean.model = list(armaOrder = c(0, 0), include.mean = FALSE))

# Fit to the full simulated series, then to only the first 100 observations
ugarchfit(spec = spec1, data = srs)
ugarchfit(spec = spec1, data = srs[1:100])
These days my research focuses on change point detection methods. These are statistical tests and procedures to detect a structural change in a sequence of data. An early example, from quality control, is detecting whether a machine became uncalibrated while producing a widget. There may be some measurement of interest, such as the diameter of a ball bearing, that we observe. The machine produces these widgets in sequence. Under the null hypothesis, the ball bearings’ mean diameter does not change, while under the alternative, at some unknown point in the manufacturing process the machine became uncalibrated and the mean diameter of the ball bearings changed. The test then decides between these two hypotheses.
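The hypothesis test above can be sketched with a CUSUM-type statistic — a standard approach to testing for a change in mean, not necessarily the specific method from my research. The data, seed, and shift size below are purely illustrative:

```r
# CUSUM-type test for a single change in mean, in base R.
# Statistic: max over k of |S_k - (k/n) S_n| / (sigma_hat * sqrt(n));
# large values suggest the mean shifted somewhere in the sequence.
set.seed(42)

# Simulated "ball bearing diameters": mean shifts from 5.00 to 5.05
# after observation 100
diam <- c(rnorm(100, mean = 5.00, sd = 0.02),
          rnorm(100, mean = 5.05, sd = 0.02))

cusum_stat <- function(x) {
  n <- length(x)
  s <- cumsum(x)
  sigma <- sd(x)  # crude scale estimate
  d <- abs(s - (seq_len(n) / n) * s[n]) / (sigma * sqrt(n))
  list(stat = max(d), khat = which.max(d))
}

res <- cusum_stat(diam)
res$khat  # estimated change point, near the true value of 100
res$stat  # compare to an asymptotic critical value (about 1.358 at 5%)
```

Under the null hypothesis this statistic converges to the supremum of the absolute value of a Brownian bridge, which follows the Kolmogorov distribution mentioned at the top of this post; 1.358 is (approximately) its 95% quantile.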