Time Series and MCHT

Introduction

Over the past few weeks I’ve published articles about my new package, MCHT: an introduction, a further technical discussion, a demonstration of maximized Monte Carlo (MMC) hypothesis testing, a post on bootstrap hypothesis testing, and, last week, a look at handling multi-sample and multivariate data. This is the final article in which I explain the capabilities of the package: here I show how MCHT can handle time series data.

Continue reading


Beyond Univariate, Single-Sample Data with MCHT

Introduction

I’ve spent the past few weeks writing about MCHT, my new package for Monte Carlo and bootstrap hypothesis testing. After covering how to use MCHT safely, I showed how to use it for maximized Monte Carlo (MMC) testing, then bootstrap testing. One may think I’ve said all I want to say about the package, but in truth, I’ve only barely passed the halfway point!

Continue reading

My Tutorial Book on Anaconda, NumPy and Pandas Is Out: Hands-On Data Analysis with NumPy and Pandas

I announced months ago that one of my video courses, Unpacking NumPy and Pandas, was going to be turned into a book. Today I’m pleased to announce that this book is available!

Continue reading

Materials for Teaching Applied Statistics

Today is the first day of the new academic year at the University of Utah. This semester I am teaching MATH 3070: Applied Statistics I for the fourth time.

Continue reading

Learn Foundations of Python Natural Language Processing and Computer Vision with my Video Course: Applications of Statistical Learning with Python

I’m pleased to announce my fourth and final video course. The course has already been out for a couple of months by now, but that doesn’t mean it’s too late for me to write about it!

Continue reading

Time to Accept It: publishing in the Journal of Statistical Software

I’m reblogging this article mostly for myself. If you’ve been following my blog, you’ll have seen that I recently published an article on organizing R code that recommended using packages for that purpose. One of the advantages of doing so is that the work you’ve done is easily distributed. If the methods are novel in some way, you may even get a paper in J. Stat. Soft. or the R Journal that helps people learn how to use your software and exposes the methodology to a wider audience. It’s therefore worth knowing something about those journals. (I recently got a good reply on Reddit about the difference between these journals.)

The Geokook.

When I was considering submitting my paper on psd to J. Stat. Soft. (JSS), I kept noticing that the time from “Submitted” to “Accepted” was nearly two years in many cases. I ultimately decided that was much too long a review process, no matter what the impact factor might be (and in two years’ time, would I even care?). Tonight I had the sudden urge to put together a dataset of times to publication.

Fortunately the JSS website is structured such that it only took a few minutes playing with XML scraping (*shudder*) to write the (R) code to reproduce the full dataset.  I then ran a changepoint (published in JSS!) analysis to see when shifts in mean time have occurred.  Here are the results:

[Figure. Top: the number of days for a paper to go from ‘Submitted’ to ‘Accepted’, as a function of the cumulative issue index (each paper is an “issue”). Middle: the same in log2(time), with reference lines for one month, one year, and two years. Bottom: changepoint analyses.]

View original post 152 more words
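As an aside, here is a minimal sketch of how a scrape-and-changepoint workflow along those lines might look in R. This is not the original post’s code: the helper function, the assumption that each JSS article page lists “Submitted” and “Accepted” dates in YYYY-MM-DD form, and the commented-out driver lines are all illustrative guesses that would need adjusting to the live site.

```r
library(rvest)        # read_html(), html_text(): HTML scraping
library(changepoint)  # cpt.mean(): changepoint analysis (itself a JSS paper)

# Hypothetical helper: pull the "Submitted"/"Accepted" dates from one
# article page and return the number of days between them. The regular
# expressions assume the dates appear as YYYY-MM-DD in the page text.
days_to_accept <- function(url) {
  txt <- html_text(read_html(url))
  submitted <- as.Date(sub(".*Submitted:\\D*(\\d{4}-\\d{2}-\\d{2}).*", "\\1", txt))
  accepted  <- as.Date(sub(".*Accepted:\\D*(\\d{4}-\\d{2}-\\d{2}).*", "\\1", txt))
  as.numeric(accepted - submitted)
}

# Given a vector of article URLs collected from the JSS index pages:
# days <- vapply(article_urls, days_to_accept, numeric(1))

# Changepoint analysis on the mean time to acceptance:
# fit <- cpt.mean(days, method = "PELT")
# cpts(fit)   # estimated changepoint locations
# plot(fit)   # series with fitted segment means
```

The PELT method is just one reasonable choice here; the original analysis may have used different settings.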

How Should I Organize My R Research Projects?

My formal training in computer programming consists of two R programming labs required by my first statistics classes, and some JavaScript and database training. That’s about it. Most of my programming knowledge is self-taught. For a researcher who does a lot of programming but doesn’t consider programming to be the job, that’s fine… up to a point.

Continue reading