# Introducing Rank Data Analysis with Arkham Horror Data

## Introduction

Last week I analyzed player rankings of the Arkham Horror LCG classes. This week I explain what I did in that analysis. As I mentioned, this was my first attempt at inference with rank data, and I discovered how rich the subject is. I had to write many of the tools for the analysis myself, so you now have code I didn’t have access to when I started.

# Introduction

Now here is a blog post that has been sitting on the shelf far longer than it should have. Over a year ago I wrote an article about problems I was having when estimating the parameters of a GARCH(1,1) model in R. I documented the behavior of parameter estimates (with a focus on $\beta$) and perceived pathological behavior when those estimates are computed using fGarch. I called for help from the R community, including sending out the blog post over the R Finance mailing list.

# Making a Profit with Henry Wan in Arkham Horror: The Card Game

## Introduction

The Forgotten Age cycle of Arkham Horror has come to a close, and Fantasy Flight Games has already announced the next cycle, The Circle Undone. Not only that, they’ve announced two mythos packs at a rate that… surprised me. A new cycle announcement and two mythos pack announcements in less than two months? Am I the only one who finds the new pace of announcements surprising? Perhaps it means they want to get product out faster?

# Maximized Monte Carlo Testing with MCHT

## Introduction

I introduced MCHT two weeks ago and presented it as a package for Monte Carlo and bootstrap hypothesis testing. Last week, I delved into important technical details and showed how to make self-contained MCHTest objects that don’t suffer side effects from changes in the global namespace. In this article I show how to perform maximized Monte Carlo hypothesis testing using MCHT, as described in [1].

# Naïve Numerical Sums in R

## Introduction

The CDF of the Kolmogorov distribution (which I call $F(x)$) is as follows:

$F(x) = \frac{\sqrt{2 \pi}}{x} \sum_{k = 1}^{\infty} e^{-(2k - 1)^2 \pi^2/(8x^2)}$
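Since the terms of this series decay very quickly, $F(x)$ can be approximated well with a truncated partial sum. Here is a minimal sketch in R; the function name `pkolmogorov` and the cutoff `n.terms` are my own choices for illustration.

```r
# Approximate the Kolmogorov CDF F(x) by truncating the infinite sum;
# the terms shrink rapidly, so a modest number of terms suffices
pkolmogorov <- function(x, n.terms = 100) {
  k <- 1:n.terms
  (sqrt(2 * pi) / x) * sum(exp(-(2 * k - 1)^2 * pi^2 / (8 * x^2)))
}

pkolmogorov(1)  # roughly 0.73
```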

# Problems In Estimating GARCH Parameters in R

UPDATE (11/2/17 3:00 PM MDT): I got the following e-mail from Brian Peterson, a well-known R finance contributor, over R’s finance mailing list:

> I would strongly suggest looking at rugarch or rmgarch. The primary maintainer of the RMetrics suite of packages, Diethelm Wuertz, was killed in a car crash in 2016. That code is basically unmaintained.

I will see if this solves the problem. Thanks Brian! I’m leaving this post up, though, as a warning to others to avoid fGarch in the future. This was news to me; books often refer to fGarch, so this post could serve as a resource for those looking to work with GARCH models in R, explaining why not to use fGarch.

UPDATE (11/2/17 11:30 PM MDT): I tried a quick experiment with rugarch and it appears to be plagued by this problem as well. Below is some quick code I ran. I may post a full study as soon as tomorrow.

```r
library(rugarch)

# Specify a GARCH(1, 1) process with known parameters to simulate from
spec <- ugarchspec(variance.model = list(garchOrder = c(1, 1)),
                   mean.model = list(armaOrder = c(0, 0), include.mean = FALSE),
                   fixed.pars = list(alpha1 = 0.2, beta1 = 0.2, omega = 0.2))
# Simulate 1000 observations (after a 1000-observation burn-in)
x <- ugarchpath(spec = spec, n.sim = 1000, n.start = 1000)
srs <- x@path$seriesSim
# Fit a GARCH(1, 1) model to the full series and to its first 100 observations
spec1 <- ugarchspec(variance.model = list(garchOrder = c(1, 1)),
                    mean.model = list(armaOrder = c(0, 0), include.mean = FALSE))
ugarchfit(spec = spec1, data = srs)
ugarchfit(spec = spec1, data = srs[1:100])
```


These days my research focuses on change point detection methods. These are statistical tests and procedures for detecting a structural change in a sequence of data. An early example, from quality control, is detecting whether a machine became uncalibrated when producing a widget. There may be some measurement of interest, such as the diameter of a ball bearing, that we observe. The machine produces these widgets in sequence. Under the null hypothesis, the ball bearings’ mean diameter does not change, while under the alternative, at some unknown point in the manufacturing process the machine became uncalibrated and the mean diameter of the ball bearings changed. The test then decides between these two hypotheses.
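Written formally (the notation here is my own, not drawn from any particular test): if $X_1, \ldots, X_n$ are the measurements in production order and $X_t$ has mean $\mu_t$, the two hypotheses are

$H_0: \mu_1 = \mu_2 = \cdots = \mu_n \qquad \text{vs.} \qquad H_A: \mu_1 = \cdots = \mu_{k^*} \neq \mu_{k^* + 1} = \cdots = \mu_n \text{ for some unknown } k^*$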

# Stock Trading Analytics and Optimization in Python with PyFolio, R’s PerformanceAnalytics, and backtrader

DISCLAIMER: Any losses incurred based on the content of this post are the responsibility of the trader, not me. I, the author, neither take responsibility for the conduct of others nor offer any guarantees. None of this should be considered as financial advice; the content of this article is only for educational/entertainment purposes.

## Introduction

Having figured out how to perform walk-forward analysis in Python with backtrader, I want to have a look at evaluating a strategy’s performance. So far, I have cared about only one metric: the final value of the account at the end of a backtest relative to its initial value. This should not be the only metric considered. Most people care not only about how much money was made but also about how much risk was taken on. People are risk-averse; one of finance’s leading principles is that higher risk should be compensated by higher returns. Thus many metrics exist that adjust returns for how much risk was taken on. Perhaps when optimizing only with respect to the final return of the strategy we end up choosing highly volatile strategies that lead to huge losses in out-of-sample data. Adjusting for risk may lead to better strategies being chosen.
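The best-known such metric is probably the Sharpe ratio, which divides mean excess return by the standard deviation of returns. Below is a minimal sketch in R using PerformanceAnalytics (one of the packages named in the title); the simulated return series is made up purely for illustration.

```r
library(PerformanceAnalytics)
library(xts)

# Simulated daily strategy returns; illustrative only, not real data
set.seed(110117)
dates <- seq(as.Date("2017-01-01"), by = "day", length.out = 252)
rets <- xts(rnorm(252, mean = 0.0005, sd = 0.01), order.by = dates)

# Annualized Sharpe ratio: mean excess return over volatility,
# scaled for 252 trading days (risk-free rate assumed zero)
SharpeRatio.annualized(rets, Rf = 0, scale = 252)
```

A higher value means more return per unit of volatility, which is the sense in which risk-adjusted metrics can steer an optimizer away from fragile, high-variance strategies.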