Causal design patterns for data analysts
I’m a little late to this one, but it is—without exaggeration—the best explanation of practical causal inference I’ve read. A straightforward description of the most common methods, with a running example of an industry-relevant application and guidance for which method to use in various situations. Don’t let this one fall off the back of your reading queue.
I especially like how Riederer phrases the article’s warrant and purpose:
The need to understand true causal (versus correlative) effects and to derive meaning and strategy from “found” historical data (as opposed to experimentally “produced” data) is nearly universal, but methods are scattered across epidemiology, economics, political science, and more… this post humbly aims to advertise these methods so analysts can add them into their mental index.
Too often, we resort to quick-and-dirty—and misleading—methods because we don’t have better options cached in our “mental index”. This post will help to change that.
Patterns, Predictions, and Actions: A story about machine learning
The announcement about this new textbook made waves a couple weeks ago, and with good reason. Two aspects of this book set it apart, and make it worth the read even for experienced ML practitioners.
First, an emphasis on decision-making and taking action based on models. The authors say outright in the introduction that
…predictions only become useful when they are acted upon. (Ch. 1, Prediction and action)
and they include chapters on Causality, Causal inference in practice, Sequential decision making, and Reinforcement learning.
Second, a chapter dedicated entirely to datasets: the history of machine learning benchmarks, background on some of the most famous benchmark datasets, and data-related pitfalls. This chapter is destined to be a commonly cited reference for machine learning work with standardized benchmarks.
The text is at the intro graduate level and is fully available online in both HTML and PDF formats.
Trustworthy Online Controlled Experiments: A Practical Guide to A/B Testing
Ron Kohavi, Diane Tang, and Ya Xu
I dusted this instant classic off the shelf because I have a paper copy, which came in handy when my power was out earlier this week. I should have dusted it off sooner, because it’s the most comprehensive and applicable guide to online (in the internet sense) experimentation that I know of.
What I like about this book is that it spends far more time on practical tips than on statistical details. The tips range from nitty-gritty implementation details, e.g.:
- make sure to run experiments for at least a week to reduce weekday-vs-weekend effects,
up to big-picture organizational philosophy, e.g.:
- experimentation only makes sense if your organization acknowledges that personal intuition is an unreliable way to evaluate new ideas.
In my experience, these kinds of issues are much more likely to block effective experimentation than a sub-optimal significance procedure, for example.
You should read this book with a critical eye, and not blindly accept every recommendation. Chapter 5, for example, suggests slowing down your website to better understand the gain you would see from speeding it up. I think this is organizationally impossible for all but the most profitable companies (“Dear senior exec, may I purposefully reduce revenue by 2% this quarter to prove that faster websites are better?” Uhhh, no.) It’s also wrong: we simply can’t extrapolate from slowdown results to speedup gains (without strong assumptions that would render the exercise pointless), and I was disappointed to see the authors try to justify the claim with mathematical shock-and-awe (no, a casual mention of a “first-order Taylor-series approximation” does not turn your lead into gold).
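To make that objection concrete, here is a minimal sketch (my notation, not the book’s) of what a first-order extrapolation has to assume:

$$
R(t) \;\approx\; R(t_0) + R'(t_0)\,(t - t_0)
$$

where $R(t)$ is the metric of interest (revenue, say) as a function of page load time $t$, and $t_0$ is the current load time. A slowdown experiment only measures the effect for $t > t_0$; carrying the same slope $R'(t_0)$ over to $t < t_0$ assumes the metric is locally linear and symmetric around the current load time, which is exactly the kind of strong assumption that makes the exercise pointless.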
So far, however, this example is the exception that proves the rule; it’s an awesome book, and I plan to read it cover-to-cover.
Chapter one is a good read in its own right and is available on the book’s website.