Knowing something about methods and statistics is increasingly important in a world that is becoming ever more data-driven. From evidence-based medicine to evidence-based business strategy, in nonprofit, government, and business alike, basing decisions on empirical evidence is rapidly becoming the norm. Making sound evidence-based decisions requires that you know what you’re doing. You need to know what questions to ask, what data to collect, how to collect them and, perhaps most importantly, how to interpret the answers they present. To what extent can you draw inferences from the data? How likely are you to draw the right conclusions?
You can collect all the data in the world and run the most complicated analyses, but if you ask the wrong questions of your data or misinterpret the answers they provide, you can end up in a dangerous place: making wrong decisions that you think are backed by hard evidence.
So how do you get it right? This is where methods and statistics come in. The empirical sciences are about developing the best possible descriptions and explanations of how the world works, by systematically testing our ideas against empirical observations. Methods and statistics are all about how to ‘do science’. What hypotheses can we test? What’s the best way to test them? What should we measure, and how? Once we have the data, how do we summarize them to make them interpretable? How do we decide if the data support our hypotheses? How convincing are our results?
Given how long the scientific method has been around, and given that methods and statistics are continually being improved, you would think that the quality of our research findings would have steadily increased over the past decades. Unfortunately, the integrity of many recent research findings is being questioned, especially in the social and behavioral sciences. In both the medical and social sciences, several fraud cases have shaken entire scientific disciplines to their core. Failures to replicate key results are leading people to question the effectiveness of scientific ‘control’ mechanisms like peer review and the publication system. Questionable research practices, involving inappropriate use of statistics, are suspected to be much more influential than we all thought just a couple of years ago.
As social scientist Daniel Kahneman suggests, it’s time for the social sciences to clean house. We’ve tried to answer his call by offering a new specialization, consisting of four courses on methods and statistics and a capstone project. The idea is not just to explain the basic scientific principles, research designs, and statistical techniques, but also to show how their correct use supports scientific integrity and solid science, and how their misuse results in sloppy science that can potentially bring the whole system down. The goal is to help learners avoid questionable research practices in their own research projects and to recognize them in published articles.
The Methods and Statistics Specialization familiarizes you with the basic scientific concepts and gives you the tools to critically evaluate research—relevant skills in any field of study, by the way. It also helps you take your first steps on the path to performing your own statistical analyses using the programming language R, with no prior knowledge of programming required.