Course Meeting Times
5-day program, 7 hours per day
Program Objectives
This course is designed for people from a variety of backgrounds: managers and researchers from international development organizations, foundations, governments, and non-governmental organizations around the world, as well as trained economists looking to retool.
Course Coverage
Specifically, the following key questions and concepts will be covered:
- Why and when is a rigorous evaluation of social impact needed?
- The common pitfalls of evaluations, and why randomization helps.
- The key components of a good randomized evaluation design.
- Alternative techniques for incorporating randomization into project design.
- How do you determine the appropriate sample size, measure outcomes, and manage data?
- Guarding against threats that may undermine the integrity of the results.
- Techniques for the analysis and interpretation of results.
- How to maximize policy impact and test external validity.
- The whys and hows of comparative cost-effectiveness analysis for informed policy making.
The program will achieve these goals through a diverse set of integrated teaching methods. Expert researchers will teach both theoretical and example-based classes, complemented by workgroups in which participants apply key concepts to real-world examples.
Curriculum
Introduction to effective evaluations
A program evaluation attempts to answer a basic question: how would an individual have fared in the absence of the program? This unit examines the main methods of program evaluation, including both non-randomized "retrospective" evaluations (such as difference-in-differences estimation, multivariate regression, and panel regression) and randomized evaluations. Different methods can produce inconsistent impact estimates for the same program, with opposing policy implications; a stylized simulation of this divergence appears after the list below. The unit lays out why randomized evaluations produce the most reliable impact estimates.
- Why evaluations matter
- Different evaluation types
- Why randomized evaluations are the gold standard
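To make the contrast concrete, here is a minimal simulation, not taken from the course materials, with every name and number an assumption for illustration. A latent "motivation" trait drives both program take-up and the outcome, so a naive retrospective comparison of participants with non-participants overstates the effect, while random assignment recovers it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_effect = 2.0  # assumed effect of the program on the outcome

# A latent trait ("motivation") raises both take-up and the outcome,
# so self-selected participants differ systematically from everyone else.
motivation = rng.normal(size=n)
baseline = 10 + 3 * motivation + rng.normal(size=n)

# Retrospective comparison: more motivated people opt in.
opted_in = rng.random(n) < 1 / (1 + np.exp(-motivation))
naive = (baseline[opted_in] + true_effect).mean() - baseline[~opted_in].mean()

# Randomized evaluation: a coin flip decides treatment, independent of motivation.
treated = rng.random(n) < 0.5
rct = (baseline[treated] + true_effect).mean() - baseline[~treated].mean()

print(f"true effect:         {true_effect:.2f}")
print(f"naive retrospective: {naive:.2f}")  # biased upward by selection
print(f"randomized estimate: {rct:.2f}")    # close to the true effect
```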
Methods of Randomization
This unit introduces the design stage of a randomized evaluation. It focuses on inventive and elegant ways to incorporate randomization into a program, given the practical, budgetary, and political constraints faced by researchers and program implementers. A short example contrasting individual- and school-level assignment follows the list below.
- Specifying the program to be studied
- Determining the level of intervention: individual, school, country, or other
- Choosing the method of randomization
- Randomizing in the real world
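As an illustrative sketch (hypothetical numbers throughout), the snippet below contrasts two levels of randomization: flipping a coin for every student versus assigning whole schools, where each student inherits the school's assignment. Cluster assignment limits within-school spillovers at the cost of fewer independent units.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 40 schools, 50 students each.
n_schools, students_per_school = 40, 50
school_of = np.repeat(np.arange(n_schools), students_per_school)

# Individual-level randomization: each student is an independent coin flip.
individual_treat = rng.random(n_schools * students_per_school) < 0.5

# Cluster (school-level) randomization: assign half the schools to
# treatment, then every student inherits their school's assignment.
school_treat = np.zeros(n_schools, dtype=bool)
school_treat[rng.permutation(n_schools)[: n_schools // 2]] = True
cluster_treat = school_treat[school_of]

print("individually treated:", individual_treat.sum(), "students")
print("treated via schools: ", cluster_treat.sum(), "students in",
      school_treat.sum(), "schools")
```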
Evaluation Design
The process of designing an evaluation can help policymakers and implementers critically analyze their program, both to pinpoint its key objectives and to identify indicators that can measure them. A well-designed evaluation answers not only the question of what the program's impact was, but also how that impact occurred. This unit reviews common practices for designing survey instruments and for determining the sample size needed to detect an effect; a standard power calculation is sketched after the list below.
- Choosing objectives
- Identifying what variables to survey
- Selecting the population and calculating sample size
- Common pitfalls and their solutions
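One standard textbook approach to the sample-size question, shown here as a sketch rather than the course's exact procedure, is the normal-approximation formula for a two-arm comparison: n per arm = 2·sd²·(z₁₋α/₂ + z₁₋β)² / δ², where δ is the minimum detectable effect and power = 1 − β.

```python
import math
from scipy.stats import norm

def sample_size_per_arm(effect, sd=1.0, alpha=0.05, power=0.8):
    """n = 2 * sd^2 * (z_{1-alpha/2} + z_{1-beta})^2 / effect^2."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for a two-sided test
    z_beta = norm.ppf(power)           # quantile matching the desired power
    return math.ceil(2 * (sd * (z_alpha + z_beta) / effect) ** 2)

# Detecting a 0.2-standard-deviation effect at 80% power, 5% significance:
print(sample_size_per_arm(effect=0.2))  # -> 393 respondents per arm
```

Because n grows with 1/δ², halving the minimum detectable effect roughly quadruples the required sample, which is why the choice of δ dominates evaluation budgets.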
Implementation
Impact estimates are useful only if the evaluation is implemented correctly and the resulting data are properly analyzed. Participants will review problems frequently encountered when implementing and analyzing randomized evaluations; a worked example of the intention-to-treat and treatment-on-the-treated estimators follows the list below.
- Determining whether an estimate is spurious or significant
- Attrition of study subjects
- Non-compliance and "contamination" of treatment/control designation
- Troubleshooting problems
- Hawthorne and John Henry effects
- Intention to Treat and Treatment on the Treated
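To illustrate the last item, here is a small simulation in which only an assumed 60% of those assigned to treatment actually take up the program. The intention-to-treat (ITT) estimate compares groups by assignment, preserving the randomization; the treatment-on-the-treated (ToT) estimate rescales the ITT by the take-up gap (the Wald estimator).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
true_effect = 1.0  # assumed effect of actually receiving the program

assigned = rng.random(n) < 0.5
# Partial compliance: 60% of those assigned take up the program,
# and (for simplicity) no one in the control group does.
takes_up = assigned & (rng.random(n) < 0.6)
outcome = rng.normal(size=n) + true_effect * takes_up

# Intention to Treat: compare by *assignment*, preserving randomization.
itt = outcome[assigned].mean() - outcome[~assigned].mean()

# Treatment on the Treated (Wald estimator): rescale the ITT by the
# difference in take-up rates between the two assignment groups.
take_up_gap = takes_up[assigned].mean() - takes_up[~assigned].mean()
tot = itt / take_up_gap

print(f"ITT: {itt:.2f}   ToT: {tot:.2f}   true effect: {true_effect:.2f}")
```

Comparing by actual take-up rather than by assignment would reintroduce selection bias; rescaling the ITT by the take-up gap avoids that.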
Teaching Methods
We will present material through a combination of interactive lectures, case studies, and relevant exercises. Participants will have group time to discuss cases with one another before lectures, as well as to work jointly through a set of preparatory exercises designed to focus attention on key points. Additionally, participants will form groups of four to five that will work through the design process for a randomized evaluation of a development project of their choosing. Faculty and teaching assistants will aid the groups in this project, with the work culminating in presentations at the end of the week.
By examining both successful and problematic evaluations, participants will better understand why the specific details of a randomized evaluation matter. Furthermore, the program offers extensive opportunities to apply these ideas, ensuring that participants leave with the knowledge, experience, and confidence necessary to conduct their own randomized evaluations.