Policy Evaluation

Objectives

This course will give students an overview of the main policy evaluation methods. The perspective will be microeconomic and applied, with econometric derivations employed only when they help intuition. The course aims to strengthen the methodological background of students interested in conducting applied research.

General characterization

Code

2181

Credits

3.5

Responsible teacher

Pedro C. Vicente

Hours

Weekly - Available soon

Total - Available soon

Teaching language

English

Prerequisites

Available soon

Bibliography

I. General readings:

Khandker, Shahidur R., Gayatri B. Koolwal, and Hussain A. Samad (2010), Handbook on Impact Evaluation: Quantitative Methods and Practices, The World Bank;

Angrist, Joshua D., and Jörn-Steffen Pischke (2008), Mostly Harmless Econometrics: An Empiricist's Companion, Princeton University Press;

Deaton, Angus (2010), Instruments, Randomization, and Learning about Development, Journal of Economic Literature, 48(2): 424–455;

Imbens, Guido W., and Jeffrey M. Wooldridge (2009), Recent Developments in the Econometrics of Program Evaluation, Journal of Economic Literature, 47(1): 5–86.

II. The methods:

  • 1. The Policy Evaluation Question and Randomization.

(Chapters 2, 3, and 12) Khandker, Shahidur R., Gayatri B. Koolwal, and Hussain A. Samad (2010), Handbook on Impact Evaluation: Quantitative Methods and Practices, The World Bank;

    Duflo, Esther, Rachel Glennerster, and Michael Kremer (2006), Using Randomization in Development Economics Research: A Toolkit, NBER Technical Working Paper 333;

    More advanced:

    Heckman, James J., and Edward Vytlacil (2005), Structural Equations, Treatment Effects, and Econometric Policy Evaluation, Econometrica, 73(3): 669–738.

    Applications:

    LaLonde, Robert (1986), Evaluating the Econometric Evaluations of Training Programs with Experimental Data, American Economic Review, 76: 604-620;

    Krueger, Alan (1999), Experimental Estimates of Education Production Functions, Quarterly Journal of Economics, 114(2): 497–532;

    Cole, Shawn, Xavier Giné, Jeremy Tobacman, Petia Topalova, Robert Townsend, and James Vickery (2013), Barriers to Household Risk Management: Evidence from India, American Economic Journal: Applied Economics, 5(1): 104–135.
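
    As a minimal illustration of the experimental estimator, the Stata sketch below uses hypothetical dataset and variable names (mydata, y, treat, age), not data from any of the papers above:

        * Under random assignment, a difference in means identifies the
        * average treatment effect. All names here are illustrative.
        use mydata, clear
        ttest age, by(treat)       // balance check on a pre-treatment covariate
        regress y treat, robust    // ATE estimate with robust standard errors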

  • 2. Propensity Score Matching.

    (Chapters 4 and 13) Khandker, Shahidur R., Gayatri B. Koolwal, and Hussain A. Samad (2010), Handbook on Impact Evaluation: Quantitative Methods and Practices, The World Bank;

    Becker, Sascha, and Andrea Ichino (2002), Estimation of Average Treatment Effects Based on Propensity Scores, Stata Journal, 2(4): 358–377;

    More advanced:

    Heckman, James J., Hidehiko Ichimura, and Petra Todd (1997), Matching as an Econometric Evaluation Estimator: Evidence from Evaluating a Job Training Programme, Review of Economic Studies, 64(4): 605–654;

    Rosenbaum, Paul R., and Donald B. Rubin (1983), The Central Role of the Propensity Score in Observational Studies for Causal Effects, Biometrika, 70(1): 41–55.

    Application:

    Dehejia, Rajeev, and Sadek Wahba (2002), Propensity Score-Matching Methods for Nonexperimental Causal Studies, Review of Economics and Statistics, 84(1): 151–161;

    Smith, Jeffrey, and Petra Todd (2005), Does Matching Overcome LaLonde’s Critique of Nonexperimental Estimators?, Journal of Econometrics, 125: 305-353.
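
    As a minimal illustration (hypothetical names throughout), the built-in teffects psmatch command in Stata 13 and later implements propensity score matching; Becker and Ichino (2002) document earlier user-written alternatives:

        * Estimate the treatment effect on the treated (ATT) by
        * nearest-neighbor matching on a propensity score modeled as
        * a logit of treatment on covariates x1 and x2.
        use mydata, clear
        teffects psmatch (y) (treat x1 x2, logit), atet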

  • 3. Difference-in-differences.

    (Chapters 5 and 14) Khandker, Shahidur R., Gayatri B. Koolwal, and Hussain A. Samad (2010), Handbook on Impact Evaluation: Quantitative Methods and Practices, The World Bank;

    Bertrand, Marianne, Esther Duflo, and Sendhil Mullainathan (2004), How Much Should We Trust Differences-in-Differences Estimates?, Quarterly Journal of Economics, 119(1): 249–275;

    Application:

    Duflo, Esther (2001), Schooling and Labor Market Consequences of School Construction in Indonesia: Evidence from an Unusual Policy Experiment, American Economic Review, 91(4): 795–813;

    Card, David, and Alan Krueger (1994), Minimum Wages and Employment: A Case Study of the Fast-Food Industry in New Jersey and Pennsylvania, American Economic Review, 84: 772–793.
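
    A minimal two-group, two-period difference-in-differences sketch in Stata (hypothetical names; following Bertrand, Duflo, and Mullainathan (2004), standard errors are clustered at the level at which treatment varies):

        * The coefficient on the interaction term is the DiD estimate.
        use mypanel, clear
        generate treat_post = treated * post
        regress y treated post treat_post, vce(cluster state)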

  • 4. Instrumental Variable Estimation.

    (Chapters 6 and 15) Khandker, Shahidur R., Gayatri B. Koolwal, and Hussain A. Samad (2010), Handbook on Impact Evaluation: Quantitative Methods and Practices, The World Bank;

    Angrist, Joshua, and William Evans (1998), Children and Their Parents' Labor Supply: Evidence from Exogenous Variation in Family Size, American Economic Review, 88: 450–477;

    Angrist, Joshua, Guido Imbens, and Donald Rubin (1996), Identification of Causal Effects Using Instrumental Variables, Journal of the American Statistical Association, 91(434): 444–455;

    See the published comments by Heckman, by Moffitt, and by Robins and Greenland.

    Imbens, Guido, and Joshua Angrist (1994), Identification and Estimation of Local Average Treatment Effects, Econometrica, 62(2): 467–475;

    Heckman, James J. (1997), Instrumental Variables: A Study of Implicit Behavioral Assumptions Used in Making Program Evaluations, Journal of Human Resources, 32(3): 441–462;

    See the comment by Angrist and Imbens and the reply by Heckman.

    Application:

    Angrist, Joshua, and Alan Krueger (1991), Does Compulsory School Attendance Affect Schooling and Earnings?, Quarterly Journal of Economics, 106(4): 979–1014;

    Bound, John, David Jaeger, and Regina Baker (1995), Problems with Instrumental Variables Estimation When the Correlation Between the Instruments and the Endogenous Explanatory Variable Is Weak, Journal of the American Statistical Association, 90(430): 443–450.
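
    A minimal two-stage least squares sketch in Stata (hypothetical names; z is an instrument assumed to shift the endogenous regressor x while affecting y only through x):

        use mydata, clear
        regress x z w1 w2                       // first stage: instrument relevance
        ivregress 2sls y w1 w2 (x = z), vce(robust)
        estat firststage                        // weak-instrument diagnostics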

  • 5. Regression Discontinuity Design.

    (Chapters 7 and 16) Khandker, Shahidur R., Gayatri B. Koolwal, and Hussain A. Samad (2010), Handbook on Impact Evaluation: Quantitative Methods and Practices, The World Bank;

    Lee, David S., and Thomas Lemieux (2010), Regression Discontinuity Designs in Economics, Journal of Economic Literature, 48(2): 281–355.

    More advanced:

    Hahn, Jinyong, Petra Todd, and Wilbert van der Klaauw (2001), Identification of Treatment Effects by Regression Discontinuity Design, Econometrica 69 (1): 201–209.

    Application:

    Angrist, Joshua, and Victor Lavy (1999), Using Maimonides' Rule to Estimate the Effect of Class Size on Scholastic Achievement, Quarterly Journal of Economics, 114(2): 533–575;

    Ludwig, Jens, and Douglas L. Miller (2007), Does Head Start Improve Children's Life Chances? Evidence from a Regression Discontinuity Design, Quarterly Journal of Economics, 122(1): 159–208.
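
    A minimal sharp regression discontinuity sketch in Stata (hypothetical names; run is the running variable with the cutoff normalized to zero, and the bandwidth of 5 is arbitrary):

        use mydata, clear
        generate treat = (run >= 0)        // treatment switches on at the cutoff
        generate treat_run = treat * run
        * Local linear regression with separate slopes on each side:
        regress y treat run treat_run if abs(run) <= 5, robust
        * The user-written rdrobust command (available via ssc install
        * rdrobust) automates bandwidth selection.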

A course webpage (Moodle) will be used to disseminate information about the course and the slides used in class.

Teaching method

Taking into consideration the fundamental purpose of this course, the most suitable learning methods are:
•    learning by examples (demonstration)
•    learning by doing (practice by doing)

Evaluation method

Presentation of a replication of the results in a research paper (30% of the grade): To be done in groups (specific size depending on class size), lasting approximately 45 minutes (30 minutes of presentation, 15+ minutes of discussion). Each group will prepare slides (these can be very short) and replication files, which will then be posted on the class website.
Replication files will include data files, commented Stata do file(s) referring to the tables in the paper, and corresponding Stata log file(s). Please send the replication files to both the lecturer and the grader. During the presentation students should: (i) motivate the research question, (ii) present the empirical results clearly and in an organized way, (iii) comment on replication difficulties, and (iv) respond appropriately to questions from the class.
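
As an indication, a submitted do file might follow a skeleton like the one below (file, table, and variable names are illustrative); running it produces the corresponding log file:

    log using replication_group1.log, replace text
    use paper_data, clear
    * Table 1: descriptive statistics
    summarize y treat x1 x2
    * Table 2: main estimates
    regress y treat x1 x2, robust
    log close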
     
Problem sets (20% of the grade): To be done in groups. Please send all assignments to the grader of the course by the due date. These are Stata exercises, so students should provide commented do files with corresponding log files.

Final exam (50% of the grade).

Participation in class is taken into account in marginal cases: all students are required to read the papers (for the applications) in advance and to comment on the presentations during class.

Subject matter

The course will begin by defining the policy evaluation question and introducing the ideas of causality and randomization. It will then cover propensity score matching, difference-in-differences, instrumental variable estimation, and regression discontinuity design. For each topic, students will replicate the results of a well-known empirical paper and present their work in class.