Project Summary: Quantitative Design
At this point in the course, you have been introduced to the major developments in quantitative policy evaluation designs. Now you will have the opportunity to develop a defensible quantitative design that takes into account the strengths, limitations, and tradeoffs involved in employing these designs to address major policy problems.
For this Assignment, use all of the information you have gathered so far about your Final Project, your understanding of the program, stakeholders, and theoretical and logical framework of the project, and some of your earlier thinking regarding an appropriate evaluation design.
QUESTION: Write a 2- to 3-page defensible quantitative design for your selected program that addresses the following (an illustrative sketch of the three design options appears after this list):
Explain how you will select treatment and control groups if the design is a field experiment.
Explain what techniques you might use to address selection bias if the design is a quasi-experiment.
Explain how you might address internal validity if the design is a nonexperimental design.
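For orientation only, the following is a minimal sketch in Python of the mechanics behind the three options. All data are simulated and all names (covariate, self_selected, the assumed program effect of 2.0) are invented for illustration, not taken from any actual program. It shows random assignment for a field experiment, a coarse stratification adjustment standing in for matching or propensity-score methods in a quasi-experiment, and a multiple regression with observed covariates as one way to strengthen internal validity in a nonexperimental design.

    # Illustrative sketch only -- hypothetical, simulated program data; numpy only.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000

    # --- 1. Field experiment: randomly assign units to treatment and control ---
    units = np.arange(n)
    treated_ids = rng.permutation(units)[: n // 2]       # random half receives the program
    is_treated = np.isin(units, treated_ids).astype(float)

    # Simulated outcome: baseline + covariate effect + true program effect (2.0) + noise
    covariate = rng.normal(size=n)                        # e.g., prior service usage
    outcome = 5.0 + 1.5 * covariate + 2.0 * is_treated + rng.normal(size=n)
    print("Experiment, difference in means:",
          round(outcome[is_treated == 1].mean() - outcome[is_treated == 0].mean(), 2))

    # --- 2. Quasi-experiment: participation depends on the covariate (selection bias) ---
    self_selected = (covariate + rng.normal(size=n) > 0).astype(float)
    outcome_q = 5.0 + 1.5 * covariate + 2.0 * self_selected + rng.normal(size=n)
    naive = outcome_q[self_selected == 1].mean() - outcome_q[self_selected == 0].mean()

    # Coarse stratification on the covariate (a simple stand-in for matching or
    # propensity-score methods): compare participants and nonparticipants within strata.
    strata = np.digitize(covariate, np.quantile(covariate, [0.25, 0.5, 0.75]))
    within_stratum_gaps = []
    for s in np.unique(strata):
        m = strata == s
        t = outcome_q[m & (self_selected == 1)]
        c = outcome_q[m & (self_selected == 0)]
        if len(t) and len(c):
            within_stratum_gaps.append(t.mean() - c.mean())
    print("Quasi-experiment, naive vs. stratified estimate:",
          round(naive, 2), round(float(np.mean(within_stratum_gaps)), 2))

    # --- 3. Nonexperimental design: multiple regression with observed covariates ---
    X = np.column_stack([np.ones(n), self_selected, covariate])   # intercept, treatment, control
    beta, *_ = np.linalg.lstsq(X, outcome_q, rcond=None)
    print("Regression-adjusted treatment coefficient:", round(beta[1], 2))

In practice, evaluators typically estimate propensity scores with a logistic model and check covariate balance, and they probe internal validity with additional controls, placebo comparisons, or sensitivity analyses; the stratification above is a simplified stand-in chosen to keep the sketch dependency-light.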
_____________________________________________________________________________________
Introduction to Quantitative Evaluation
Introduction
Some of you may have experience filling out federal forms for student financial aid or may have received Pell grants. If so, you know it is a complex procedure. What you may not know is that the federal government is implementing procedures to reduce the complexity of the application process. This effort is the result of quantitative evaluation experiments that documented a direct relationship between simplifying the application process and increases in college enrollment, grant awards, and applications overall.
In addition to quantitative evaluation, cost-benefit and cost-effectiveness analyses are methods that have been used for many decades in policy analysis and program evaluation. In the past decade, there have been several developments in quantitative research design, two of which are particularly significant for evaluation. This week, you review quantitative approaches that range from traditional to newer methods and apply them to specific scenarios.
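For reference alongside the economic evaluation reading, the short sketch below (Python, entirely hypothetical program figures) shows the two core calculations: the net present value of a program's stream of net benefits, and an incremental cost-effectiveness ratio comparing a program with an alternative.

    # Hypothetical figures for illustration only.

    def npv(cash_flows, rate):
        """Net present value of yearly net benefits (benefits - costs), year 0 first."""
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

    # Cost-benefit analysis: yearly net benefits of an assumed job-training program
    net_benefits = [-120_000, 30_000, 45_000, 45_000, 40_000]   # year 0 is the upfront cost
    print("NPV at a 3% discount rate:", round(npv(net_benefits, 0.03), 2))

    # Cost-effectiveness analysis: cost per additional outcome unit versus an alternative
    cost_program, cost_alternative = 500_000, 320_000            # assumed total costs
    effect_program, effect_alternative = 250, 160                # e.g., participants placed in jobs
    icer = (cost_program - cost_alternative) / (effect_program - effect_alternative)
    print("Incremental cost per additional placement:", round(icer, 2))

Real economic evaluations also involve decisions about which costs and benefits to count, what discount rate to use, and whose perspective is taken, so these figures are placeholders only.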
Learning Objectives
Students will:
Evaluate quantitative design methods for assessing the impact of public programs and initiatives on the usage of medical services
Examine the impact of nudging in quantitative policy evaluation (see the sketch following this list)
Analyze techniques of quantitative design
Analyze internal validity for nonexperimental designs
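To connect the nudging objective with the designs above, here is a minimal hypothetical sketch: when a default (opt-out versus opt-in enrollment) is randomly assigned, the nudge evaluation is itself a field experiment, and its effect can be estimated as a simple difference in enrollment rates. The sample size and enrollment probabilities are invented.

    # Hypothetical simulation: effect of an opt-out default on program enrollment.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 2000

    # Randomly assign half of new applicants to an opt-out (default-enrolled) form.
    opt_out_default = rng.integers(0, 2, size=n)        # 1 = opt-out form, 0 = opt-in form

    # Assumed true enrollment probabilities: 0.85 under opt-out, 0.40 under opt-in.
    p_enroll = np.where(opt_out_default == 1, 0.85, 0.40)
    enrolled = rng.random(n) < p_enroll

    effect = enrolled[opt_out_default == 1].mean() - enrolled[opt_out_default == 0].mean()
    print("Estimated effect of the opt-out default on enrollment rate:", round(effect, 3))

Because the default is randomly assigned in this sketch, the difference in rates estimates the default's effect; the defaults study by Johnson and Goldstein listed under Optional Resources examines comparisons of this kind.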
____________________________________________________________________________________
Required Resources
Note: To access this week's required library resources, please click on the link to the Course Readings List, found in the Course Materials section of your Syllabus.
Readings
Langbein, L. (2012). Public program evaluation: A statistical guide (2nd ed.). Armonk, NY: M.E. Sharpe.
o Chapter 4, "Randomized Field Experiments" (pp. 73–109)
o Chapter 5, "The Quasi Experiment" (pp. 110–142)
o Chapter 6, "The Nonexperimental Design: Variations on the Multiple Regression Theme" (pp. 143–208)
McDavid, J. C., Huse, I., & Hawthorn, L. R. L. (2013). Program evaluation and performance measurement: An introduction to practice (2nd ed.). Thousand Oaks, CA: Sage.
o Chapter 7, "Concepts and Issues in Economic Evaluation" (pp. 271–308)
Mills, C. (2013). Why nudges matter: A reply to Goodwin. Politics, 33(1), 28–36.
Retrieved from the Walden Library databases.
USAID. (2013). Impact evaluations. Retrieved from http://www.usaid.gov/sites/default/files/documents/1870/IE_Technical_Note_2013_0903_Final.pdf
United States Government Accountability Office (USGAO). (2009). Randomized experiments can provide the most credible evidence of effectiveness under certain conditions. In Program evaluation: A variety of rigorous methods can help identify effective interventions (pp. 20–26). Retrieved from http://www.gao.gov/assets/300/298907.pdf
Optional Resources
Green, D., & Winik, D. (2010). Using random judge assignments to estimate the effects of incarceration and probation on recidivism among drug offenders. Criminology, 48(2), 357–387.
Institute of Politics. (Producer). (2013). Nudging policy: Behavioral economics in the public square [Video file]. Retrieved from http://forum.iop.harvard.edu/content/%E2%80%9Cnudging%E2%80%9D-policy-behavioral-economics-public-square
Johnson, E., & Goldstein, D. (2003). Do defaults save lives? Retrieved from http://www.dangoldstein.com/papers/DefaultsScience.pdf
Khandker, S. R., Koolwal, G. B., & Samad, H. A. (2010). Handbook on impact evaluation: Quantitative methods and practices. Retrieved from https://openknowledge.worldbank.org/bitstream/handle/10986/2693/520990PUB0EPI1101Official0Use0Only1.pdf?sequence=1
McKinsey Quarterly. (Producer). (2011, June). Nudging the world toward smarter public policy: An interview with Richard Thaler [Audio podcast]. Retrieved from http://www.mckinsey.com/insights/public_sector/nudging_the_world_toward_smarter_public_policy_an_interview_with_richard_thaler
The World Bank Independent Evaluation Group. (2006). Impact evaluation: The experience of the independent evaluation group of the World Bank. Retrieved from http://ieg.worldbank.org/Data/reports/impact_evaluation.pdf