I am a Ph.D. student in the Department of Agricultural and Resource Economics at the University of Maryland. My research focuses on technology adoption among smallholder farmers and on agriculture-driven deforestation.
Prior to joining the University of Maryland, I was a Senior Research and Training Associate at J-PAL Global, where I worked on developing training courses such as Evaluating Social Programs and J-PAL's Research Staff Training course. Earlier, I worked as a research assistant on a randomized controlled trial designed to improve take-up of an agricultural technology in Niger. I hold an M.S. in Economics from Tufts University, where I completed a thesis on the Affordable Care Act's impact on the gig economy, and a B.A. in Economics from Wheaton College (MA).
Ph.D. in Agricultural and Resource Economics, 2026 (Expected)
University of Maryland
M.S. in Economics, 2018
Tufts University
B.A. in Economics, 2016
Wheaton College (MA)
Joint with Ariel Listo (UMD) and Nguyen Voung (UW-Madison)
Joint with Ariel Listo (UMD)
This resource gives an overview and non-technical introduction to randomized evaluations, highlighting work from a variety of contexts, including studies on youth unemployment in Chicago, a subsidized rice program in Indonesia, and a conditional cash transfer program in Mexico. It includes guidance on when randomized evaluations can be most useful, and also discusses when they might not be the right choice of evaluation method.
This resource presents a high-level overview of the steps of a randomized evaluation, while showcasing a selection of J-PAL's teaching and learning tools, created as part of its online and in-person capacity-building activities.
This resource covers best practices for programming a survey using computer-assisted personal interviewing (CAPI) software. It relies primarily on examples from SurveyCTO, which is widely used by J-PAL and Innovations for Poverty Action (IPA), but the guidance applies to all CAPI software.
High-frequency checks, back-checks, and spot-checks can be used to detect programming errors, surveyor errors, data fabrication, poorly understood questions, and other issues. The results of these checks can also be useful in improving your survey, identifying enumerator effects, and assessing the reliability of your outcome measures. This resource describes use cases and how to implement each type of check, as well as special considerations relating to administrative data.
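To make the idea concrete, here is a minimal sketch of what a high-frequency check might look like in Python with pandas. It is not taken from the resource itself; the column names (`hhid`, `enumerator`, `age`, `consent`) and the thresholds are hypothetical stand-ins for a real survey's variables.

```python
# A minimal, hypothetical high-frequency check sketch using pandas.
import pandas as pd

def run_hfcs(df: pd.DataFrame) -> None:
    # 1. Duplicate IDs often indicate programming errors or re-submissions.
    dupes = df[df.duplicated("hhid", keep=False)]
    if not dupes.empty:
        print("Duplicate household IDs:")
        print(dupes[["hhid", "enumerator"]])

    # 2. Out-of-range values can flag typos or poorly understood questions.
    bad_age = df[(df["age"] < 0) | (df["age"] > 110)]
    if not bad_age.empty:
        print(f"{len(bad_age)} observation(s) with implausible ages")

    # 3. An enumerator whose consent rate diverges sharply from the sample
    #    average may signal enumerator effects or fabrication worth a
    #    back-check. The 15-point threshold is an arbitrary illustration.
    consent_rate = df.groupby("enumerator")["consent"].mean()
    overall = df["consent"].mean()
    outliers = consent_rate[(consent_rate - overall).abs() > 0.15]
    for enum, rate in outliers.items():
        print(f"Enumerator {enum}: consent rate {rate:.0%} vs {overall:.0%} overall")

if __name__ == "__main__":
    # Hypothetical example data standing in for a daily survey export.
    data = pd.DataFrame({
        "hhid": [101, 102, 102, 104, 105, 106],
        "enumerator": ["A", "A", "B", "B", "C", "C"],
        "age": [34, 29, 29, 150, 41, 22],
        "consent": [1, 1, 1, 1, 0, 0],
    })
    run_hfcs(data)
```

In practice, checks like these would run automatically on each day's incoming data so that problems are caught while surveyors are still in the field.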
Researchers should monitor the implementation of a program to preserve its integrity and to collect additional information that can inform the generalizability of its results. A variety of methods are available, including administrative data, site visits, and focus group discussions. This resource provides an overview of monitoring methods, how to select indicators to monitor, and how to choose monitors.
As with in-person surveys, remote survey work involves considerations at every stage of the project lifecycle. This resource summarizes key points regarding remote surveys and, where applicable, lists J-PAL’s related public resources, in which more detailed guidance can be found.
TA: Fall 2021
TA: Spring 2022
TA: Spring 2022