Proponents of the use of randomized controlled trials (RCTs) in impact evaluation and development research often point out the close link between these trials and their clinical counterparts in the world of medical research. Yet clinical trials often differ from development RCTs in a number of ways, ranging from their ability to ensure subjects actually take their medicine to their emphasis on blind or double-blind protocols, where subjects are unaware whether or not they have received the real treatment. By contrast, development RCTs exist in a far messier world in which, for example, farmers cannot be forced to use the fertilizer you just randomly allocated to them, and preventing your study subjects from knowing they have received a bag of fertilizer is nigh impossible.
This has not stopped a group of researchers from trying to close the gap by implementing a double-blind protocol as part of a standard development impact evaluation. In the abstract of the resulting paper, Erwin Bulte and coauthors describe how introducing a blinding protocol seemed to eliminate the effectiveness of the intervention they were studying:
"Randomized controlled trials (RCTs) in the social sciences are typically not double-blind, so participants know they are "treated" and will adjust their behavior accordingly. Such effort responses complicate the assessment of impact. To gauge the potential magnitude of effort responses we implement a conventional RCT and double-blind trial in rural Tanzania, and randomly allocate modern and traditional cowpea seed varieties to a sample of farmers. Effort responses can be quantitatively important--for our case they explain the entire "treatment effect on the treated" as measured in a conventional economic RCT. Specifically, harvests are the same for people who know they ...