Ever since the Bush administration took office in 2001, U.S. federal education agencies have been moving to impose highly restrictive guidelines on federally funded educational research. The U.S. Department of Education’s Institute of Education Sciences backs research that makes claims of causality based on random-assignment experiments (see also the affiliated What Works Clearinghouse).
This has become increasingly frustrating to many educational scholars in the U.S., including those with expertise in educational evaluation, who recognize that random-assignment experiments are seldom feasible in education, and that even when experiments are feasible, they are usually best preceded by qualitative or mixed-methods research that illuminates the contextual features shaping a particular educational innovation. For example, a couple of us at UCI are beginning to investigate Automated Writing Evaluation software. If we started off with an experiment, randomly assigning a large number of students to use it and others not to, without first understanding how it is used, what features it has, how different types of students react to it, and how different types of teachers make use of it, the results would not be very robust, because we could not even design appropriate treatments or controls.
Madhabi Chatterji, an educational evaluation expert at Teachers College, Columbia University, has written an outstanding article critiquing this narrow approach to research and arguing instead for what she calls extended-term mixed-method evaluation design. Highly recommended.