Mel Mark is a Professor of Psychology. He has edited a dozen books and authored more than 130 journal articles and book chapters. For much of his career, Dr. Mark has applied his background in social psychology and his interest in research methods to the theory and practice of program evaluation. He has served as President of the American Evaluation Association and as Editor of the American Journal of Evaluation. Dr. Mark’s awards include the American Evaluation Association’s Lazarsfeld Award for Contributions to Evaluation Theory.
One of the undercurrents that has propelled social psychology is a desire on the part of its practitioners to make a positive difference in the world. Program evaluation shares that motivation, largely through efforts to apply systematic research methods to assist in the development, dissemination, and retention of more effective social programs. Dr. Mark is interested in the intersection of social psychology and evaluation. Social psychological theory and research can contribute to the design of better programs. Social psychologists can also help resolve problems that arise in program evaluation, such as how to facilitate discussion among stakeholders who differ in power, or how to increase the likelihood that an evaluation’s results will be used. The real-world testbed of evaluation also offers opportunities to contribute back to social psychology, for example, by assessing the generalizability of findings from lab experiments.
Dr. Mark is interested in working with students who would like to apply their social psychological research interests to the practice of program evaluation. He also works with students to evaluate interventions in various domains of application.
Selected Recent Publications:
Mark, M. M. (in press). Mixing methods in quasi-experiments and clinical trials. In S. Hesse-Biber & B. Johnson (Eds.), The Oxford Handbook of Mixed and Multimethod Research. Oxford University Press.
Campbell, B. C. & Mark, M. M. (2015). How analogue research can advance descriptive evaluation theory: Understanding (and improving) stakeholder dialogue. American Journal of Evaluation, 36, 204-220.
Mark, M. M. (2014). Credible and actionable evidence: A framework, overview, and suggestions for future practice and research. In Donaldson, S., Christie, T. C., & Mark, M. M. (Eds.), Credible and Actionable Evidence: The Foundation for Rigorous and Influential Evaluations (pp. 275-301). Thousand Oaks, CA: Sage.
Donaldson, S., Christie, T. C., & Mark, M. M. (Eds.) (2014). Credible and Actionable Evidence: The Foundation for Rigorous and Influential Evaluations (2nd ed.). Thousand Oaks, CA: Sage.
Mark, M. M., Donaldson, S., & Campbell, B. (Eds.) (2011). Social Psychology and Evaluation. New York: Guilford.
Mark, M. M., Donaldson, S., & Campbell, B. (2011). Social psychology and evaluation: The past, the present, and possible futures. In Mark, M. M., Donaldson, S., & Campbell, B. (Eds.), Social Psychology and Evaluation (pp. 4-27). New York: Guilford.
Chen, H-t., Donaldson, S., & Mark, M. M. (Eds.) (2011). Validity in Outcome Evaluation. San Francisco: Jossey-Bass.
Mark, M. M. (2011). New (and old) directions for validity concerning generalizability. In Chen, H-t., Donaldson, S., & Mark, M. M. (Eds.), Validity in Outcome Evaluation (pp. 31-42). San Francisco: Jossey-Bass.
Mark, M. M., & Lenz-Watson, A. L. (2011). Ethics and the conduct of randomized experiments and quasi-experiments in field settings. In A. T. Panter & S. K. Sterba (Eds.), Handbook of Ethics in Quantitative Methodology (pp. 185-209). New York: Routledge.
Sinclair, R. C., Moore, S. E., Mark, M. M., Soldat, A. S., & Lavis, C. A. (2010). Incidental moods, source likeability, and persuasion: Liking motivates message elaboration in happy people. Cognition and Emotion, 24(6), 940-961.
Campbell, B. C., & Mark, M. M. (2006). Toward more effective stakeholder dialogue: Applying theories of negotiation to policy and program evaluation. Journal of Applied Social Psychology, 2834-2863.