
Complexity and Evaluative Thinking

Jan 21, 2019 | Blog

By Dione Hills, CECAN Fellow (evaluating the ‘capacity building’ side of CECAN’s work) 


There has recently been an upsurge of interest in what constitutes ‘Evaluative Thinking’ (ET). One frequently quoted definition (Buckley et al., 2015) describes ET as “critical thinking applied to contexts of evaluation”[i]. This blog reflects on ways in which ‘evaluative thinking’ and the application of an understanding of complexity to evaluation can be mutually supportive. It also considers ways in which a complexity perspective may need to be enhanced in order to promote true ‘evaluative thinking’.

A recent issue of New Directions for Evaluation (the official journal of the American Evaluation Association)[ii] brought together a number of papers on this topic. An interesting theme running through these papers (of particular relevance to those of us involved in CECAN activities) is that evaluative thinking can help evaluators (and their clients) think more clearly about complexity, uncertainty and ambiguity.

Vo et al., in their review of the literature[iii], list some of the key features of evaluative thinking:

  • As one facet of a society and culture, ET consists of interactions between multiple value systems.
  • As a professional value in evaluation, ET entails the commitment to those virtues and procedures that support the representation of truth in context (my italics).
  • As an evaluator competency, ET reflects the ability to creatively navigate uncertainty, ambiguity, and complexity (my italics).
  • As one aspect of evaluation practice, ET involves the investigation of one’s own as well as others’ positionality, assumptions, motivations, and biases.

The topic of evaluative thinking comes up particularly in discussions about what constitutes the core elements of good evaluation practice, and how to build ‘evaluation capacity’ in individuals and organisations. Several ‘capability’ and ‘competency’ frameworks now set out the knowledge and skills required by those undertaking or commissioning evaluation[iv]. Most of these indicate that quality evaluation goes beyond the application of technical competence (i.e. the skilled application of research methods). Evaluators (and commissioners) also need knowledge of evaluation theory, values and ethics, and the managerial and interpersonal skills that will enable them to keep evaluations on track in situations that may be highly dynamic or politically charged, and to build productive relationships with clients and other key stakeholders. Some capability frameworks also include the skills to promote an evaluative culture, which might be interpreted as supporting evaluative thinking throughout an organisation so that good evaluation is undertaken and effective use is made of findings.

The recently published book Evaluation Failures[v] provides a number of fascinating examples of how technically competent evaluation strategies can go disastrously wrong when the context is rapidly changing, when relationships break down, or when there are major misunderstandings about what the evaluation is trying to achieve.

Quinn Patton, in his review of the history of how evaluative thinking has developed[vi], points out why good evaluation has to go beyond the collection and analysis of data:

‘It is not enough to have trustworthy and accurate information (the informed part of the informed citizenry). People must also know how to use the information, that is, to weigh evidence, consider the inevitable contradictions and inconsistencies, articulate values, interpret findings, deal with complexity, and examine assumptions, to note but a few of the things meant by “thinking evaluatively”.’

Much of what has been said about evaluative thinking will sound familiar to those of us who have been considering how to apply complexity theory to evaluation practice. Complexity in itself is not new to the practice of evaluation: the evaluation literature is full of discussions about how complex interventions can best be evaluated. However, until recently, only limited reference was made to complexity theory. Applying understandings from the study of complex adaptive systems to the practice of evaluation can help draw attention to just those aspects of evaluation that are being highlighted in the literature on evaluative thinking: context dependency, multiple perspectives, uncertainty of outcomes and the often ambiguous nature of data. Some of the research methods that CECAN has been exploring (e.g. systems mapping or agent-based modelling) also help bring greater rigour to the study of these dimensions.

There are, however, two important elements of evaluative thinking which are not necessarily supported by the adoption of a complexity-informed lens. One of these is the ‘valuing’ aspect of evaluation, which Vo et al. describe as ‘the ascription of merit, worth, significance, importance’. The other is the importance of adopting a ‘reflective’ stance.

Calling this “an evaluative cast of mind” rather than evaluative thinking, Carol Weiss describes how evaluation can help:

‘Program people reflect on their practice, think critically, and ask questions about why the program operates as it does. They learn something of the evaluative cast of mind—the sceptical questioning point of view, the perspective of the reflective practitioner.’[vii]

So, another way of asking how a greater understanding of complexity contributes to evaluative thinking is to ask whether a complexity mind-set can help to answer not only the question ‘What?’, but also the questions ‘So what?’ and ‘What now?’ (using the questions from reflective practice set out by Rolfe et al.[viii]). Combining complexity-appropriate methods with a reflective stance, and paying attention to the ‘valuing’ aspect of evaluation, may be an important key to helping evaluation practitioners and commissioners appreciate how a complexity perspective can improve both the quality of their evaluations and the usability of evaluation findings.

[i] Buckley, J., Archibald, T., Hargraves, M., & Trochim, W. M. (2015). Defining and teaching evaluative thinking: Insights from research on critical thinking. American Journal of Evaluation, 36(3), 375–388.

[ii] Evaluative Thinking. New Directions for Evaluation, Volume 2018, Issue 158, Summer 2018. Wiley.

[iii] Vo, A. T., Schreiber, J. S., & Martin, A. (2018). Toward a conceptual understanding of evaluative thinking. In A. T. Vo & T. Archibald (Eds.), Evaluative Thinking. New Directions for Evaluation, 158, 29–47.

[iv] See for example: the UKES capabilities framework https://www.evaluation.org.uk/index.php/news-resources/ukes-publications/77-ukes-capabilities-framework, the UNEG evaluation competency framework http://www.unevaluation.org/document/detail/1915 and the Canadian Evaluation Society competencies for Canadian evaluation practice https://evaluationcanada.ca/txt/2_competencies_cdn_evaluation_practice.pdf

[v] Hutchinson, K. (2019). Evaluation Failures: 22 tales of mistakes made and lessons learned. Los Angeles: Sage.

[vi] Patton, M. Q. (2018). A historical perspective on the evolution of evaluative thinking. In A. T. Vo & T. Archibald (Eds.), Evaluative Thinking. New Directions for Evaluation, 158, 11–28.

[vii] Weiss, C. H. (1998). Have we learned anything new about the use of evaluation? American Journal of Evaluation, 19, 21–33.

[viii] Rolfe, G., Freshwater, D., & Jasper, M. (2001). Critical reflection in nursing and the helping professions: A user’s guide. Basingstoke: Palgrave Macmillan.
