The theme of the 2017 Annual Evaluation Conference is "Exploring the current uses of evaluation". Evaluation is a common term in the English language, means different things to different people, and is used in many different ways. We need to ensure that institutions and their staff who wish to "evaluate" their policies, programmes, projects and institutions know what that really means and what they will gain from the exercise. How, for example, does evaluation differ from an audit, from monitoring, or from a review? Is it the same as research? Can it be used to measure impact, or success? Is it conducted before, during or after taking action? Is it an external or internal exercise? Who is involved, and how are its conclusions communicated?
The emphasis placed on different uses of evaluation changes over time. Past emphases have included accountability, establishing impact, formative and developmental evaluation to improve implementation, and generating evidence about what works.
The 2017 conference will explore these issues and, in particular, how evaluation can become more useful to its commissioners, to the subjects of the evaluation, and to society more widely. To be effective, evaluation results and evaluative evidence need to be used, and used correctly. The results and evidence themselves therefore need to be presented clearly and logically, in a way their audience can understand. They should not come as a surprise: evaluators should communicate with both commissioners and participants throughout the process to find out what is required (i.e. not arrive armed with a set approach or suite of methods looking for an application), who it is required for, and why. Taking time at the outset to talk to each other will not only lead to better results but may also save time and money in the process.
What should be the purpose of the findings, recommendations or proposed actions listed in the reports produced by evaluators? Should evaluation reports contain such statements in the first place? And if so, what is the difference between a finding and a recommendation? How can readers distinguish good ones from the not-so-good, or even the downright bad or misleading?
The conference will consider the design of outputs, the development of use strategies, and ways of connecting with potential users' needs, so that participants will be better able to create and/or use evaluation resources for change, development and accountability.
Call for Abstracts:
The call for abstracts remains open - details can be found on the conference website.