With these preliminary considerations in mind, it would be helpful to look carefully at the following and make some tentative choices regarding the role or roles that may seem most appropriate in your evaluation of the project, programme, innovation or other initiative (for simplicity’s sake we will encompass all of these from now on in the term ‘project’). The evaluator’s role is:
- to be as objective as possible (interviewing, questioning, reporting on findings, not being too close to the participants) and to report to the person or body for whom the evaluation is conducted;
- to collect data rigorously and scientifically;
- to feed back impressions to participants (so that they can take note of your findings and improve their activities);
- to understand and describe the project and make judgments;
- to be involved with the project from the outset, working with the project participants to plan their programme and the evaluation together;
- to define the nature and methodology of the evaluation professionally, to begin work when the project is operational and monitor it at agreed intervals and at the end;
- to monitor the ‘process’, that is, the implementation of the initial terms of reference or objectives of the project;
- to focus on the ‘life’ of the project in its relevant wider contexts;
- to investigate the ‘outcomes’, successful or unsuccessful, of the project;
- to judge whether the project has been (or is likely to be) value for money;
- to conduct an external evaluation and nothing more;
- to help participants to conduct an internal evaluation, in addition to the formal external one, or as a substitute for it;
- or…
It will be clear from the choices available that evaluation is far from being a simple or standard activity. The choices are neither right nor wrong, but may be more appropriate to particular programmes, conditions and requirements, and to the self-image of the evaluator. Evaluators and evaluation theorists have extensively explored the alternatives, and these have been the focus of various kinds of controversy. To compare your own preferences or issues with some of those in the literature in terms of types of evaluation, click here.
We cannot here consider all of these alternative approaches, but it is important to emphasise two that are frequently met in the evaluation literature.
The purposes of evaluation can be encapsulated in the terms ‘process’ and ‘impact’: the former highlights what is and has been happening, the latter attempts to indicate what has happened as a result. Both encounter difficulties.
- Process evaluation is targeted on implementation: how the programme’s intentions are being interpreted, the experience of conducting the activity, and the continuing or changing perceptions of the various constituencies involved. The kinds of questions such evaluation raises may include conflicts in these perceptions for reasons not necessarily connected with the activity itself, confusion about the original terms of reference, or doubts about their wisdom. The larger the programme, the more difficult the questions of sampling (how many people to interview and how to select them, what activities to attend…) and of when it is reasonable to monitor what is taking place. For an external evaluator there may be problems of time allocation and frequency of involvement, depending on the nature and extent of the programme (multi-site, national…), though even with a small, single-institution activity, initial decisions about the extent of the external evaluator’s involvement may cause problems. Often called ‘implementation evaluation’, this approach frequently encounters difficulties in collecting reliable information on how successfully the implementation is taking place.
- Impact (or ‘outcomes’, or sometimes ‘product’) evaluation raises some of these issues, but also different ones. Would the ‘outcomes’ of the programme have happened without the intervention, and is there a credible causal link between the activity and the impact? Answers to the question of what impact has taken place may be positive, negative or mixed; that is, an evaluation may find non-success, evidence of non-impact, or complexities that have arisen from other factors, for example the results of other interventions, processes and contexts. Impact may cover time scales that vary considerably from programme to programme (e.g. a limited research/development programme in a school or university, or a World Bank project over a nation or region). Impact may be studied not only at the conclusion of an activity (or its funding) or after an interval of time, but also during the activity, especially if it is designed to provide regular feedback or if it is a longitudinal study. Evaluations of the American Head Start and similar programmes, for example, involved the evaluation of learning gains and other measures in a variety of ways at intervals over very long periods. It is common for evaluators of limited-time projects to feel (and suggest) that the real impact evaluation could only take place several years after the end of the programme. Depending on the project, impact evaluation may have policy or decision-making implications:
An impact evaluation assesses the changes in individuals’ well-being that can be attributed to a particular program or policy. It is aimed at providing feedback and helping improve the effectiveness of programs and policies. Impact evaluations are decision-making tools for policymakers and make it possible for programs to be accountable to the public. (World Bank, website)
Such a role for the evaluator raises questions, discussed below, about the kind of contract agreed at the beginning of the evaluation and the possible influence of the audiences for the reporting procedure at the end. There are issues about the tentative or reliable nature of impact data, which may differ considerably by type of project. Since a funding agency may require impact data and an evaluator may find such data unattainable, there is room for misunderstanding and conflict.
Formative and summative evaluation may be, but are not necessarily, related to the distinctions above.
Hopkins (as we saw above in terms of types of evaluation) made the simple suggestion that formative evaluation was when the cook tasted the soup, and summative when the guest tasted it. He also suggested that the difference was ‘not so much when as why. What is the information for, for further preparation and correction or for savouring and consumption? Both lead to decision making, but toward different decisions’ (Hopkins 1989, p. 16). This latter distinction establishes the difference between these concepts and those relating to process and impact. Formative evaluation is designed to help the project, to confirm its directions, and to influence or help to change them. It is more than monitoring or scrutinising; it serves a positive feedback function (which process evaluation does not necessarily do). Summative evaluation is not just something that happens at the end of the project: it summarises the whole process, describes its destination, and though it may offer insights into impact, it is not concerned solely with impact.
Summative evaluation has often been associated with the identification of preset objectives and judgments as to their achievement (again, not necessarily in terms of impact). The assumption in this case is that, unlike in formative modes, the evaluation is not (and should not be) involved in changing the project in midstream, since otherwise the relationship between objectives and their achievement cannot be evaluated:
…every new curriculum, research project, or evaluation program starts with the specifications to be met in terms of content and objectives and then develops instruments, sampling procedures, a research design, and data analysis in terms of these specifications. (Bloom 1978, p. 69)
Starting specifications that are expected or required to be met therefore dictate the nature of the summative evaluation. The instruments or sampling procedures cannot produce ‘pure’ data if the process is corrupted by the intervention of evaluator feedback or other alterations to the original specifications. It is possible to conceive of evaluation as both formative and summative, but in this case ‘summative’ comes closer to meaning ‘final’, and cannot present data and make judgments as purely as is suggested in Bloom’s definition.
Other approaches to evaluation emerged in the last quarter of the 20th century, and some will be mentioned further below in relation to methodology. These have included ‘illuminative’, ‘democratic’ (as opposed to ‘bureaucratic’), ‘participative’ and ‘responsive’ evaluation. These all have implications for the role of the evaluator in relation to the project: for example, sharing with the project participants, responding to the activity rather than to specifications and intentions, identifying and reporting differences of perspective and values, and emphasising the importance of understanding or recording competing perceptions. Much of this work relates to discussion in other RESINED components, notably action research and case studies.
You could at this point consult the paper by Parlett and Hamilton on ‘Evaluation as illumination’ in Hamilton et al., Beyond the Numbers Game, and other contributions to this influential book. See also the chapter on ‘Program evaluation, particularly responsive evaluation’ by Robert Stake, in Dockrell and Hamilton, Rethinking Educational Research, and Helen Simons, Getting to Know Schools in a Democracy: the politics and process of evaluation.
Activity: Read the text and create a graphic organizer.