Abstract
Education departments face the challenge of ensuring that materials
presented to them by various Learning and Teaching Support Materials (LTSM)
developers are evaluated for possible approval, selection and inclusion in
catalogues. This is in line with the principles of the United Nations Commission
on Human Rights as stipulated in article 26 of the Universal Declaration of
Human Rights (1948).
Within the South African milieu, the process of evaluation is devolved to
regional, national, provincial, district and school levels. It is standard practice for
education departments to use an evaluation instrument with specific criteria, to train
educators as evaluators, and to assess the effectiveness of the evaluation process.
I have been motivated to undertake this research by my interest in
investigating whether the instrument used by the Gauteng Department of Education
(GDE) to evaluate LTSM builds capacity among evaluators of materials and yields
reliable, objective results when they evaluate the LTSM. This study is therefore
concerned with evaluating the effectiveness, efficiency and impact of the
evaluation instrument itself. These three concepts, according to Boulmetis and
Dutwin (2000: 5), are collectively termed levels of programme evaluation.
The unit of analysis of the study is the Foundation Phase evaluators’
experiences. The topic is examined from the theoretical perspectives of
evaluation theory and metatheory (Scriven, 2003; Stufflebeam, 2003; Stake,
2003), educational evaluation in Africa (Omolewa and Kellaghan, 2003) and
social constructivism, since the educators are encouraged to work in a collaborative,
participative fashion when using the instrument.
The argument of this inquiry is that discourse, social involvement and
collaboration are essential in the evaluators’ engagement with the instrument,
enabling them to construct their knowledge by drawing on both existing and new
knowledge. As primary users, educators acting as evaluators are engaged in, and
form part of, the context, input, process and product evaluation in order to attain
their values (Stufflebeam, 2003).
Evaluators are stakeholders and therefore, in this study, fulfil what Stake (2003)
refers to as stakeholder evaluation or connoisseurship evaluation. Evaluators
also respond to key issues and problems that they experience at the evaluation
site in what Stake (2003) refers to as responsive evaluation.
Data were collected through a questionnaire-based survey, observations and
interviews. I engage with the works of authors on the evaluation of systems,
materials, programmes and projects at different levels; such evaluation implies
something further than mere voting and requires the participants to deliberate
(House and Howe, 2003). Evaluators of LTSM are given knowledge on evaluation
in a social, interactive manner that promotes discourse. Lincoln (2003) refers to
such participative evaluators as a community of knowers.
To this study I also contribute my experience as an education specialist, as well
as my early experience as a teacher. Although I worked as a high school teacher,
I was later appointed to the GDE District and Head Office as an education
specialist. This exposed me to Foundation Phase educators and practitioners,
and I made use of that social and dialogic relationship for the benefit of the study.
The study concludes with findings illustrating that educators are willing to
perform the evaluation function as part of their work; that the instrument
encourages collaborative evaluation; that the instrument is a core element of the
evaluation process; and that evaluators need some assistance when evaluating LTSM.
Based on the findings, I make recommendations for, among other things, the
GDE to be more conscious of the significance of this process and to formalize
relevant LTSM structures.
Mr. W.A. Janse van Rensburg