Evaluating the impact of CAL on student learning of Anatomy - the search for a paradigm



Department of Anatomy

University of the Western Cape

Private Bag X17

Bellville, 7535




This paper argues that there is a need to evaluate the impact of computer-assisted learning on the quality of student learning. The focus should be on how students actually respond to the newer educational technologies rather than on the technologies' potential. Evaluation is a complex process and therefore requires an eclectic approach in the selection of research tools. Furthermore, collaboration with other educators, evaluators and instructional designers who share this interest will facilitate progress.

Definition of evaluation

Evaluation is the systematic acquisition and assessment of information about some object in order to provide useful feedback to a variety of audiences (Trochim, 1999). In this paper, the object of evaluation is computer-assisted learning (CAL) in Human Anatomy at the University of the Western Cape. The main goals of the process are twofold: firstly, to assess the effects of computer-mediated interventions on the quality of student learning; and, secondly, to generate empirically driven feedback that may influence decision-making or policy formulation with regard to the use of computers at our institution.

Importance of evaluating

It seems obvious that, as institutions embark on increased efforts involving the electronic delivery of education, attention must be paid to measuring the outcomes of such interventions (http://www.edgorg.com/assessment.htm). Reeves (1997) confirms the need for evaluation and discusses four reasons for the lack of useful evaluation of computer-based education (CBE). Firstly, educators tend to accept uncritically the advertised claims of the producers of the technology. Secondly, the evaluation of CBE is often reduced to a numbers game, with the focus on those parameters that can be easily measured. Thirdly, he suggests that previous evaluations lacked utility and impact. Lastly, he states that an inappropriate reliance on traditional empirical evaluation methods frequently led to disappointing results.

Complexity of evaluation

To address the complexity of evaluation, Reeves (1997) proposes fourteen pedagogical criteria for evaluating various forms of CBE, so as to obtain more valid and useful evaluations. Each is based on some aspect of learning theory; they include epistemology, pedagogical philosophy, underlying psychology, goal orientation, experiential value, teacher's role, program flexibility, value of errors, motivation, accommodation of individual differences, learner control, user control, cooperative learning and cultural sensitivity.

Quality learning

Laurillard (1993) argues that a mix of teaching and learning methods will always be the most efficient way to support student learning, because only then is it possible to embrace all the activities of discussion, interaction, adaptation, and reflection that are essential for quality learning. She asserts that technology should be used to meet the genuine academic needs of students. Therefore, the potential of the technology to provide students with new learning activities should be explored.


Nightingale and O'Neill (1994) recommend action research as an approach to enhancing the quality of learning in higher education. It is adopted here as a broad approach because of its focus on improving practice rather than merely proving theories. It is seen as a self-reflective spiral of planning, acting, observing and reflecting, with further iterations of the same cycle.

Evaluation Strategies

There are three broad, overarching perspectives on evaluation: scientific-experimental models, qualitative/anthropological models, and participant-oriented models. Reeves distinguishes three research paradigms that seem to correspond to these: the Analytic-Empirical-Positivist-Quantitative paradigm; the Constructivist-Hermeneutic-Interpretivist-Qualitative paradigm; and the Critical Theory-Neomarxist-Postmodern-Praxis paradigm (http://www.educationau.edu.au/archives/cp/REFS/reeves_paradigms.htm).

Scientific-experimental models

According to Entwistle (1996), pure scientific experimentation is often inappropriate for ethical, practical and technical reasons. The virtues of this approach are its rigor, its apparent objectivity and its strict control of all relevant variables. But, as Reeves (1997) points out, reliance on traditional empirical evaluation methods frequently leads to disappointing results. There seems to be a tension between pure scientific research and evaluation. Scientific research aims to formulate or test hypotheses and theories on the basis of reliable empirical data gathered, as far as possible, under controlled experimental conditions. Ideally, evaluation should optimize rigor and objectivity. But educational evaluation deals with complex phenomena in which a multitude of variables cannot be strictly controlled. In addition, it occurs in a managerial and political context where it seeks to influence decision-making and, therefore, sacrifices the stance of strict objectivity.



In this study a quasi-experimental approach will be adopted where appropriate. For example, many resources in Anatomy are available in both printed and electronic form; students may therefore be assigned to two groups and given a common task using the two media, after which their responses to each medium can be assessed. Several iterations of this procedure should produce reliable data.

Participant-oriented models

Action research favors a participant-oriented approach. This model is preferred on the following grounds. Firstly, it allows for the complexity of the learning and evaluation processes. Secondly, it allows one to engage in dialogue with students as partners in solving the problems of teaching and learning. Thirdly, it benefits students: greater collaboration between lecturer and learner fosters metacognitive growth. Fourthly, the ethical concerns raised by the scientific approach are overcome, provided that participation is entirely voluntary and fully retractable and that confidentiality is maintained. This model is not without disadvantages and risks: it has a large element of subjectivity, conflicts may arise, and participants may manipulate the situation or withdraw at crucial times.


The potential of the newer educational technologies is enormous and should be harnessed. Evaluating the implementation and impact of these technologies is complex. Thus, a variety of strategies ranging from participant-oriented to the scientific-experimental should be employed appropriately. In addition, there should be continuous reflection on and analysis of the feedback received from students. Furthermore, ongoing communication with other educators interested in these issues will expedite progress.


I thank the University of the Western Cape for special leave and for financial support.


Entwistle, N. (1996). Improving University Teaching through Research on Student Learning. Edinburgh: Centre for Research on Learning and Instruction, University of Edinburgh.

Laurillard, D. (1993). Rethinking university teaching: A framework for the effective use of educational technology. London: Routledge.

Nightingale, P. and O'Neill, M. (1994). Achieving Quality Learning in Higher Education. Kogan Page.

Reeves, T.C. (1997). Evaluating What Really Matters in Computer-Based Education.

At http://www.educationau.edu.au/archives/cp/reeves.htm

Reeves, T.C. (1998). Rigorous and Socially Responsible Interactive Learning Research.

At http://www.aace.org/pubs/jilr/intro.html

Trochim, W.M. (1999). The Research Methods Knowledge Base.

At http://trochim.human.cornell.edu/kb/index.htm