(Claire Gilligan is an Education Officer (Special Duties) in Research Services, Department of Education.)
Case study is a recent product of the evolution of curriculum evaluation as a field of enquiry. Its place in the field as a reputable methodology is now established, but its arrival there owes much to wider epistemological and ideological influences which have characterized the historical development of curriculum evaluation. An examination of the history of curriculum evaluation reveals the nature of earlier methodologies and how many of these were inappropriate to curriculum evaluation needs. The case study method evolved in response to these needs, but it is in the light of evaluation history that its central features can be described and understood.
Case study falls within the 'transaction' or 'responsive' group of evaluation models which use naturalistic methods including participants' observations and interviews to provide an understanding of the nature of the case under evaluation: "Defined as the study of a single case or bounded system, it observes naturalistically and interprets higher order interrelations within the observed data. Results are generalizable in that the information given allows readers to decide whether the case is similar to theirs" (Stake, 1985, p277).
Features such as naturalistic observation, interactive methods and a reporting style aimed at eliciting naturalistic generalizations also clearly mark this methodology as qualitative and subjective. The emergence of the case study approach reflects the inappropriateness of the 'objective' quantitative methods traditionally used in educational research for meeting the needs of educational evaluation. However, historical analyses such as those provided by Hamilton (1977), House (1978) and McTaggart (1983) reveal that whilst quantitative and qualitative evaluation methodologies have many differences, they share a common origin in the subjectivist ethics of liberalism. These writers reveal the profound influence liberalism had on the development of the major evaluation models and on their underlying epistemological, ethical and ideological nature.
The 'scientific approach', empiricist methods and the predominance of the individual in society were key principles of liberalism as articulated by J.S. Mill. These principles had a remarkable impact on emerging methods of enquiry in the 19th century (Hamilton; House). Mill's assumption that the natural world was uniform allowed him to develop an inductive logic from which, with the use of empirical methods, he could justify scientific laws. Since he believed the natural world was uniform, he held that empirical methods could equally be applied to social phenomena to induce fundamental laws in social science. The use of experimentation with large sample sizes, and of correlation techniques to sort the data and induce generalizable 'laws' about the subject of study, found its way into emerging fields of social science such as psychology and education.
Hamilton and McTaggart trace the emergence of curriculum evaluation to the work of Bobbitt and Tyler, which centred on the specification of curriculum objectives and led to the evaluation of educational outcomes using 'scientific' research methods. John Dewey's pragmatist philosophy also strongly influenced these early American evaluators who met the push for educational efficiency and quality control through the use of scientific measurement. The influence of Thorndike in introducing testing, scales and other quantitative techniques also strongly promoted the use of scientific enquiry in educational evaluation (House, p4).
Ralph Tyler's work involved the design of curricula according to behavioural objectives and the testing of student achievement against objectives. The 'Tylerian' model was adopted as a means of measuring student outcomes and judging the quality of curricula and became known as the behavioural objectives model. The Tylerian evaluation model was used in the US during the 1930's in the Eight-Year Study; this study used objectives for the design and evaluation of curricula. Comparative experimentation was used to deduce the relative merits of progressive and traditional courses (Hamilton, pp7-8). Such methods, however, were found to have weaknesses when applied to education. McTaggart provides illustrations of how comparative experimental models used in studies of curriculum in the 1960's, such as the Head Start evaluation and the Coleman Report, did not achieve "the required levels of randomization and control" (McTaggart, 1983, p12).
The field of curriculum evaluation began to expand by the 1960's, when it was realised that the Tylerian model inhibited planning and evaluation and failed to provide understanding of the quality of education (McTaggart, p4; Kemmis & Stake, 1988, p27). The work of Lee Cronbach broadened notions of evaluation. His approach, which aimed at curriculum improvement, introduced notions such as the need to understand the contextual features surrounding student learning (McTaggart, p5). Another significant impetus to the growth of evaluation came with the work of Scriven, who made a distinction between the goals and roles of evaluation. The roles of evaluation involved information collection as a methodological activity, which Scriven saw as assisting evaluators to make a judgment; judgment he viewed as the ultimate goal of evaluation (McTaggart, p56).
These developments in the evolution of evaluation theory, as well as the failure of methodologies belonging to the scientific paradigm to provide for particular information needs in education, were a major turning point in the development of evaluation as a methodology.
It was in the late 1960's, with the work of Cronbach, Scriven and Stake, that the fields of research and evaluation began to diverge; it became clearer that although these fields overlapped they had differing purposes. Kemmis and Stake (1988) describe research as enquiry into the nature of things, one which develops specific fields of knowledge, whereas evaluation is aimed at decision-making and at changing the way things are done and how people work in particular situations (Kemmis & Stake, p21). As Guba and Lincoln (1981) put it:
"The scientific paradigm of inquiry had served the 'hard' sciences well and had been embraced by early inquirers in the social sciences in the hope that it would function equally well for them. But it proved to have important shortcomings. The epistemological assumptions on which it was based (logical positivism and radical relativism), however appropriate to the hard sciences (a contention that is itself debatable), were not well met in the phenomenology of human behaviour. Research results proved to be inconclusive, difficult to aggregate, and virtually impossible to re late to happenings in the real world. A competing paradigm, dedicated to the study of behavioural phenomena in situ and using methods drawn from ethnography, anthropology, and sociological field studies, began to gain in popularity. This is the so-called naturalistic approach" (Guba and Lincoln, px-xi).The work of Stake was pivotal in providing new direction to the developing field of curriculum evaluation. His model of responsive evaluation, which employed case study as a technique, was one of the forerunners in naturalistic methodologies. It expanded Cronbach's notion of evaluation as the description of outcomes, and Scriven's concept of evaluation as consisting of expert judgment, to one which would portray all information relevant to program description and judgment (McTaggart, p7). As McTaggart points out, the experimental methods of the hypothetico-deductive approach which were aimed at results leading to predictive generalisations had been problematic in the area of educational research and evaluation. Stake countered the worth of such generalizations with his concept of naturalistic generalization: "I claim that case studies will often be the preferred method of research because they may be epistemologically in harmony with the reader's experience and thus to that person a natural basis for generalization" (Stake, 1978, p1).
Stake argued that whilst positivistic methodologies are valid tools for providing explanation, the case study approach has a greater capacity to increase understanding and to extend experience. It does so by building on experiential understanding and thus contributes to naturalistic generalization. Naturalistic generalisation, which derives from tacit knowledge, is "arrived at by recognizing the similarities of objects and issues in and out of context and by sensing the natural covariations of happenings ... They seldom take the form of predictions but lead regularly to expectation" (Stake, p2).
Case study involves 'an examination of an instance in action' (MacDonald & Walker, 1977). It entails the study of a bounded system (the case) selected as an instance drawn from a class, or alternatively, the study of the case as a bounded system of issues to be indicated, discovered or studied. The case is usually an entity of intrinsic interest and although similar to others, it has a distinctive internal unity or character. The milieu or context within which it is embedded is highly important to the overall study, as interactions between it and the focus of observation lead to richer description, and hence better understanding (Stake, 1985). Arising from case study are different kinds of generalisation: from the instance to a class, from the instance to a multiplicity of classes, or about the case itself (Adelman, Jenkins & Kemmis, 1976, p3).
Stake articulated a method for case study in his paper "The countenance of educational evaluation" (Stake, 1967). This paper defined and described educational evaluation as a qualitative process. It proposed a systematic method of gathering and processing data which countered the arguments of critics who suggested that qualitative methodologies were weak, poorly constructed and undisciplined. Stake contended that evaluation consisted of two essential features: description and judgment. He argued that 'to be fully understood, the educational program must be fully described and fully judged' (Stake, p2). In this paper he builds on limited notions of description (as espoused by Tyler and Cronbach) and of judgment (as espoused by Scriven) and offers data matrices which can assist in a systematic collection of both description and judgment data. He distinguishes between antecedent, transaction and outcome data and proposes ways of processing description and judgment data.
Stake also expressed some important notions which were to pave the way for the emergence of a major new model of curriculum evaluation. The notions that, "An evaluation of a school program should portray the merit and fault perceived by well-identified groups, systematically gathered and processed" (Stake, p3), and that "part of the responsibility of evaluators is to make known which standards are held by whom" (Stake, p8), were developed more completely in his concept of responsive evaluation:
"An educational evaluation is responsive evaluation if it orients more directly to program activities than to program intents; responds to audience requirements for information; and if the different value-perspectives present are referred to in reporting the success and failure of the program" (Stake, 1975, p145).The importance of reporting different value-perspectives became a characteristic intrinsic to the emerging class of pluralistic evaluation models. Responsive and transaction models of evaluation take a pluralistic value perspective, openly cognizant of differing value positions and possible conflict between both participants and audiences. McTaggart elaborates,
"... responsive evaluation was not merely an information service for practitioners. It had a wider mandate than that. It responded to multiple audiences and was embraced by a 'democratising spirit'. Responsive evaluation was characterized by the collection of judgments rather than the making of them. Stake was able to regard 'people as instruments': as a means of collecting data for the portrayal of the program-in-use. The perspectives of participants in social situations became important resources in evaluation" (McTaggart, 1983, p12).Hamilton describes other characteristics of pluralist evaluation models in contrast to those of 'classical' models:
"... they tend to be more extensive (not necessarily centred on numerical data), more naturalistic (based on program activity rather than program intent), and more adaptable (not constrained by experimental or preordinate designs).... they are likely to ... endorse empirical methods which incorporate ethnographic fieldwork, to develop feedback materials which are couched in the natural language of the recipients, and to shift the focus of formal judgment from the evaluator to the participants" (Hamilton, p6-17).Case study which features these characteristics implicitly is a technique employed widely by proponents of the more recent pluralist evaluation models. As the field of curriculum evaluation diversified to include models such as 'illuminative evaluation' and 'peer-research', the methodology of naturalistic enquiry attracted criticisms. McTaggart (p12) refers to the criticisms made of illuminative evaluation, which have been fairly typical of criticisms made of naturalistic methods. He notes that this approach was criticized for not articulating the role of theory in its application. Another criticism was its failure to articulate the difficulties of treating the school as 'culture'. Much of the criticism centres on the failure of naturalistic evaluation activity to contribute to theory construction. Such criticisms are misdirected as they have failed to appreciate that naturalistic techniques such as case study are practical methods of enquiry. They are not aimed at theory building,
"but wise and prudent practice - action which is appropriate in particular circumstances ... From this perspective, the 'theory' of evaluation will take the form of a developed and tested rationale for ways of working. Because the rationale for naturalistic enquiry will always allow for responsiveness to the particular in a given situation, it will never be 'technical' or 'theoretic' in form as Parsons (1976) seemed to hope it would be. We cannot expect such a rationale to define procedural steps nor constitute a unitary theory of evaluation. Criticisms of evaluation approaches based on the assumption that evaluation is meant to build theory are based on a fundamental misunderstanding about what evaluation is for" (McTaggart, 1983, p13).Good case studies will, as McTaggart also points out, articulate, test and defend the interpretive schemes and categories used in assembling data. Whe n Stake first offered a scheme for the systematic collection of description and judgment data (Stake, 1967) and later offered further structure to guide responsive evaluation (Stake, 1975), he was offering a defensible rationale to guide good naturalistic enquiry. His method "advocates technical steps (e.g. replication and nonverbal operationalization) to bolster the reliability of observation and opinion gathering without sacrificing relevance" (Stake, 1975, p145).
Stake's responsive evaluation begins with an identification of the issues in or around the case, which is used to provide a structure for the data gathering plan. Secondly, he postulates twelve prominent events which may occur in a responsive evaluation, and which revolve around observation and feedback. Data matrices can be used to organize evaluation observations. Stake does not rule out the use of testing or quantitative techniques as an aid in case study. However, data reliability is sought through the use of observation and numerous observers to replicate these observations. Stake held human observation to be one of the best instruments for data gathering. Holistic communication is achieved through reporting techniques which provide vicarious experience; such techniques are far more communicative about a case or program than a report consisting of scores and results obtained from quantitative techniques.
Case study, as a method employed in pluralistic evaluation, is susceptible to a number of problems which leave it open to criticism. The close involvement of the evaluator in the case under study, the question of confidentiality of data, access to and control of data, and anonymity in reporting are problems common to case study. These problems are intrinsic to the nature of case study as involving a social process and leading to a social product (MacDonald and Walker, p184). The experimental methods of conventional educational research are, conversely, typically asocial. The research is directed by the researcher, from the choice of subject to be studied, hypothesis formulation, experimental design and choice of instrumentation, through to the reporting of results and interpretation of data. The credibility of such methods is based on their being 'bias-free', although it is obvious that such heavily directed enquiry would very much reflect the value position(s) of those in control. Case study, on the other hand, is openly subjective. It seeks to represent the pluralism of values within the case as "a web of human relationships" (MacDonald & Walker, 1977, p184). Reporting styles such as portrayal and vignettes are used to convey the observational viewpoints, value perspectives, judgments and interpretations of program participants, rather than those of the person(s) directing the study.
The 'democratic' approach to case study evaluation has been endorsed as particularly appropriate to managing problems such as confidentiality and control of data, anonymity and bias in reporting. "Case-study research takes the researcher into a complex set of politically sensitive relationships" (MacDonald and Walker, p185). The importance of reporting different value perspectives characterised the pluralistic evaluation models to emerge in the late 1960's and early 1970's. The shift that was taking place in the evaluation field was not just one of technique, but one which concerned a change in the underlying assumptions directing enquiry.
Historical analyses by Hamilton, House and others of the development of curriculum evaluation, have revealed that the major evaluation models share a common origin in the subjectivist ethics of liberalism. However, whilst all models reflect fundamental tenets of liberal ideology, they have developed differently. The differences lie in the political assumptions upon which the models are based and how they have developed in accord with the interests of audiences, decision makers or those with control over the enquiry.
According to Hamilton and House, the majority of evaluation models have shared a problematic assumption. This was the assumption of goal consensus: the assumption that the goals of a curriculum and the criteria for its success can be agreed upon. The behavioural objectives model, the decision-making model, goal-free evaluation and more recent models such as art criticism, professional review and the legal/adversarial model have all shared this problematic assumption. Goal consensus, which reigned over evaluation history and is the essence of quantitative objectivist models, is rooted in J.S. Mill's utilitarian criterion of 'the greatest happiness', an essentially subjectivist ethic as House so clearly points out: "The objectivism ... that tends to equate objectivity with quantification relies on intersubjective agreement as the exclusive indicator of objectivity" (House, p4). 'High reliability' is ensured through the use of instrumentation with high observer (intersubjective) agreement. The use of instrumentation is, as discussed earlier, characteristic of models based on an objectivist epistemology.
As House observed, the use of the utilitarian principle in evaluation models tended to attract strong government interference in order to assist decision-making in the interests of 'all citizens'. Participatory models such as responsive and transaction evaluation instead aim at assisting decision-making through the direct participation of those closest to the program. Responsive evaluation, and the method of case study, are based on liberal ideals of democratic pluralism. Case study fits closely with the characteristics of 'democratic evaluation' as described by MacDonald. It recognizes value pluralism, seeks to represent a range of interests in its data-gathering techniques, controls problems of 'confidentiality', 'negotiation' and 'accessibility' through interaction with program participants, and acts as an information service in reporting to a range of audiences.
The emergence of case study as a method of curriculum evaluation has occurred in response to educational information needs not met by earlier methodologies. Many of these earlier methodologies sprang from a scientific research paradigm which used experimentation involving large sample sizes and numerical analysis in order to draw predictive generalizations. Application of such procedures to the field of educational evaluation proved to be inappropriate. The evaluation field grew away from the epistemological assumptions upon which such models were based, towards a subjective epistemology more embracing of the phenomenology of human behaviour. The methodology of case study has proven its capacity to increase understanding, and so to contribute to the quality of educational decision-making.
Guba, E.G. and Lincoln, Y.S. (1981), Effective Evaluation, Jossey Bass Inc., Publishers, San Francisco.
Hamilton, D. (1977), 'Making sense of curriculum evaluation: continuities and discontinuities in an educational idea', in Deakin University, 1987, Course Reader, Volume 1: Approaches and Dilemmas in Curriculum Evaluation, Deakin University, Deakin.
House, E.R. (1978), 'Assumptions underlying evaluation models', in Deakin University, 1987, Course Reader, Volume 1: Approaches and Dilemmas in Curriculum Evaluation, Deakin University, Deakin.
Kemmis, S. and Stake, R. (1988), Evaluating Curriculum, Deakin University Press, Deakin University.
MacDonald, B. (1976), 'Evaluation and the control of education', in Deakin University, 1987, Course Reader, Volume 1: Approaches and Dilemmas in Curriculum Evaluation, Deakin University, Deakin.
MacDonald, B. and Walker, R. (1977), 'Case-study and the social philosophy of educational research', in Hamilton, D., Jenkins, D., King, C., MacDonald, B., and Parlett, M. (eds), Beyond the Numbers Game, Macmillan Education Ltd, London.
McTaggart, R. (1983), 'The development of curriculum evaluation as a field of enquiry', in Deakin University, 1987, Course Reader, Volume 1: Approaches and Dilemmas in Curriculum Evaluation, Deakin University, Deakin.
Stake, R.E. (1967), 'The countenance of educational evaluation', in Deakin University, 1987, Course Reader, Volume 1: Approaches and Dilemmas in Curriculum Evaluation, Deakin University, Deakin.
Stake, R. (1975), 'To evaluate an arts program', in Kemmis, S. and Stake, R. (1988), Evaluating Curriculum, Deakin University Press, Deakin University.
Stake, R.E. (1978), 'The case study method in social inquiry', in Deakin University, 1987, Course Reader, Volume 1: Approaches and Dilemmas in Curriculum Evaluation, Deakin University, Deakin.
Stake, R. (1985), 'Case study', in Nisbet, J., Megarry, J., and Nisbet, S. (eds), World Yearbook of Education 1985: Research, Policy and Practice, Kogan Page, London.
Stenhouse, L. (1981), 'Case study in educational research and evaluation', in Deakin University, 1987, Course Reader, Volume 2: Case Study Approaches, Deakin University, Deakin.
Please cite as: Gilligan, C. (1990). A rationale for case study in curriculum evaluation. Queensland Researcher, 6(2), 39-51. http://www.iier.org.au/qjer/qr6/gilligan.html