This paper examines some important issues relating to selection from a competency based qualification into further or higher education. It addresses the question: 'Can a competency based program be used to select outstanding students; and, if not, how else can they validly and reliably be selected?' After briefly considering a ballot, the discussion looks first at how competency standards are set, and the possible effects this may have on both notions of 'competence' and selection for further study. It then examines various approaches to grading within a competency based approach, with particular emphasis on a 'merit' grade. The use of a selection process which falls outside the competency based program is then reviewed. The paper draws these various aspects into a general concluding section, which also suggests what might be the most valid procedures to follow.
From the outset, it must be stressed that the discussion which follows treats the issues both from a theoretical and a practical standpoint. It is assumed that assessment and selection processes are the result of decisions by human beings and not the necessary outcome of any specific theory of assessment or selection. It is further assumed that these decisions can therefore be changed if they are found wanting.
Nevertheless, the discussion does not assume that 'anything goes as long as it works' but seeks to establish principled approaches to the problems at hand. This runs the risk of challenging the current enthusiasm for competency, which appears on both sides of the Tasman to be a notion strongly favoured by politicians. However, assessment is a complex affair, and simplistic answers of any kind may 'work' while conveniently overlooking critical issues of validity and reliability.
While one major focus here is the selection into higher education of students emerging from vocational education and leaving courses, the discussion also applies to other situations present or developing in Australia and New Zealand. In both countries, there is widely expressed concern on the part of many teachers and researchers, especially in universities, that a move to competency based assessment will bring only mediocrity, and dampen motivation to strive for excellence. Indeed, early evidence from both countries confirms the fear that some universities at least equate 'competent' with 'average'. This is based on what is arguably the mistaken belief that a person who is competent has reached only a modest standard of achievement, similar to a pass level in more traditional norm based forms of graded examinations. Such a pass level commonly implied that the learner had managed to achieve only part of what was possible or required. It also meant that the learner was typically in the modal group and hence an 'average' student. This belief that 'competent' means 'average' is clearly wrong, especially when competency requires mastery and/or a high level of accuracy and skill.
Interestingly, there have not been similar fears of mediocrity expressed by large-scale employers or industry groups. In a recent discussion of unit standards and qualifications in New Zealand, it was noted that: 'Participation by industry has been so enthusiastic that the funding allocated has been inadequate to meet the demand' (New Zealand Qualifications Authority, 1993). There is some evidence, however, that at least smaller industries are interested in distinguishing 'better' learners from those who are simply declared as competent (Peddie, 1993a).
A final introductory comment is that the issue of access and equity is not debated in this paper. The author strongly believes that, while maintaining entry standards which offer a reasonable chance of success, addressing equity concerns by reserving places for disadvantaged groups is one small way of moving towards a fair and just society. Equity concerns should therefore be integrated into any adopted selection policies and procedures.
The answers to such a question lie largely outside assessment theory. First, it can be argued that all learners who have completed the program are 'competent' and have not simply 'gained a pass'. Secondly, it can presumably be shown that in pre-competency times a certain number of learners with similar entry characteristics and program experiences were successful in further study. A similar number of places could justifiably be requested in the first year of completion of a competency based program, with this allocation to be reviewed biennially in terms of the actual performance of the selected learners. Thirdly, it could be requested that the selecting body prepare a list of desired entry level qualities which could then be matched against the skills and understandings attained in the competency based program. This could well give a good deal of useful information which might not otherwise be obvious to the selecting body.
If the number of places is less than the number of (competent) applicants, how should this group be selected? If no further differentiation is or can be made, the use of a ballot might be necessary. As a means of selection, a ballot is reliable, transparent, practical and easy to operate. In general, it can also be regarded as valid and fair when other methods of selection are impossible.
However, it must also be accepted that those who are not selected by ballot, and who believe they are better than some who are, will feel cheated and angry. An equally serious concern is that some who are selected may well feel guilty and uncomfortable as they know that peers who are clearly better have been unsuccessful. Finally, a straight ballot does not provide for equity, unless applicants are first grouped and several ballots are held. On balance, therefore, a ballot should be viewed as a last resort.
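For illustration only, the grouped ballot just mentioned can be sketched as a simple procedure. The group labels, names and quotas below are entirely hypothetical; the point is merely that holding a separate random draw within each group is what allows a ballot to accommodate equity provisions.

```python
import random

def grouped_ballot(applicants, quotas, seed=None):
    """Draw a separate random ballot within each applicant group.

    applicants: dict mapping a group label to a list of applicant names
    quotas:     dict mapping a group label to the places reserved for it
    Returns the combined list of selected applicants.
    """
    rng = random.Random(seed)
    selected = []
    for group, places in quotas.items():
        pool = list(applicants.get(group, []))
        rng.shuffle(pool)        # the ballot itself: a fair random ordering
        selected.extend(pool[:places])
    return selected

# Hypothetical example: three general places and one reserved equity place.
applicants = {
    "general": ["Ana", "Ben", "Caro", "Dev", "Emma"],
    "equity":  ["Filip", "Grace"],
}
print(grouped_ballot(applicants, {"general": 3, "equity": 1}, seed=42))
```

Recording the seed used for the draw would also make the ballot auditable, since the same draw could be reproduced exactly for any later review.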
The remainder of this paper examines different ways in which those who do deserve selection might be selected at the conclusion of a competency based program.
Even a moment's reflection will show that these standards are only 'objective' in a very narrow sense. Once the standard is set, and the conditions are made clear to the learner and the assessor, then the assessment of the learner's performance may indeed be (relatively) objective.[2] To argue that any such standard is itself objective is to ignore the realities of measurement and assessment.
Let us suppose that a basic unit in carpentry has an outcome relating to the ability to hammer in a nail. On first sight, this appears to be relatively straightforward; either learners can hammer in a nail competently or they cannot. But the density of the wood, the length and type of the nail, the weight and balance of the hammer, and other such factors will affect the performance of the learner. This means that, unless the learning outcome is specified to what might seem an absurd extent, the assessor will have to exercise a degree of what is best described as professional judgement in assessing learner competence. In any judgement which evaluates the standard, as opposed to a straightforward 'measurement', there is inevitably some subjectivity.
In some areas, the task might appear to be much more clearcut. An engineering unit may have a very precisely defined task in terms of the measurements of a component of, say, a refrigerator. Yet it is no less true to say that the decision about these measurements was almost certainly a (subjective) professional judgement based on what was technologically possible at the time and/or the design features set down by the manufacturer.
Assuming there is a need to assess, this would presumably be because the appropriate machinery allows for incompetent performance. There is also in that case an element of subjectivity over whether a learner must always produce the component accurately or whether an occasional failure is permissible. If this seems absurd in practice, as the task is in fact a very simple one, it should nevertheless be recognised that a subjective judgement is still being made.
Of course there are learning outcomes involving health and safety where a particular standard is absolutely necessary for us to be confident that the learner can proceed. Nevertheless, performance at that standard is no guarantee of future performance. An anaesthetist may well perform competently on several assessments, and subsequently over a large number of surgical operations. Yet if, on the day after a close friend is innocently killed in an alcohol-related car accident, the anaesthetist makes an error involving a drunk driver, we would be highly unlikely to blame the assessment procedures used during training.
It is perhaps now obvious that the examples of both the anaesthetist and the carpentry student show how standards are, in fact, based on our experience and expectations and not on some externally determinable objective standard (Peddie, 1992). We may decide that a trainee anaesthetist must demonstrate competence on a number of occasions, in several locations, and under varying degrees of (artificial) stress. We will still make a reasonable judgement about the likely performance of such trainees when we assess their competence, and we hope that any unforeseen circumstances will then be dealt with by the individual or some system of support.
In other words, we make a subjective decision about both the manner and the extent to which we will assess competence. This is so, even in fields where life-and-death decisions have to be made by the learner who is judged 'competent'. We set the competency standard at a level we believe, based on our expectations and experience, is most appropriate for normal on-the-job performance.
Another way of putting this, as has been observed frequently in the literature (see McGaw, 1993), is that we infer competence on the basis of performance; we do not observe it directly. Our avoidance of talk of competence by arguing that what we are doing is simply 'measuring against the standards' does not negate this point. To say someone has 'met the standards' or 'performed at the specified level' means that we have judged them to have performed in a manner similar to someone who we have reason to believe is competent in the elements of the unit in question.
It is perhaps a pity that those who regard competence as something to be judged directly from performance are not more clearly aware of the debates in this area in child language acquisition during the 1950s and 1960s. In that period, Noam Chomsky and others showed the inadequacy of views which focussed solely on behaviour and demonstrated that performance was simply the observable evidence which gave partial confirmation and clues about the nature of the competence which had been acquired by the learner.
What all of this leads to is the strong argument that there is no abstract, external, objective 'competency standard' somewhere out there, just waiting to be incorporated into a unit. Similarly, there is no abstract, external and objective 'competency' which will be acquired by the learner as one might acquire a hamburger from the corner takeaway. A learner is assessed as competent (or not) in terms of the standards set. This competence may be, and often is, quite different from the competence of the experienced and/or expert worker, even if the task which both are doing is identical.[3]
If, then, the issue of selection is added to this situation, it is a simple step to see that altering the standard - a human decision - will alter the proportion of students who are likely to achieve that standard. This, in theory, would allow selection to be made on the basis of the (deliberately manipulated) proportion of those who are successful.
If, for example, I wish to select a small promising group of tennis players for further coaching, I might set the competency standard for successful serving at 90 per cent. Following that decision, far fewer learners will achieve that standard than if it were only 70 per cent. This will be particularly the case if I add some range statements about the speed of the serve and the placement of the ball in the service court.
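The effect of shifting the standard can be made concrete with a small simulation. The cohort figures and the assessment of 50 serves below are illustrative assumptions, not empirical data.

```python
import random

def proportion_meeting(success_rates, standard, serves=50, seed=1):
    """Fraction of learners whose assessed serve percentage meets the standard.

    success_rates: each learner's underlying probability of a successful serve
    standard:      required proportion of successful serves (e.g. 0.9)
    """
    rng = random.Random(seed)
    passed = 0
    for p in success_rates:
        hits = sum(rng.random() < p for _ in range(serves))
        if hits / serves >= standard:
            passed += 1
    return passed / len(success_rates)

# Hypothetical cohort: underlying success rates spread from 0.60 to 0.95.
cohort = [0.60, 0.65, 0.70, 0.75, 0.80, 0.85, 0.90, 0.95]
print(proportion_meeting(cohort, 0.70))   # standard set at 70 per cent
print(proportion_meeting(cohort, 0.90))   # standard set at 90 per cent
```

Because the same simulated serves are scored against both standards, every learner who meets the 90 per cent standard necessarily meets the 70 per cent one, so raising the standard can only shrink the successful group.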
Yet while such a move may solve one problem, it clearly raises others. We do have expectations and experience in terms of what learners can and cannot reasonably be expected to do. But it is also quite clear that one of the prime goals of a shift to competency based assessment was to eliminate the expectation of failure which is built into most norm referenced systems. If the competency standard is set so high that only a few can be expected to reach it, this creates serious concern over 'pass' rates.
A second problem is that raising the standard in no way guarantees that only a few will reach that standard. Superb teaching, coupled with a high level of motivation and a good entrance level might result in a large proportion of particular groups of learners reaching the high standard set. In a competency based program there would then be no fair way of retrospectively setting the standard even higher, to ensure that only a small number could be selected.
A third problem relates to the overall program of learning. In many cases, a much higher standard would already be one of the learning outcomes of a subsequent unit placed at a higher level on the qualifications framework, or belonging to a higher level qualification in which selection was not an issue. In such cases, the amendment of the standard for one unit could have very complex and interactive effects on what was prescribed for other units.
Yet perhaps this approach should not be dismissed out of hand. Might it be possible and even considered desirable to set the standard of one or two of the last units in a program at a very high level, and to use success in these units as a selection device? Unfortunately, the three issues just mentioned would still apply: a very high standard right at the end would prove demoralising for the 'good-average' learner; there could be no guarantee of producing the 'right' number of successful learners; and there could still be awkward programming and qualification issues arising from the artificially raised standard, particularly where the program is a lower-level qualification and a higher-level one already exists.
It would seem clear from this brief review that artificially manipulating the standard in one or more units may or may not allow for selection. Furthermore, it will always have some undesirable effects, and will often create serious program and other curriculum issues.
First, however, it must be assumed that we are talking here about grades which - perhaps among other things - measure performance above the predetermined competency standard. While grades might well be used to show learners how far their performance is below being competent, that is not the issue here. The present discussion is about selection, and therefore about ways of selecting 'the best'.
This leads to the consideration of an important fact: many learning outcomes do not allow the valid use of more than two grades (competent, not yet competent). In particular, in most cases of competency based assessment it would be foolish to try to use grades where a totally accurate performance was required for competence, especially where health and safety are involved. The phrase 'in most cases' suggests there are some cases where this is possible; but where in fact could we use grades in such a case? The answer lies primarily in the validity and reliability of the measures selected.
For example, to some observers, speed is a commendable attribute, regardless of whether speed is in fact essential or even desirable for the learning task. A fast finisher, then, might be awarded a higher grade. For others, the (apparent) motivation and interest of the learner add a dimension which might be rewarded. In these cases, however, we are in fact relying on features which appear to lie outside the competency framework. We will return to a discussion of the use of such features below.
A second point is that what most employers and higher education selection agencies will be looking for is the 'highest' grade or combination of grades. It therefore becomes slightly less important to specify a number of grade levels, and more important to consider first a 'merit' level. At this point, no distinction is drawn between 'merit' and 'excellence'; this distinction is, however, examined in a later section of this paper.
To begin with, what valid criteria for a 'merit' standard might be used in units where there is some sort of 'ceiling' above the desired competency level? Peddie (1993b, 1993c) has suggested that for a single merit standard or grade there is a variety of ways which might be considered. First, we will consider approaches that relate more closely to the learning outcomes themselves.
Conceptually, therefore, the first two approaches are very similar. They have the strong attraction of being what we regularly think of as meritorious or excellent, and they would both seem to be appropriate for determining a small group for selection.
Nevertheless, there are some problems here. These relate to measurement, motivation and a variety of other issues. The following points discuss some of the more obvious of these problems.
This point was discussed earlier in terms of manipulating the competency standard. The same problems noted there apply here, although one point can be added. Earlier discussion showed that no upper limit of meritorious students can be pre-specified, as good teaching and an unusually strong group may result in higher numbers than expected. But, if the merit level is set at a very high level, the result may well be that fewer learners reach the merit standard than the selection process requires. This creates the same sort of problems in selection as a system in which too many learners reach the standard. Unless the grades are then statistically manipulated, which is a violation of the competency based approach,[5] there is no valid way of determining which of the remaining candidates should be selected.
It is reasonable to say that many learning outcomes in a competency based program will have speed as one of the aspects to be assessed. To take a fairly obvious example, it would be somewhat peculiar if accuracy in keyboard skills did not involve any sort of time limit on the performance. It would be equally true that an apprentice carpenter who took five minutes to hammer in each nail - however accurate the performance - would not be highly regarded on a building site.
If speed is built in as a performance measure for one or more learning outcomes, this simply takes us back to the first two approaches involving performance above the competency standard. At that point, most of the concerns expressed above about those approaches can simply be repeated. Are there, then, cases where speed could be a convenient and appropriate measure for the award of merit but where it was not an aspect of the learning outcome?
Two cases where this might be acceptable and remain valid would be the speed at which the learner completed the unit (or achieved one or more of the learning outcomes) and the speed at which the learner demonstrated competence. The former case would be acceptable because we quite normally think of those who learn the same material faster as being 'better' than their slower counterparts. It could be seen as valid because it was thus a measure of merit which is both widely accepted and which does not alter the nature of the competency required. The latter case (speed of performance) is commonly associated with expertise and would seem to be quite acceptable for that reason.
Such an approach does introduce a new dimension to the learning process. In New Zealand, at least, one of the major publicised features of the new competency based approach has been that it eliminates the notion of 'time-served' measures, and allows learners to proceed at their own pace. If speed of attaining learning outcomes were to become the criterion for merit, then this could have potentially drastic implications for 'non-merit' learners in either the classroom or a workplace learning situation.
There is a further complication. Speed of completion, either of the unit or of assessment tasks, may in part be a reflection of ability or 'merit', but in part is also a reflection of prior learning and experience. It would seem most odd to claim that a learner who asked for assessment on the basis of prior learning, and who immediately reached the credit/competency standard should be awarded a merit grade for having shown exceptional speed! Yet, if it is accepted that all learners bring with them to many classrooms very different prior learning experiences, it is difficult to see how speed alone can reflect 'merit'.
Does it matter, however, if the object is to determine which learners can profit by further and advanced forms of learning? Put another way, if one learner does finish much faster than others, is it not this result that matters? A moment's reflection will show that, while there may be some element of truth in that claim, it is clearly only part of the story. If a learner has spent some time developing basic skills in one arena, and then comes rapidly to show competence in that same set of skills as part of a competency assessment, this does not automatically mean that such a learner will perform rapidly or successfully in tasks involving more advanced skills.
There are other practical and theoretical issues to consider here. What would we do when one learner demonstrated transfer of one skill to a variety of contexts while another learner demonstrated transfer of several skills into one new context? Would both learners be considered equally meritorious? It may be that the answer to such a question can reasonably be determined on the basis of the goals of a program, or the eventual needs of an industry sector, but the validity of this determination needs to be explicit.
While there are several significant practical concerns, the most important of these questions appears to be the first. It does seem odd to say that a particular set of learning outcomes is required to be achieved for a learner to be declared 'competent' but that something different is required for the learner to be selected as 'excellent'. There would seem to be a much stronger conceptual basis for considering more closely an approach in which the 'normal' learning outcomes themselves were the focus of any measure.
A further point here is the possible link to speed of learning (or of performance). If a learner is expected to achieve additional learning outcomes in the same time as learners who achieve only what is specified for competence, then the earlier discussion on speed as a criterion needs to be revisited. If the time frame is not specified or important, then a new factor arises: to what extent would we feel comfortable with the notion that a learner might simply keep on going until, well after other learners have passed on to a new higher level unit, this plodder finally achieves the additional learning outcomes and is awarded a 'merit' grade?
First, the most creative people are not necessarily the highest achievers in a more conventional sense. A creative learner may indeed produce an innovative response or performance, but not one in which high levels are achieved in the more practical aspects of that performance. There are of course learning domains like the arts where creativity is valued as a central part of a learning outcome, but these are relatively limited.
Nevertheless, it should not be forgotten that higher education institutions typically applaud learners who offer 'creative' analyses in areas as diverse as, for example, literature, mathematics and sociology. The same, by extension, can be said for many performances in engineering, architecture and some branches of science, but arguably less so for medicine, law, languages and some other aspects of the sciences.
Yet is this 'creativity of solutions' the same as creativity in a work of art? In one sense it is, as it involves the understanding of the parameters of a situation and extending them in some novel way. In another sense, there would seem to be a clear difference between a self-initiated form of creativity and one in which a learner responds to an issue raised by a teacher, or even writes an essay which is genuinely 'creative' (or 'innovative'), but has very little to do with the question asked - a situation many teachers will recognise immediately!
More importantly, even if there were to be agreement that creativity should be counted as a measure of merit or excellence, the problems of reliability of measurement are both serious and again well known. While there are ways around this, they tend to involve several judges and a range of performances.[8] This can sometimes lead to severe practical difficulties.
Finally, there are cases where creativity is neither expected nor welcomed at the early stages of a learning process. The assessing of learners in fields like automotive engineering, carpentry, computer technology and many other vocational areas is not concerned in the early stages with flair or creativity. This does not seem therefore to be an option which could have widespread application. Yet in the arts, hairdressing, fashion technology, interior design and in many other fields, if valid criteria for creativity can be determined for a competency level, then a merit level may well equally be possible.
A special case of considering the good learner does occur currently in educational selection. There are several prestigious scholarships and awards (including the Rhodes Scholarship) where personal qualities are part of the selection criteria. While this is true, these appear to be limited to cases where selection needs to be made among those whose scholastic excellence is well-established. There do not appear to be cases where personal qualities alone count as the criteria for selection.[9]
First, can ranking be used in a competency based assessment approach? The short answer is 'possibly'. A system of multiple grades can be used in which one grade point is properly specified as the competency level, provided no scaling is applied to ensure only certain numbers of learners reach this level. In one sense the positive answer is still debatable as it does depend on the definition of 'competency based'. The approach taken here is that the critical characteristic distinguishing a norm based approach from a competency based approach is that in the former approach ranking and a suitable spread of scores are the prime and conscious goals while in the latter approach they are not.
In a norm based approach it is also common to use statistics to scale the results to ensure an appropriate spread of marks. This is at odds with a competency based approach. There, the prime focus is helping learners reach one standard which has been specified as the competency level. Learners may then be quite confident that, if they personally reach that standard, they will be declared competent and credited with that unit.[10]
In a norm based system, then, the prime focus of the assessment (but not necessarily the teaching) is to spread the students as widely as possible so that a reliable rank order can be determined. This is done either by choosing an appropriate difficulty level for the assessment, or by statistical manipulation of the marks gained, or both. There is therefore no certainty about outcomes on the basis of performance for the individual learner. While the 'very good' learner will still normally gain a very good result, the specific rank and mark awarded may well be determined not only by their individual performance but also by the performances of all other learners attempting the unit or tasks, and especially the number and quality of other 'very good' learners.[11]
It should nevertheless be recognised that ranking systems can incorporate full feedback and profiles of learner strengths and weaknesses. It is also true that the 'pass' level can as easily be set in terms of competence as it might be in terms of experimentally determined norms. This is because the setting of our competency levels almost always relates to our underlying expectations and experience, in other words, it is related to more informally determined norms (Peddie, 1993b). Similarly, it should not be thought that in competency based systems there cannot be attention paid to relative degrees of success in a class group.
It does seem fair to suggest that in a competency based program the focus is on helping all learners to reach the competency standard. In a ranking system, the focus is, or certainly should be, on helping each individual learner to do their best. Assessments are then typically set in such a way as to allow all learners to perform at optimum levels. In a mixed-ability group this almost necessarily means offering a system where a number of grade or percentage points is used as a convenient summary of learner achievement.[12] In New Zealand, at least, it has also tended to be the case that when a pass/fail level is used this base has sometimes been set at or near the mid-point of these grades or percentages. In a competency based approach, these practices are totally unnecessary.
It should be noted in any case that the provision of a mid-point pass/fail mark in either a ranking or any norm based system is perhaps only defensible when selection is required on the basis of a fairly coarse measure and in terms of some notional standard. Thus, a 'pass' in the Year Ten School Certificate examinations in New Zealand used to mean that a learner was, broadly speaking, in the upper half of the year group, and had achieved at a more or less identifiable level in the subjects passed. This was achieved by scaling marks to ensure a normal distribution.[13] There would seem, however, to be no theoretical justification for setting a pass/fail point in a norm based or ranking system unless selection needs to be made in these (largely predetermined) terms.
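The scaling referred to above is, in its simplest form, a linear transformation of raw marks to a predetermined mean and spread. A minimal sketch follows, in which the target mean of 50 and standard deviation of 15 are illustrative assumptions rather than any actual examination prescription.

```python
import statistics

def scale_marks(raw, target_mean=50.0, target_sd=15.0):
    """Linearly rescale raw marks to a chosen mean and standard deviation.

    The transformation preserves rank order exactly; only the centre and
    spread of the marks change.
    """
    mean = statistics.mean(raw)
    sd = statistics.pstdev(raw)
    if sd == 0:                  # all marks identical: nothing to spread
        return [target_mean for _ in raw]
    return [target_mean + (x - mean) * target_sd / sd for x in raw]

raw = [38, 44, 51, 55, 62, 70]
scaled = scale_marks(raw)
```

Because the transformation is linear, rank order is unchanged, but around half of the cohort will typically fall below the mid-point 'pass' mark regardless of how well the group as a whole has performed - precisely the feature a competency based approach sets out to avoid.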
While this is by no means new, it is worth stressing in any discussion of selection. Selection based on merit requires some form of ranking. The key issue then becomes the extent to which we can validly and reliably say that the lowest ranked learner selected is 'better' than the highest who is not. As it may be assumed that not all of those who achieve at the competency level will be selected (if they were, a paper like this would be unnecessary), the theoretical - and the practical - focus must be squarely on the 'cut-off' point. To add to the problem here, the cut-off point will presumably vary on different occasions and for different purposes, as the numbers being selected from successful learners who have completed a competency based program may themselves be affected by the numbers applying on a different basis.
To sum up this section, the following conclusions can be drawn.
Excellence carries with it not only a notion of being outstanding, but also the notion of a certain exclusivity (Peddie, 1993a). Thus, while it probably does not violate our notions of appropriate assessment policy to say that the aim of a program is to have the whole group of learners perform at a merit standard, there is arguably something slightly odd about saying that as a matter of course all learners could be (equally) 'excellent' in performing a particular skill.[14] In other words, it can be argued that learners are 'excellent' in comparison with another larger group of learners who are not excellent. While therefore it makes sense to suggest that a merit standard could be the highest of an equal division of grades or levels, it is likely that we would want a standard of excellence to be somewhat 'smaller' than other grades.[15]
In some contexts, this distinction may simply be noted; in others, and where selection is the focus, the difference is critical. If a high standard is required, but there are no very strict limits on numbers, a merit grade might be quite adequate as a basis for selection. If places are limited and entry is competitive, then 'excellence' may be the major focus. In that case, the intention of the selection process may well be the elimination of even the very good learners, and the selection grade(s) will be deliberately set so that only the few learners required are expected to attain the standard(s). If the distinctions drawn here are accepted, it becomes clear that setting a standard for 'excellence' appears to be conceptually at odds with competency based teaching and assessment, even if strict ranking is not utilised (Peddie, 1993a). This leads to consideration of separate assessment for selecting those learners who are 'excellent'.
The notion that such a test would not 'disrupt' normal teaching, then, is true only in terms of the assessing time itself. Furthermore, those learners who (in theory) are not seeking selection would be affected by the very fact that most teachers would 'teach to the test'. It is likely that they, too, would aim for selection, particularly if the stakes were perceived as high. Given the likelihood of a test which attempts to discriminate only among better learners, this could have significant effects on the morale of otherwise 'competent' students.
The use of a norm based and/or ranking test would also tend to lead to the kind of competition in the learning situation that protagonists of competency based programs are seeking to avoid. Similarly, if it were known that the test could include features or approaches lying outside the normal competency based program, conscientious teachers would tend to include these in their teaching.
If it is decided that such a test should be held, how can these and other problems be minimised? First, a selection test should focus mainly on the normal content of the competency based program. Second, if a prognostic aspect is believed desirable, this should be tested by assessing the ability of learners to adapt their skills to new contexts or situations, even if the program for which selection is being made does involve other sorts of skills.[17] Third, the test should be norm based across the whole group of learners, with an effort to ensure that any learner already assessed as competent will score at least at the half-way point.
These proposals strongly suggest that it should be the institution or organisation in which the competency based program is running which should prepare and run the selection test, with the obvious addition of consultation with the selecting body to ensure the test is acceptable for their purposes. It is far less likely that the latter would prepare a test which would avoid the difficulties raised earlier in this section.
In many spheres of life, it is surprisingly common to select for a single outcome learners who show different but basically equally valued characteristics. Sports teams do not select players for only one position; outstanding scholars can have superb research or writing or other skills; companies may well select manager trainees who show a variety of strengths which may collectively be useful to the company. Thus, it is arguably both valid and acceptable to have more than one approach to merit or excellence in a program or qualification, and to use all such approaches for selection purposes. The key point here is that those selecting be completely aware of the different bases for merit used in the program(s) from which learners are being selected.
It must be emphasised that all of these points need to be taken into consideration when a decision on selection policies is made for learners who are completing qualifications awarded on the basis of standards or competency based assessment.
This approach is clearly the 'purest' approach to selection within a competency based framework. It accepts that the best way to select learners is in terms of their actual achievement in identical or highly comparable situations. While it tends to ignore the effects of entry skills and abilities, it does focus on actual abilities in terms of outcomes, a key point in competency and standards based approaches.
The warnings about setting the standard too high, the effects on teaching methods and on other learners, and the risk that a larger than required group of well taught and well motivated learners might reach the higher standard are all points to take into consideration.
The assumption is made here that for a recognised qualification and/or curriculum there will never be complete freedom to choose among totally optional units. On the other hand, it is clear that for some courses of study all units will be compulsory. In the variety of cases that may occur, it is strongly recommended that the following rules be followed, regardless of the method used to determine merit.
Using merit in a situation where there is only a small group of compulsory units could have serious implications in terms of errors of measurement and consequent level of reliability. To take a very simple example, a learner who just fails to reach a merit standard in two or three compulsory units, but achieves at a very high level in a number of optional units where no merit standard is available, is likely to be disadvantaged by the system rather than by skill level in the program as a whole.
In such cases, it is inevitable that the core units will require an approach to merit which is at best simply different from, and at worst less valid than, that used in the optional units. In the reverse case, where only compulsory units have an identifiable ceiling, the use of merit standards for these units is highly recommended.
What is being proposed may sound like a ranking test, but this is not the case. The competency level for this extra test could be set at quite a high standard, and it would simply have to be accepted by both learners and the selecting body that the numbers of successful learners may well not match the places available. The goal in setting the standard might well be to err on the side of having too many successful learners. At least, then, a meritorious group would have been identified. Whatever its faults, a ballot could then be used when necessary as the fairest way to select among this group.
The second option is to run a norm referenced ranking test along the lines discussed earlier. This has the advantage of probably making selection easier, but the first option is arguably more valid.
Should such a ballot, however, be restricted to those whom the competency based program teachers believe have demonstrated work habits and personal qualities which suggest they will benefit most from further study? While this is intuitively attractive, it does relate to a means of selection already rejected as not being part of a Western ideal of merit. It would seem odd, therefore, to use such criteria as part of a 'pre-selection' for a ballot.
Finally, an important consequence of all these considerations is the need for considerable discussion over selection procedures among all parties involved. Whatever solutions are initially adopted should be continually monitored for effectiveness and equity. Only in this way will improvements be made in an area where an examination of both theory and practice reveals that selection is always going to be an issue of complex debate.
New Zealand Qualifications Authority. (1993). Briefing papers for the incoming government. Wellington: New Zealand Qualifications Authority.
Peddie, R. A. (1992). Beyond the norm? An introduction to standards-based assessment. Wellington: New Zealand Qualifications Authority.
Peddie, R. A. (1993a). Achieving excellence: A second report on merit in competency-based assessment (Report to the New Zealand Qualifications Authority). Auckland: The University of Auckland.
Peddie, R. A. (1993b). Standards, levels, grades and merit: A critical analysis (Proceedings of the National Assessment Research Forum conducted by the Competency-Based Working Party of the Vocational Education, Employment and Training Advisory Committee, Sydney, 1-2 April 1993).
Peddie, R. A. (1993c). Standards of excellence: The use of merit awards in competency-based assessment (Report to the New Zealand Qualifications Authority). Wellington: New Zealand Qualifications Authority.
Author details: Roger A. Peddie is Associate Professor and Director of the Centre for Continuing Education, The University of Auckland, New Zealand. After working as a secondary school teacher and as a teachers' college lecturer, he was appointed in 1978 to the Education Department of the University of Auckland. He took up his present position in 1992. He has undertaken extensive research for the New Zealand Qualifications Authority on standards based assessment and has recently co-authored (with Bryan Tuck) a book on standards based assessment, Setting the Standards.
Please cite as: Peddie, R. A. (1997). Some issues in using competency based assessment in selection decisions. Queensland Journal of Educational Research, 13(3), 16-45. http://education.curtin.edu.au/iier/qjer/qjer13/peddie.html