[ Contents Vol 13, 1997 ] [ QJER Home ]

Some issues in using competency based assessment in selection decisions

Roger A. Peddie
Centre for Continuing Education
The University of Auckland
This paper examines some important issues relating to selection from a competency based qualification into further or higher education. It addresses the question: 'Can a competency based program be used to select outstanding students; and, if not, how else can they validly and reliably be selected?' After briefly considering a ballot, the discussion looks first at how competency standards are set, and the possible effects this may have on both notions of 'competence' and selection for further study. It then examines various approaches to grading within a competency based approach, with particular emphasis on a 'merit' grade. The use of a selection process which falls outside the competency based program is then reviewed. The paper draws these various aspects into a general concluding section, which also suggests what might be the most valid procedures to follow.

From the outset, it must be stressed that the discussion which follows treats the issues both from a theoretical and a practical standpoint. It is assumed that assessment and selection processes are the result of decisions by human beings and not the necessary outcome of any specific theory of assessment or selection. It is further assumed that these decisions can therefore be changed if they are found wanting.

Nevertheless, the discussion does not assume that 'anything goes as long as it works' but seeks to establish principled approaches to the problems at hand. This runs the risk of challenging the current enthusiasm for competency which appears on both sides of the Tasman to be a notion strongly favoured by politicians. However assessment is a complex affair and simplistic answers of any kind may 'work' but conveniently overlook critical issues of validity and reliability.

While one major focus here is the selection into higher education of students emerging from vocational education and training courses, the discussion also applies to other situations present or developing in Australia and New Zealand. In both countries, there is widely expressed concern on the part of many teachers and researchers, especially in universities, that a move to competency based assessment will bring only mediocrity, and dampen motivation to strive for excellence. Indeed, early evidence from both countries confirms the fear that some universities at least equate 'competent' with 'average'. This is based on what is arguably the mistaken belief that a person who is competent has reached only a modest standard of achievement, similar to a pass level in more traditional norm based forms of graded examinations. Such a pass level commonly assumed that the learner had managed to achieve only part of what was possible or required. It also meant that the learner was typically in the modal group and hence an 'average' student. This belief that 'competent' means 'average' is clearly wrong, especially when competency requires mastery and/or a high level of accuracy and skill.

Interestingly, there have not been similar fears of mediocrity expressed by large-scale employers or industry groups. In a recent discussion of unit standards and qualifications in New Zealand, it was noted that: 'Participation by industry has been so enthusiastic that the funding allocated has been inadequate to meet the demand' (New Zealand Qualifications Authority, 1993). There is some evidence, however, that at least smaller industries are interested in distinguishing 'better' learners from those who are simply declared as competent (Peddie, 1993a).

A final introductory comment is that the issue of access and equity is not debated in this paper. The author strongly believes that, while maintaining entry standards which offer a reasonable chance of success, addressing equity concerns by reserving places for disadvantaged groups is one small way of moving towards a fair and just society. Equity concerns should therefore be integrated into any adopted selection policies and procedures.


A preliminary aspect to consider is whether teachers in a competency based program can convince the selecting body that any places should be allotted to their 'only competent' learners. As noted above, there is a risk that some institutions may take a negative line, particularly those opposed to competency based approaches and to what are perceived as government or industry controlled assessment and qualifications frameworks.

The answers to such an approach lie largely outside assessment theory. First, it can be argued that all learners who have completed the program are 'competent' and have not simply 'gained a pass'. Secondly, it can presumably be shown that in pre-competency times a certain number of learners with similar entry characteristics and program experiences were successful in further study. A similar number of places could justifiably be requested in the first year of completion of a competency based program, with this allocation to be reviewed biennially in terms of the actual performance of the selected learners. Thirdly, it could be requested that the selecting body prepare a list of desired entry level qualities which could then be matched against the skills and understandings attained in the competency based program. This could well give a good deal of useful information which might not otherwise be obvious to the selecting body.

If the number of places is less than the number of (competent) applicants, how should this group be selected? If no further differentiation is or can be made, the use of a ballot might be necessary. As a means of selection, a ballot is reliable, transparent, practical and easy to operate. In general, it can also be regarded as valid and fair when other methods of selection are impossible.

However, it must also be accepted that those who are not selected by ballot, and who believe they are better than some who are, will feel cheated and angry. An equally serious concern is that some who are selected may well feel guilty and uncomfortable as they know that peers who are clearly better have been unsuccessful. Finally, a straight ballot does not provide for equity, unless applicants are first grouped and several ballots are held. On balance, therefore, a ballot should be viewed as a last resort.
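A grouped ballot of this kind is easy to state algorithmically. The sketch below is purely illustrative and uses hypothetical names (stratified_ballot, an applicants mapping keyed by equity group); it is not drawn from any actual selection procedure. It reserves a quota of places for each designated group, ballots within those groups first, and then ballots the remaining places across all unselected applicants:

```python
import random

def stratified_ballot(applicants, places, reserved, seed=None):
    """Draw a ballot with some places reserved per equity group.

    applicants: dict mapping group name -> list of applicant ids
    places:     total places available
    reserved:   dict mapping group name -> places reserved for that group
    Any places left after the reserved draws are balloted across
    all applicants not already selected.
    """
    rng = random.Random(seed)
    selected = []
    # First round: a separate ballot within each equity group.
    for group, quota in reserved.items():
        pool = applicants.get(group, [])
        selected.extend(rng.sample(pool, min(quota, len(pool))))
    # Second round: a general ballot over everyone not yet selected.
    remaining = [a for pool in applicants.values() for a in pool
                 if a not in selected]
    general = max(0, places - len(selected))
    selected.extend(rng.sample(remaining, min(general, len(remaining))))
    return selected
```

A single general ballot is simply the special case where no places are reserved; the grouped version trades a little simplicity for the equity provision discussed above.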

The remainder of this paper examines different ways in which those who do deserve selection might be selected at the conclusion of a competency based program.

Setting the competency standard


When a competency based program is developed which in turn leads to a vocational qualification, in both Australia and New Zealand this program is normally divided into a number of what, for simplicity's sake, will be called 'units'.[1] Each unit typically includes a set of 'elements' or 'learning outcomes'. These elements are further clarified by some form of outcome statement describing the level or standard of performance which the learners must demonstrate for their performance to be counted towards the satisfactory completion of the unit. This standard is, if some proponents of the system are to be believed, an objective statement of the competency level required on the job and/or to proceed to a unit located at the next level.

Even a moment's reflection will show that these standards are only 'objective' in a very narrow sense. Once the standard is set, and the conditions are made clear to the learner and the assessor, then the assessment of the learner's performance may indeed be (relatively) objective.[2] To argue that any such standard is itself objective is to ignore the realities of measurement and assessment.

Let us suppose that a basic unit in carpentry has an outcome relating to the ability to hammer in a nail. On first sight, this appears to be relatively straightforward; either learners can hammer in a nail competently or they cannot. But the density of the wood, the length and type of the nail, the weight and balance of the hammer, and other such factors will affect the performance of the learner. This means that, unless the learning outcome is specified to what might seem an absurd extent, the assessor will have to exercise a degree of what is best described as professional judgement in assessing learner competence. In any judgement which evaluates the standard, as opposed to a straightforward 'measurement', there is inevitably some subjectivity.

In some areas, the task might appear to be much more clearcut. An engineering unit may have a very precisely defined task in terms of the measurements of a component of, say, a refrigerator. Yet it is no less true to say that the decision about these measurements was almost certainly a (subjective) professional judgement based on what was technologically possible at the time and/or the design features set down by the manufacturer.

Assuming there is a need to assess, this would presumably be because the machinery involved allows for incompetent performance. There is also in that case an element of subjectivity over whether a learner must always produce the component accurately or whether an occasional failure is permissible. If this seems absurd in practice, as the task is in fact a very simple one, it should be recognised nevertheless that a subjective judgement is still being made.

Of course there are learning outcomes involving health and safety where a particular standard is absolutely necessary for us to be confident that the learner can proceed. Nevertheless, performance at that standard is no guarantee of future performance. An anaesthetist may well perform competently on several assessments, and subsequently over a large number of surgical operations. Yet if on the day after a close friend is innocently killed in an alcohol-related car accident the anaesthetist makes an error involving a drunk driver, we would be highly unlikely to blame the assessment procedures used during training.

It is perhaps now obvious that the examples of both the anaesthetist and the carpentry student show how standards are, in fact, based on our experience and expectations and not on some externally determinable objective standard (Peddie, 1992). We may decide that a trainee anaesthetist must demonstrate competence on a number of occasions, in several locations, and under varying degrees of (artificial) stress. We will still make a reasonable judgement about the likely performance of such trainees when we assess their competence, and we hope that any unforeseen circumstances will then be dealt with by the individual or some system of support.

In other words, we make a subjective decision about both the manner and the extent to which we will assess competence. This is so, even in fields where life-and-death decisions have to be made by the learner who is judged 'competent'. We set the competency standard at a level we believe, based on our expectations and experience, is most appropriate for normal on-the-job performance.

Another way of putting this, as has been observed frequently in the literature (see McGaw, 1993), is that we infer competence on the basis of performance; we do not observe it directly. Our avoidance of talk of competence by arguing that what we are doing is simply 'measuring against the standards' does not negate this point. To say someone has 'met the standards' or 'performed at the specified level' means that we have judged them to have performed in a manner similar to someone who we have reason to believe is competent in the elements of the unit in question.

It is perhaps a pity that those who regard competence as something to be judged directly from performance are not more clearly aware of the debates in this area in child language acquisition during the 1950s and 1960s. In that period, Noam Chomsky and others showed the inadequacy of views which focussed solely on behaviour and demonstrated that performance was simply the observable evidence which gave partial confirmation and clues about the nature of the competence which had been acquired by the learner.

What all of this leads to is the strong argument that there is no abstract, external, objective 'competency standard' somewhere out there, just waiting to be incorporated into a unit. Similarly, there is no abstract, external and objective 'competency' which will be acquired by the learner as one might acquire a hamburger from the corner takeaway. A learner is assessed as competent (or not) in terms of the standards set. This competence may be, and often is, quite different from the competence of the experienced and/or expert worker, even if the task which both are doing is identical.[3]

If, then, the issue of selection is added to this situation, it is a simple step to see that altering the standard - a human decision - will alter the proportion of students who are likely to achieve that standard. This, in theory, would allow selection to be made on the basis of the (deliberately manipulated) proportion of those who are successful.

If, for example, I wish to select a small promising group of tennis players for further coaching, I might set the competency standard for successful serving at 90 per cent. Following that decision, far fewer learners will achieve that standard than if it were only 70 per cent. This will be particularly the case if I add some range statements about the speed of the serve and the placement of the ball in the service court.
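The arithmetic behind this tennis example can be sketched with a toy simulation. All the figures below are invented for illustration only (a hypothetical cohort of 200 learners with roughly normally distributed serving success rates); the point is simply that moving the cutoff from 70 to 90 per cent collapses the proportion who meet the standard:

```python
import random

def pass_rate(scores, standard):
    """Proportion of learners whose success rate meets the standard."""
    return sum(s >= standard for s in scores) / len(scores)

rng = random.Random(1)
# Hypothetical serving success rates for a cohort of 200 learners,
# clipped to the interval [0, 1].
scores = [min(1.0, max(0.0, rng.gauss(0.75, 0.10))) for _ in range(200)]

# Raising the cutoff from 70% to 90% sharply reduces who 'passes'.
print(pass_rate(scores, 0.70))
print(pass_rate(scores, 0.90))
```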

Yet while such a move may solve one problem, it clearly raises others. We do have expectations and experience in terms of what learners can and cannot reasonably be expected to do. But it is also quite clear that one of the prime goals of a shift to competency based assessment was to eliminate the expectation of failure which is built into most norm referenced systems. If the competency standard is set so high that only a few can be expected to reach it, this creates serious concern over 'pass' rates.

A second problem is that raising the standard in no way guarantees that only a few will reach that standard. Superb teaching, coupled with a high level of motivation and a good entrance level might result in a large proportion of particular groups of learners reaching the high standard set. In a competency based program there would then be no fair way of retrospectively setting the standard even higher, to ensure that only a small number could be selected.

A third problem relates to the overall program of learning. In many cases, a much higher standard would already be one of the learning outcomes of a subsequent unit placed at a higher level on the qualifications framework, or belonging to a higher level qualification in which selection was not an issue. In such cases, the amendment of the standard for one unit could have very complex and interactive effects on what was prescribed for other units.

Yet perhaps this approach should not be dismissed out of hand. Might it be possible, and even considered desirable, to set the standard of one or two of the last units in a program at a very high level, and to use success in these units as a selection device? Unfortunately, the three issues just mentioned would still apply: a very high standard right at the end would prove demoralising for the 'good-average' learner; there could be no guarantee of producing the 'right' number of successful learners; and there could still be awkward programming and qualification issues as a result of the artificially higher standard, particularly where the qualification sits below a higher-level one on the framework.

It would seem clear from this brief review that artificially manipulating the standard in one or more units may or may not allow for selection. Furthermore, it will always have some undesirable effects, and will often create serious program and other curriculum issues.

Grading within a competency based approach


Another approach to selection is to consider increasing the number of standards within a unit, so that grades can be awarded and thus the 'best' learners identified.[4] According to the logic of such an approach, a learner who accumulates a series of high grades will then be at an advantage for selection purposes. While this approach is intuitively attractive, not least to teachers used to giving percentages and grades, there are some major issues to consider.

First, however, it must be assumed that we are talking here about grades which - perhaps among other things - measure performance above the predetermined competency standard. While grades might well be used to show learners how far their performance is below being competent, that is not the issue here. The present discussion is about selection, and therefore about ways of selecting 'the best'.

This leads to the consideration of an important fact: many learning outcomes do not allow the valid use of more than two grades (competent, not yet competent). In particular, in most cases of competency based assessment it would be foolish to try to use grades where a totally accurate performance was required for competence, especially where health and safety are involved. The phrase 'in most cases' suggests there are some cases where this is possible; but where in fact could we use grades in such a case? The answer lies primarily in the validity and reliability of the measures selected.

For example, to some observers, speed is a commendable attribute, regardless of whether speed is in fact essential or even desirable for the learning task. A fast finisher, then, might be awarded a higher grade. For others, the (apparent) motivation and interest of the learner add a dimension which might be rewarded. In these cases, however, we are in fact relying on features which appear to lie outside the competency framework. We will return to a discussion of the use of such features below.

A second point is that what most employers and higher education selection agencies will be looking for is the 'highest' grade or combination of grades. It therefore becomes slightly less important to specify a number of grade levels, and more important to consider first a 'merit' level. At this point, no distinction is drawn between 'merit' and 'excellence'; this distinction is, however, examined in a later section of this paper.

To begin with, what valid criteria for a 'merit' standard might be used in units where there is some sort of 'ceiling' above the desired competency level? Peddie (1993b, 1993c) has suggested that for a single merit standard or grade there is a variety of ways which might be considered. First, we will consider approaches that relate more closely to the learning outcomes themselves.


There are at least four main ways in which a merit grade might be awarded that are to a greater or lesser degree dependent on the achievement of the learning outcomes, and are not 'additional' to the normal set of competency tasks:
  1. Achievement of standards required for the next framework level;
  2. Achievement at a standard well beyond the competency/ credit standard;
  3. Speed - attaining learning outcomes at a faster rate; and
  4. Consistency of performance. (Adapted from Peddie, 1993b)

Types 1 and 2: Higher standards

Each of these clearly has its advantages and its limitations. The first approach would be limited to high achievement in some, but not all of the learning outcomes. If the learner achieved all outcomes at the level required for the next level, they would obviously be credited with the higher level unit. The New Zealand Qualifications Authority's (NZQA) early rejection of this approach, on the grounds that 'partial' credit could not be awarded, seems to ignore reality. In fact, in sequential units measuring the same skills (for example, typing), at least a few learners will at times reach the competency standards at the next higher level for at least some of the learning outcomes.

Conceptually, therefore, the first two approaches are very similar. They have the strong attraction of being what we regularly think of as meritorious or excellent, and they would both seem to be appropriate for determining a small group for selection.

Nevertheless, there are some problems here, relating to measurement, motivation and a variety of other issues. The following points discuss some of the more obvious of them.

  1. Is the range of (competent) performance sufficiently large to allow for valid and reliable criteria to be set, and judgements to be made? In other words, is the ceiling high enough to allow for an extra grade to be fairly determined? This is normally easy enough to determine for individual learning outcomes. What does the assessor do, however, in cases where there is a ceiling for some learning outcomes, but not for others, and competence is judged in terms of performance on the whole unit? A practice-derived professional decision, based in part on whether this decision gives undue attention to less important aspects of the unit, appears to be the most reasonable way of handling this question.

  2. Should the merit level be set so high that it is confidently expected that only a small number of learners will reach it? Given that the answer is likely to be 'yes', can such an obviously norm based procedure be used validly in a competency based program? What effects will this have on the motivation and morale of otherwise competent learners?

    This point was discussed earlier in terms of manipulating the competency standard. The same problems noted there apply here, although one point can be added. Earlier discussion showed that no upper limit of meritorious students can be pre-specified, as good teaching and an unusually strong group may result in higher numbers than expected. But, if the merit level is set at a very high level, the result may well be that fewer learners reach the merit standard than the selection process requires. This creates the same sort of problems in selection as a system in which too many learners reach the standard. Unless the grades are then statistically manipulated, which is a violation of the competency based approach,[5] there is no valid way of determining which of the remaining candidates should be selected.

  3. What effects will setting a high standard for merit have on the teaching methods, formative assessment tasks[6] and the ways in which units are grouped in a broader curriculum package? On a similar note, what practical decisions will teachers need to make to allow better learners appropriate opportunities to achieve a merit standard but also to ensure that disproportionate amounts of class time are not taken up by focussing on these learners?

  4. What appears to be a further and purely practical point in fact has theoretical overtones. How will teachers handle students who achieve competence, but then demand further teaching before proceeding to higher level units, as they want to achieve the merit level for selection purposes? While this might seem at first sight somewhat peculiar, there are both precedents and arguments which make sense. It is by no means clear that continued work at the one level is desirable, yet students have been able to re-sit the University Entrance Bursary examination in New Zealand after passing the first time. As well, and perhaps even more importantly, the New Zealand Qualifications Framework is based in part upon a philosophy of success through retesting. The theoretical issue relates to whether candidates achieving a merit standard after several attempts are regarded as equally meritorious. It is worth adding and stressing that early fears about assessment taking up too much time in competency based approaches have recently been borne out by British experiences in the National Curriculum. The possibility of a demand for further help to reach a merit standard (which must be assessed) reinforces those fears, and also makes the point that teaching time can be just as drastically affected.

As can be seen, there are answers to these and other questions, but they are not all satisfactory, either theoretically or in terms of the practicalities of the classroom or workplace. The introduction of grades, or even a merit grade, is in many ways a straightforward matter. At the same time, it clearly does threaten to shift the focus away from a competency standard. This may then easily debase that standard in ways that many observers would see as quite counterproductive to the whole competency movement.

Type 3: Speed

The third approach listed above, that of speed, should not be dismissed lightly. In most Western societies, we do regard speed of performance as important. This is true not only in the many sports where competitor speed is judged directly (car or foot races) or indirectly (rugby or netball). We also approve and often talk about such things as the speed of service given by a car mechanic, a drycleaner, or the checkout operator at a supermarket. In more educational settings, we admire a very young learner who achieves at the highest level in a music examination, and we even tend to speak approvingly when our child is the youngest in her class at school. It is curious that the same approval is never expressed over someone being the oldest in the class, although we do approve of and applaud a learner in their seventies who successfully completes a university degree.

It is reasonable to say that many learning outcomes in a competency based program will have speed as one of the aspects to be assessed. To take some fairly obvious examples, it would be somewhat peculiar if accuracy in keyboard skills did not involve any sort of time limit on the performance. It would be equally true that an apprentice carpenter who took five minutes to hammer in each nail - however accurate the performance - would not be highly regarded on a building site.

If speed is built in as a performance measure for one or more learning outcomes, this simply takes us back to the first two approaches involving performance above the competency standard. At that point, most of the concerns expressed above about those approaches can simply be repeated. Are there, then, cases where speed could be a convenient and appropriate measure for the award of merit but where it was not an aspect of the learning outcome?

Two cases where this might be acceptable and remain valid would be the speed at which the learner completed the unit (or achieved one or more of the learning outcomes) and the speed at which the learner demonstrated competence. The former case would be acceptable because we quite normally think of those who learn the same material faster as being 'better' than their slower counterparts. It could be seen as valid because it was thus a measure of merit which is both widely accepted and which does not alter the nature of the competency required. The latter case (speed of performance) is commonly associated with expertise and would seem to be quite acceptable for that reason.

Such an approach does introduce a new dimension to the learning process. In New Zealand, at least, one of the major publicised features of the new competency based approach has been that it eliminates the notion of 'time-served' measures, and allows learners to proceed at their own pace. If speed of attaining learning outcomes were to become the criterion for merit, then this could have potentially drastic implications for 'non-merit' learners in either the classroom or a workplace learning situation.

There is a further complication. Speed of completion, either of the unit or of assessment tasks, may in part be a reflection of ability or 'merit', but in part is also a reflection of prior learning and experience. It would seem most odd to claim that a learner who asked for assessment on the basis of prior learning, and who immediately reached the credit/competency standard should be awarded a merit grade for having shown exceptional speed! Yet, if it is accepted that all learners bring with them to many classrooms very different prior learning experiences, it is difficult to see how speed alone can reflect 'merit'.

Does it matter, however, if the object is to determine which learners can profit by further and advanced forms of learning? Put another way, if one learner does finish much faster than others, is it not this result that matters? A moment's reflection will show that, while there may be some element of truth in that claim, it is clearly only part of the story. If a learner has spent some time developing basic skills in one arena, and then comes rapidly to show competence in that same set of skills as part of a competency assessment, this does not automatically mean that such a learner will perform rapidly or successfully in tasks involving more advanced skills.

Type 4: Consistency

The final suggestion above, the consistent performer, is an approach to merit which may be more attractive to some non-European cultures than to others, particularly when it is linked to the positive attitude of the learner (see below). This has been flatly rejected by NZQA on the grounds that merely attaining the competency standard in a consistent manner has little to do with everyday notions of merit or excellence. Given that such students' results will be no better or worse than those of others who might have required more than one attempt to reach some standards, it is clearly very unlikely to gain support from any higher education selection board.

Merit based on additional features


The following additional approaches go beyond the learning outcomes, taking into account related features considered desirable in terms of a generally accepted notion of merit. They assume that it is acceptable to judge merit on the basis of additional features of learner performance, but still remain broadly competency based:
  1. Transfer of skills to new situations;
  2. Achievement of additional learning outcomes;
  3. Originality, creativity, 'flair'; and
  4. Outstanding attitudes, approach to learning, motivation. (Adapted from Peddie, 1993b)

Type 5: Transfer

The fifth type, the ability to transfer skills, is clearly a desirable quality in a learner and one valued by employers. The extent to which we might think of this as 'merit' is nevertheless unclear. If learners showed themselves capable of transferring only one skill to one new learning context, we might be especially hesitant to label this as worthy of a merit grade. More importantly, how many new contexts would we require to be clear that the particular learner deserved a merit grade? While it is clear that the teacher would have to provide an answer, it is not at all clear on what theoretical or even practical basis this answer might be given. In other words, there appear once again to be problems of validity about which fairly subjective judgements would have to be made.

There are other practical and theoretical issues to consider here. What would we do when one learner demonstrated transfer of one skill to a variety of contexts while another learner demonstrated transfer of several skills into one new context? Would both learners be considered equally meritorious? It may be that the answer to such a question can reasonably be determined on the basis of the goals of a program, or the eventual needs of an industry sector, but the validity of this determination needs to be made explicit.

Type 6: Additional Outcomes

Similar, but yet further issues appear when the next approach is considered, that of achieving additional outcomes.[7] First, if these additional learning outcomes are considered important enough for the award of merit to be based on them, why are they not part of the set required for competence? Next, even if we resolve that question, how many additional learning outcomes would we need to specify? Would this number depend on the number of learning outcomes the learner had to attain for competence? What effect would knowing about these 'additional' outcomes have on both the teaching and the learning processes in the classroom or workplace? How and when would learners decide to attempt these additional outcomes?

While there are several significant practical concerns, the most important of these questions appears to be the first. It does seem odd to say that a particular set of learning outcomes is required to be achieved for a learner to be declared 'competent' but that something different is required for the learner to be selected as 'excellent'. There would seem to be a much stronger conceptual basis for considering more closely an approach in which the 'normal' learning outcomes themselves were the focus of any measure.

A further point here is the possible link to speed of learning (or of performance). If a learner is expected to achieve additional learning outcomes in the same time as learners who achieve only what is specified for competence, then the earlier discussion on speed as a criterion needs to be revisited. If the time frame is not specified or important, then a new factor arises: to what extent would we feel comfortable with the notion that a learner might simply keep on going until, well after other learners have passed on to a new higher level unit, this plodder finally achieves the additional learning outcomes and is awarded a 'merit' grade?

Type 7: Creativity

An approach based on originality, flair, or some other form of what we would normally regard as creativity, can focus on the learning outcomes. Indeed, in many forms of problem solving we would probably consider the person who consistently produced creative and innovative solutions as a valuable member of the team. It is not, however, a straightforward shift to argue that this constitutes merit or excellence in a more general sense or, especially, as a basis for further and more advanced learning.

First, the most creative people are not necessarily the highest achievers in a more conventional sense. A creative learner may indeed produce an innovative response or performance, but not one in which high levels are achieved in the more practical aspects of that performance. There are of course learning domains like the arts where creativity is valued as a central part of a learning outcome, but these are relatively limited.

Nevertheless, it should not be forgotten that higher education institutions typically applaud learners who provide 'creative' analyses to areas as diverse, for example, as literature, mathematics and sociology. The same, by extension, can be said for many performances in engineering, architecture and some branches of science, but arguably less so for medicine, law, languages and some other aspects of the sciences.

Yet is this 'creativity of solutions' the same as creativity in a work of art? In one sense it is, as it involves the understanding of the parameters of a situation and extending them in some novel way. In another sense, there would seem to be a clear difference between a self-initiated form of creativity and one in which a learner responds to an issue raised by a teacher, or even writes an essay which is genuinely 'creative' (or 'innovative'), but has very little to do with the question asked - a situation many teachers will recognise immediately!

More importantly, even if there were to be agreement that creativity should be counted as a measure of merit or excellence, the problems of reliability of measurement are both serious and again well known. While there are ways around this, they tend to involve several judges and a range of performances.[8] This can sometimes lead to severe practical difficulties.

Finally, there are cases where creativity is neither expected nor welcomed at the early stages of a learning process. The assessing of learners in fields like automotive engineering, carpentry, computer technology and many other vocational areas is not concerned in the early stages with flair or creativity. This does not seem therefore to be an option which could have widespread application. Yet in the arts, hairdressing, fashion technology, interior design and in many other fields, if valid criteria for creativity can be determined for a competency level, then a merit level may well equally be possible.

Type 8: The 'good learner'

The eighth approach, the 'qualities of the learner' approach, has been dealt with somewhat incidentally but also quite adequately earlier in this paper. It is clear that the notion of a good, consistent learner being declared worthy of a merit standard when their achievement was in no way better than that of any other student would not be acceptable in a Western-dominated system. It should be noted again that discussions with some Pacific Islands colleagues in New Zealand suggest that such a view is by no means universally held.

A special case of considering the good learner does occur currently in educational selection. There are several prestigious scholarships and awards (including the Rhodes Scholarship) where personal qualities are part of the selection criteria. While this is true, these appear to be limited to cases where selection needs to be made among those whose scholastic excellence is well-established. There do not appear to be cases where personal qualities alone count as the criteria for selection.[9]


Type 9: Ranking

This leaves a final, and for some a most obvious approach, selection by ranking. It should nevertheless be clear by now that the issue is not just whether such an approach does violence to the notion of competency based assessment - the view taken by NZQA. It is also the case that the 'top' group can be selected only if the ranking system used is one which validly and reliably places the candidates in order in terms of the criteria considered desirable. Thus, all the warnings, problems and issues relating to a number of the alternative approaches discussed above apply equally to selection based on ranking.

First, can ranking be used in a competency based assessment approach? The short answer is 'possibly'. A system of multiple grades can be used in which one grade point is properly specified as the competency level, provided no scaling is applied to ensure only certain numbers of learners reach this level. In one sense the positive answer is still debatable as it does depend on the definition of 'competency based'. The approach taken here is that the critical characteristic distinguishing a norm based approach from a competency based approach is that in the former approach ranking and a suitable spread of scores are the prime and conscious goals while in the latter approach they are not.

In a norm based approach it is also common to use statistics to scale the results to ensure an appropriate spread of marks. This is at odds with a competency based approach. There, the prime focus is helping learners reach one standard which has been specified as the competency level. Learners may then be quite confident that, if they personally reach that standard, they will be declared competent and credited with that unit.[10]

In a norm based system, then, the prime focus of the assessment (but not necessarily the teaching), is to spread the students as widely as possible so that a reliable rank order can be determined. This is done either by choosing an appropriate difficulty level for the assessment, or by statistical manipulation of the marks gained, or both. There is therefore no certainty about outcomes on the basis of performance for the individual learner. While the 'very good' learner will still normally gain a very good result, the specific rank and mark awarded may well be determined not only on their individual performance but also on the performances of all other learners attempting the unit or tasks, and especially the number and quality of other 'very good' learners.[11]

It should nevertheless be recognised that ranking systems can incorporate full feedback and profiles of learner strengths and weaknesses. It is also true that the 'pass' level can as easily be set in terms of competence as it might be in terms of experimentally determined norms. This is because the setting of our competency levels almost always relates to our underlying expectations and experience; in other words, it is related to more informally determined norms (Peddie, 1993b). Similarly, it should not be thought that in competency based systems there cannot be attention paid to relative degrees of success in a class group.

It does seem fair to suggest that in a competency based program the focus is on helping all learners to reach the competency standard. In a ranking system, the focus is, or certainly should be, on helping each individual learner to do their best. Assessments are then typically set in such a way as to allow all learners to perform at optimum levels. In a mixed-ability group this almost necessarily means offering a system where a number of grade or percentage points is used as a convenient summary of learner achievement.[12] In New Zealand, at least, it has also tended to be the case that when a pass/fail level is used this base has sometimes been set at or near the mid-point of these grades or percentages. In a competency based approach, these practices are totally unnecessary.

It should be noted in any case that the provision of a mid-point pass/fail mark in either a ranking or any norm based system is perhaps only defensible when selection is required on the basis of a fairly coarse measure and in terms of some notional standard. Thus, a 'pass' in the Year Ten School Certificate examinations in New Zealand used to mean that a learner was, broadly speaking, in the upper half of the year group, and had achieved at a more or less identifiable level in the subjects passed. This was achieved by scaling marks to ensure a normal distribution.[13] There would seem, however, to be no theoretical justification for setting a pass/fail point in a norm based or ranking system unless selection needs to be made in these (largely predetermined) terms.
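The scaling practice described above can be illustrated with a minimal sketch, assuming hypothetical raw marks and a simple linear transform to a target mean and spread:

```python
# Sketch of norm based scaling: raw marks are linearly transformed to a
# target mean and spread, and a pass mark is set at the mid-point of 50.
# The raw marks and scaling targets here are hypothetical.
from statistics import mean, pstdev

def scale(raw, target_mean=50.0, target_sd=15.0):
    m, s = mean(raw), pstdev(raw)
    return [target_mean + target_sd * (x - m) / s for x in raw]

raw = [78, 74, 71, 69, 66, 64, 61, 58, 55, 52]  # a strong class: all above 50
scaled = scale(raw)
passed = [x for x in scaled if x >= 50]

# Although every raw mark is above 50, scaling pushes half of this
# class below the pass mark.
print(len(passed), len(raw))
```

Even though every learner in this hypothetical class scores well above the notional pass mark, the transform forces half of them below it: exactly the mechanism behind the scaled 'failures' described in the footnotes.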

While this is by no means new, it is worth stressing in any discussion of selection. Selection based on merit requires some form of ranking. The key issue then becomes the extent to which we can validly and reliably say that the lowest ranked learner selected is 'better' than the highest who is not. As it may be assumed that not all of those who achieve at the competency level will be selected (if they were, a paper like this would be unnecessary), the theoretical - and the practical - focus must be squarely on the 'cut-off' point. To add to the problem here, the cut-off point will presumably vary on different occasions and for different purposes, as the numbers being selected from successful learners who have completed a competency based program may themselves be affected by the numbers applying on a different basis.

To sum up this section, the following conclusions can be drawn.

  1. Ranking in a competency based assessment program cannot be seen as valid when the intention is to separate out each learner from every other learner. No matter how many grade points or standards can validly be determined, a competency based approach must allow two or several learners to be recognised as achieving at the identical standard.

  2. When there are a number of valid and reliable grade points and standards, and a group of learners do achieve at different points on such a scale, it is perfectly valid to assign a rank order to the (groups of) learners, and select for additional learning on the basis of these ranks. Such an 'accidental' result, however, cannot be built into a competency based program.

  3. It is perfectly valid in a competency based program to assign a rank to each valid and reliably determined grade point, and for another teacher or institution to determine which grade point is necessary or desirable for selection purposes. Once again, it must be stressed that the numbers cannot be guaranteed, as the cut-off for the number of places may well fall in the 'middle' of an identically achieving group.

When all of these points are considered, it is clear that a ranking system designed to select either a fixed or a variable number (externally determined) of learners for further education cannot validly or reliably be used within a competency based program. This is not an abstract or 'theoretical' judgement, but one which is solidly based on the contradictions built into different forms of assessment and assessment procedures. The use of ranking tests outside such a program will be discussed a little later in this paper.
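The third point above — a cut-off for places falling in the 'middle' of an identically achieving group — can be sketched as follows; the learners and grades are purely hypothetical:

```python
# Sketch of ranking by grade point in a competency based program: grade
# points can validly be ordered, but the number of places may cut
# through a group achieving at the identical standard.
grades = {"A": "merit", "B": "merit", "C": "competent",
          "D": "competent", "E": "competent", "F": "not yet competent"}
order = {"merit": 0, "competent": 1, "not yet competent": 2}

# sorted() is stable, so learners at the same grade point keep their
# original (arbitrary) order -- there is no valid basis for separating them.
ranked = sorted(grades, key=lambda name: order[grades[name]])
places = 4
selected = ranked[:places]

# The cut-off falls inside the 'competent' group: E is excluded even
# though C, D and E all achieved at the identical standard.
boundary = grades[ranked[places - 1]]
tied = [name for name in grades if grades[name] == boundary]
print(selected, tied)
```

The sketch makes the contradiction concrete: the ordering of C, D and E within their grade is arbitrary, so any fixed number of places that falls inside that group produces a selection the assessment itself cannot justify.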


Merit and excellence

It was noted much earlier in this paper that the discussion of a merit grade would be presented, setting aside the issue of any perceived difference in the notions of merit and excellence. Before considering selection tests as additional to a competency based program, this distinction does need to be drawn. It becomes clear when discussing the uses of ranking that there may well be a need to distinguish both in theory and in practice those learners who show 'merit', and are therefore 'very good' in terms of whatever criteria are deemed valid, and those learners who are 'excellent'.

Excellence carries with it not only a notion of being outstanding, but also the notion of a certain exclusivity (Peddie, 1993a). Thus, while it probably does not violate our notions of appropriate assessment policy to say that the aim of a program is to have the whole group of learners perform at a merit standard, there is arguably something slightly odd about saying that as a matter of course all learners could be (equally) 'excellent' in performing a particular skill.[14] In other words, it can be argued that learners are 'excellent' in comparison with another larger group of learners who are not excellent. While therefore it makes sense to suggest that a merit standard could be the highest of an equal division of grades or levels, it is likely that we would want a standard of excellence to be somewhat 'smaller' than other grades.[15]

In some contexts, this distinction may simply be noted; in others, and where selection is the focus, the difference is critical. If a high standard is required, but there are no very strict limits on numbers, a merit grade might be quite adequate as a basis for selection. If places are limited and entry is competitive, then 'excellence' may be the major focus. In that case, the intention of the selection process may well be the elimination of even the very good learners, and the selection grade(s) will be deliberately set so that only the few learners required are expected to attain the standard(s). If the distinctions drawn here are accepted, it becomes clear that setting a standard for 'excellence' appears to be conceptually at odds with competency based teaching and assessment, even if strict ranking is not utilised (Peddie, 1993a). This leads to consideration of separate assessment for selecting those learners who are 'excellent'.


Selection tests outside the program

If then selection of the meritorious appears fraught with difficulties and selection of the excellent is at odds with a competency based program, is the answer to conduct special extra assessments simply for selection purposes? This approach does have some obvious advantages but some of the problems discussed earlier do reappear. The apparent advantages of a separate approach to selection can be listed in a fairly straightforward way:
  1. Selection tests can be held after and outside the normal competency based program, avoiding disruption to teaching.

  2. Such tests can be offered only to learners who are seeking selection, minimising the time needed for the extra assessing involved.

  3. The tests can be norm based, with ranking used to discriminate among learners. Furthermore, because the tests are aiming to discriminate amongst mainly better learners, there is no need to set a test which will widely spread all learners.[16]

  4. If the purpose is selection for further learning, the extra test can validly be both achievement based and prognostic.

These apparent advantages, however, need further examination. First, the fact that a selection test is held outside the normal program clearly does not mean that it is independent of that program. No teacher concerned for the learners would ignore the importance of a selection test held, in effect, on the basis of what was taught in the competency based program. History shows us that assessments from which no obvious selection will be made are always regarded as much less important than those from which it will. Although there is the clear benefit of achieving competence, if selection is seen by learners and teachers as the major goal, this will inevitably affect the teaching and learning involved.

The notion that such a test would not 'disrupt' normal teaching, then, is true only in terms of the assessing time itself. Furthermore, those learners who (in theory) are not seeking selection would be affected by the very fact that most teachers would 'teach to the test'. It is likely that they, too, would aim for selection, particularly if the stakes were perceived as high. Given the likelihood of a test which attempts to discriminate only among better learners, this could have significant effects on the morale of otherwise 'competent' students.

The use of a norm based and/or ranking test would also tend to lead to the kind of competition in the learning situation that protagonists of competency based programs are seeking to avoid. Also, the presence of a test in which it was known that features or approaches could be included which were outside the normal competency based program would tend to lead to these being included by the conscientious teacher.

If it is decided that such a test should be held, how can these and other problems be minimised? First, a selection test should focus mainly on the normal content of the competency based program. Second, if a prognostic aspect is believed desirable, this should be tested by assessing the ability of learners to adapt their skills to new contexts or situations, even if the program for which selection is being made does involve other sorts of skills.[17] Third, the test should be norm based across the whole group of learners, with an effort to ensure that any learner already assessed as competent will score at least at the half-way point.

These proposals strongly suggest that it should be the institution or organisation in which the competency based program is running which should prepare and run the selection test, with the obvious addition of consultation with the selecting body to ensure the test is acceptable for their purposes. It is far less likely that the latter would prepare a test which would avoid the difficulties raised earlier in this section.


Summary of issues

How then should learners be validly and reliably selected after achieving a competency based qualification? A too-rapid perusal of the preceding discussion might well lead to the conclusion either that there are no answers, or that answers simply need to be ad hoc and based on sensible judgements about possible and realistic practice. This is not the case. What the preceding discussion does show may be summarised as follows:
  1. There are no simple universal solutions to the issue of selection following a competency based program. Not only is this true in terms of assessment theory but it is true also in terms of practical considerations and cultural (and quite probably other) perceptions and beliefs.

  2. Equally important as a starting point, there are no universal approaches to the determination or insertion of merit standards into units of learning in a competency based program. This does not mean that the use of such standards should necessarily be avoided for selection purposes.

    In many spheres of life, it is surprisingly common to select for a single outcome learners who show different but basically equally valued characteristics. Sports teams do not select players for only one position; outstanding scholars can have superb research or writing or other skills; companies may well select manager trainees who show a variety of strengths which may collectively be useful to the company. Thus, it is arguably both valid and acceptable to have more than one approach to merit or excellence in a program or qualification and to use all such approaches for selection purposes. The key point here is that those selecting should be completely aware of the different bases for merit used in the program(s) from which learners are being selected.

  3. The decisions over how to select certain learners over others should be made on the basis of demonstrably valid and reliable criteria, and not on the basis of abstract arguments over differences in assessment theory.

  4. There are many ways in which selection can be made quite effectively in practice but most of these are demonstrably invalid, or unreliable, or simply unfair (or any combination of these).

  5. For a valid and reliable grade system to aid selection, there is a clear distinction to be drawn in cases where the learning outcomes of a unit have an identifiable 'ceiling' above the competency standard, and those units where this is not the case. Furthermore, it is not true that all units (or learning outcomes) with a readily identifiable ceiling allow a similarly ready identification of a standard of merit or excellence.

  6. There are important distinctions to be drawn when selection is aimed at identifying those learners who achieve at a higher standard than competent, and where the aim is to identify a small top ranking group (because numbers of entry places are limited).

  7. There is a serious risk of confounding the award of merit or other higher standards and the issue of speed of completion. This occurs when merit is awarded on the basis of learners doing 'extra' assessment in a 'normal' time frame.

  8. The use of a norm based or ranking test 'outside' the competency based assessment program does not eliminate the problems which are found when selection is attempted within such a program, although there are ways of minimising some of these problems.

    It must be emphasised that all of these points need to be taken into consideration when a decision on selection policies is made for learners who are completing qualifications awarded on the basis of standards or competency based assessment.


Conclusions and recommendations

The summary of issues leads directly to the following conclusions and recommendations. These are inevitably somewhat complex but are aimed at being workable for a wide variety of programs and groups. In the following question-answer series, all the questions need to be answered before a clear decision on selection procedures can be identified.
  1. Do all learning outcomes in all units have a clearly identifiable 'ceiling' above the standard set for competence, a ceiling in which a further, higher standard can validly and reliably be set?

    1. If the answer is 'yes', then it is recommended that a merit grade be set for every unit and that selection be made on the basis of the number of merit standards achieved by individual learners. Where the number of units is severely limited, and selection is not practical because inadequate discrimination would be achieved, a different option must be pursued.

      This approach is clearly the 'purest' approach to selection within a competency based framework. It accepts that the best way to select learners is in terms of their actual achievement in identical or highly comparable situations. It does tend to ignore the effects of entry skills and abilities, but it does focus on actual abilities in terms of outcomes, a key point in competency and standards based approaches.

      The warnings about setting the standard too high, the effects on teaching methods and on other learners and the risk that a larger than required group of well taught and well motivated learners might reach the higher standard are all points to take into consideration.

    2. In cases where it is not true that all learning outcomes have an identifiable ceiling, then a different scenario is advised. Generally speaking, it is suggested that no more than two distinct ways of ascribing merit should be used.[19] The approach, however, should depend on responses to the questions which follow.

  2. Are all units compulsory or are there core and optional units?

    The assumption is made here that for a recognised qualification and/or curriculum there will never be complete freedom to choose among totally optional units. On the other hand, it is clear that for some courses of study all units will be compulsory. In the variety of cases that may occur, it is strongly recommended that the following rules be followed. This is advised regardless of the method used to determine merit.

    1. Either all compulsory units should have a merit standard included or none should.

    2. Either all optional units should have a merit standard included or none should.

    3. The larger the number of compulsory units the more useful and reliable it is to have, where possible, a merit standard for all such units. The smaller the number of compulsory units, the better it is to have an all or none policy for both compulsory and optional units.

      Using merit in a situation where there is only a small group of compulsory units could have serious implications in terms of errors of measurement and consequent level of reliability. To take a very simple example, a learner who just fails to reach a merit standard in two or three compulsory units, but achieves at a very high level in a number of optional units where no merit standard is available, is likely to be disadvantaged by the system rather than by skill level in the program as a whole.

    4. Where programs comprise compulsory and optional units, and only the optional units have an identifiable ceiling, merit standards should generally not be used.

      In such cases, it is inevitable that the core units will require an approach to merit which is at best simply different from that used in the optional units and at worst often less valid. In the reverse case, where only compulsory units have an identifiable ceiling, the use of merit standards for these units is highly recommended.

  3. Is it practicable and possible to assess learners after the program has been completed and before selection will be made?

    1. If the answer to this question is 'yes', then there appear to be two main options. One is to assess all those who have successfully completed the program and who wish to be selected, using a pre-announced and competency based assessment format which aims to distinguish those who can best apply the whole of their learning to new situations and contexts. It may be necessary to limit this to simulations or, in extreme cases, to written or oral assessments of what the learners would do. An important feature of such a test would be to assess the ability of learners both to integrate their learning and to utilise specific skills in which they had been deemed competent. If speed of performance was valued by the selecting body, this could be one of the criteria used for the competency standard.

      What is being proposed may sound like a ranking test, but this is not the case. The competency level for this extra test could be set at quite a high standard, and it would simply have to be accepted by both learners and the selecting body that the numbers of successful learners may well not match the places available. The goal in setting the standard might well be to err on the side of having too many successful learners. At least, then, a meritorious group would have been identified. Whatever its faults, a ballot could then be used when necessary as the fairest way to select among this group.

      The second option is to run a norm referenced ranking test along the lines discussed earlier. This has the advantage of probably making selection easier, but the first option is arguably more valid.

    2. Sometimes it is not practicable to run such an additional test because, for example, selection is made before the competency based program is due to be completed. In that case selection can be made on the basis of merit grades, where these are already available (see above). Where merit grades are not available, there seems to be no obvious way at all of making a valid selection, and a ballot is again (a little reluctantly) recommended.

Should such a ballot, however, be restricted to those whom the competency based program teachers believe have demonstrated work habits and personal qualities which suggest they will benefit most from further study? While this is intuitively attractive, it does relate to a means of selection already rejected as not being part of a Western ideal of merit. It would seem odd, therefore, to use such criteria as part of a 'pre-selection' for a ballot.
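The recommended combination — selection by the number of merit standards achieved, falling back on a ballot when the cut-off lands inside a tied group — might be sketched as follows; the names and merit counts are purely hypothetical:

```python
# Sketch of the recommended approach: rank learners by the number of
# merit standards achieved, then use a ballot to break any tie at the
# cut-off. All names and merit counts here are hypothetical.
import random

merits = {"A": 5, "B": 4, "C": 4, "D": 4, "E": 2}  # merit standards achieved
places = 3

selected = []
for count in sorted(set(merits.values()), reverse=True):
    group = [name for name, m in merits.items() if m == count]
    if len(selected) + len(group) <= places:
        selected.extend(group)            # the whole group fits: select all
    else:
        random.shuffle(group)             # ballot among the tied group
        selected.extend(group[:places - len(selected)])
        break
print(sorted(selected))
```

Here learner A is selected outright, but only two of the three learners tied on four merit standards can be taken, so the remaining places are filled by ballot rather than by any pretence of a finer ranking.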

Finally, an important consequence of all these considerations is the need for considerable discussion over selection procedures among all parties involved. Whatever solutions are initially adopted should be continually monitored for effectiveness and equity. Only in this way will improvements be made in an area where an examination of both theory and practice reveals that selection is always going to be an issue of complex debate.


References

McGaw, B. (1993). Assessment issues. (Paper delivered at the 'Testing Times' Conference, Sydney, 1-3 November 1993). Sydney: NCVER/TAFE NSW.

New Zealand Qualifications Authority. (1993). Briefing papers for the incoming government. Wellington: New Zealand Qualifications Authority.

Peddie, R. A. (1992). Beyond the norm? An introduction to standards-based assessment. Wellington: New Zealand Qualifications Authority.

Peddie, R. A. (1993a). Achieving excellence: A second report on merit in competency-based assessment (Report to the New Zealand Qualifications Authority). Auckland: The University of Auckland.

Peddie, R. A. (1993b). Standards, levels, grades and merit: A critical analysis (Proceedings of the National Assessment Research Forum conducted by the Competency-Based Working Party of the Vocational Education, Employment and Training Advisory Committee, Sydney, 1-2 April 1993).

Peddie, R. A. (1993c). Standards of excellence: The use of merit awards in competency-based assessment (Report to the New Zealand Qualifications Authority). Wellington: New Zealand Qualifications Authority.


The original version of this article was commissioned as part of a project undertaken by Graham Maxwell with funding by the Queensland Department of Employment, Vocational Education and Industrial Relations and appeared in the final report Getting Them In: Final Report of the Review of Selection Procedures for TAFE Associate Diplomas and Diplomas in Queensland. Copyright has been released by the funding agency to allow publication in this form.


Footnotes

  1. In New Zealand, the term 'Unit Standard' is used to refer to the information about a unit of learning held by the New Zealand Qualifications Authority. The Unit Standard comprises the learning elements (outcomes), assessment information, range statements, and information about the level and credit rating of the unit. As in Australia, these Unit Standards can be combined in a variety of ways by the teacher.

  2. This bald statement conceals a number of serious issues. Even if a performance can be measured or assessed reasonably objectively, there are a number of factors which may influence the perceptions and the judgement of the assessor, thus regularly making the assessment process anything but 'objective'.

  3. This point is often overlooked. What is the difference between a competent novice and an expert performance? To what extent is it a matter of speed and error-free performance, and to what extent is the expert 'better'?

  4. Much of the discussion which follows is drawn from Peddie (1993a, 1993b, 1993c). Readers are referred to these publications for further and more general discussion.

  5. The issue of ranking and scaling is discussed later in this paper.

  6. A formative assessment is one which is used to give feedback to the learner and is not used for the final judgements of competence. Here, for example, this feedback would need to be sufficiently discriminating to give better learners the information they needed to be able to try subsequently to surpass the competency standard aimed for by the rest of the class.

  7. It should be noted first that specifying additional learning is not a new approach. Some contract systems of learning dating back thirty years have specified what has to be done for a pass, and what extra things are required for a higher grade.

  8. Examinations in practical art in New Zealand have used such systems with a reportedly high level of reliability. The system involves the assessing of a portfolio of work, and a range of works being then considered by a panel of qualified assessors.

  9. The importance of equity in selection policies has already been stressed. In one sense, equity concerns are an example of 'personal qualities' but not in the sense being discussed here.

  10. It is accepted that some will disagree with this characterisation but the arguments in the text are not in fact wholly dependent on such a definition.

  11. In one absurd and infamous historical case in New Zealand, a single very bright class taking an unusual language in a national examination had their marks scaled so that half of them 'failed'.

  12. In other words, where there are a number of learners, all with different abilities, a record of optimal performances would theoretically need to have at least the same number of different levels or grades. The decision to have, say, five, ten or a hundred levels might well be made on other grounds.

  13. In fact, this is an over-simplification of policies in recent years but changes in scaling procedures still broadly reflected this approach.

  14. This point is reinforced by the outcry a few years ago over a New Zealand University course in which all students were awarded A+ grades. The lecturer successfully demonstrated that all students had met demanding contract requirements but the issue was front-page news for days.

  15. Intriguingly enough, we tend to operate in the opposite fashion at the bottom end of a multi-grade scale. It is not uncommon for an E grade to represent percentage marks up to about 30 or more, and to be quite undifferentiated (that is, there are seldom E+ or E- grades). It would be rare for A grades to be undifferentiated, and rarer for the very top grade to range over 30 percentage points.

  16. Thus, a test in which average learners and less able learners alike scored very low would be quite in order and the test could focus on finer discriminations at the upper end of the performance scale.

  17. Clearly there is a choice here. The line taken reflects the priority placed on preserving intact the major features of a competency-based approach.

  18. This does not exclude the possibility that other learners may be looking for selection with different qualifications. How the results of the process described here will be ranked by the selecting body against other qualifications is a separate issue.

  19. This suggestion is not theory-based. It is simply accepted that different approaches to merit have different conceptual bases, and that having two in any program is difficult to defend, although this may provide a realistic and practical solution in some cases.

Author details: Roger A. Peddie is Associate Professor and Director of the Centre for Continuing Education, The University of Auckland, New Zealand. After working as a secondary school teacher and as a teachers' college lecturer, he was appointed in 1978 to the Education Department of the University of Auckland. He took up his present position in 1992. He has undertaken extensive research for the New Zealand Qualifications Authority on standards based assessment and has recently co-authored (with Bryan Tuck) a book on standards based assessment, Setting the Standards.

Please cite as: Peddie, R. A. (1997). Some issues in using competency based assessment in selection decisions. Queensland Journal of Educational Research, 13(3), 16-45. http://education.curtin.edu.au/iier/qjer/qjer13/peddie.html
