[ Contents Vol 13, 1997 ] [ QJER Home ]

Possible options for differentiation in competency based assessment

John Wilmut and Henry G. Macintosh
Education Consultants, England
In its most basic form competency based assessment is a pass versus not pass (or can do versus cannot do) system which provides minimal differentiation amongst students. The aim of this paper is to explore approaches which would facilitate differentiation amongst students. These include approaches to differentiation which are additional to (and hence outside) the competency based assessment system, as well as those which form an integral part of it. Within the discussion we have examined the various options on grounds of cost effectiveness, manageability, rigour, validity and capacity for consistent application.

There are basically, only three ways in which an individual's performance can be described: in relation to other people, in relation to specified criteria, and in relation to his or her own previous performance. In the jargon, we call these norm, criterion and self (or ipsative) referencing. All three have their uses, depending upon the purposes to which the results of assessment are being put, and all three affect the methods used for assessment, the ways in which performance is reported and recorded, and attitudes and practices in relation to teaching and learning.

It is widely understood that competence is a deep structure which cannot be directly observed. We can observe performance, but this may or may not provide an accurate insight into competence, an issue which is discussed in some detail by Wood and Power (1987). They attempt to unravel some of the complexities and levels of competence, and suggest that its development as a coherent structure in a particular area may take a number of years; this view is also taken by Gonczi (1993) and Hager, Gonczi and Athanasou (1994). For this reason alone performances may be inconsistent, even assuming that the tasks which have been chosen to elicit the competence are entirely appropriate.

Gonczi (1994) describes three conceptions of the nature of the competence which is inferred. The first of these conceptions is task-based, where competence is seen in terms of the satisfactory completion of a large number of discrete, small-scale tasks but with no exploration of the connections between them. The second concentrates on the general attributes which are required of a practitioner, dealing with some underlying domains such as knowledge or critical-thinking ability. The third links the two previous conceptions by setting the performance of particular tasks into a context of general attributes. We assume that the competency based system as implemented in the Australian vocational education and training (VET) system is concerned with this third conception; this appears to be the only way in which competency based assessment can remain anchored to the completion of vocational or occupation-specific tasks whilst meeting criticisms of the 'atomisation' of learning and performance.

This is especially important in the context of the particular issues with which we are concerned in this paper. Gonczi (1993) suggests that, in relation to entry into educational institutions, assessment within vocational education and training should relate to the curriculum aims and not solely to the occupation-specific competency standards. Vocational courses, moreover, should incorporate ways of developing and assessing generic competencies, perhaps in occupation-specific contexts.

Such an approach would also interact directly with the mastery of processes which underpin competencies. Although it is frequently the case that the outcomes statements in a competency based scheme cannot be met without due attention to process, it is important that the generalisable nature of the processes is not lost in learning which is directed towards doing specific tasks. If this happens we narrow the focus of education to those things most readily observed (McGaw, 1993).

It is also a desirable feature of the third of Gonczi's three conceptions that it readily supports the collection of evidence which supplements that from the performance of specified tasks. If this occurs we can be more certain about inferring underlying competence. The issue has arisen particularly sharply in relation to what is referred to in the British National Vocational Qualifications as underpinning knowledge. This is described in the unit specifications for the award, but it is not necessarily made explicit through normal occupational activities (or would require an unacceptably large number of parallel activities to be undertaken in order to demonstrate all aspects of required knowledge). Supplementary evidence, collected from oral questioning, written tests, written assignments and the like, has therefore become commonplace, and is often complemented by evidence from prior achievements (Black & Wolf, 1990). There is no reason why such supplementary evidence should be confined to matters of knowledge.

Under a competency based system, assessors make judgements, which are based on evidence gathered from performances, about whether or not an individual meets particular criteria. It therefore relates to criterion referencing which, when operated at its simplest level, will lead to outcomes of pass versus not pass or can do versus cannot do; here we would be dealing with a concept of mastery. Under such narrow conditions outcomes may not readily be described in terms of grades or degrees of mastery (Nuttall, 1984; Peddie, 1992a), and as such do not lend themselves to the differentiation needed in a selection process in a competitive environment; this is the dilemma at the heart of the current discussion.

However, if we are actually dealing with more general attributes, and can accept a more continuous view of competence, we should be able to organise assessment so as to observe degrees of performance, and would have much more scope for differentiated outcomes from the assessment. Power (1986), writing about Year 12 students in Australia, comments:

In none of the domains at Year 12 level is competence an all-or-nothing affair. In each case, competence is continuous, acquired gradually rather than by crossing a threshold. There are no naturally occurring discontinuities in knowledge and skill which enable us to draw a pass-not pass line and divide papers into A-B-C-D-E-F.
Moreover, we may also be able to determine attributes which characterise performance which goes beyond that required simply to pass, and which would be recognised as meriting greater recognition. In doing this we have moved beyond the elementary concept of mastery.

There is no doubt that some use of grading within competency based systems would help selectors and others; it would also satisfy the legitimate aspirations of students. In the course of providing a comprehensive discussion of grading within a competency based system, Byrne (1993) (in two separate quotations) makes the following points.

An almost universal quest in education is to maximise each student's performance in relation to his or her abilities.

A good criterion referenced grading system provides incentives for students to master the basics as well as go beyond basics to pursue excellence and creativity.

In attempts to address these issues, increased use has been made, by those involved in exploring greater differentiation within competency based assessment systems, of the term standards referenced. Peddie (1992a), in his work for the New Zealand Qualifications Authority, sees standards-based assessment as including a wide variety of assessment types of which the two main ones are competency based and achievement based assessment. He defines standards based assessment as occurring when the measurement or outcome is assessed against some description or level of achievement known as a standard; the nature of the description will vary from something that is very task-specific to some very general grade description, of the type used in some public examinations, where a diversity of performances can lead to the same grade outcome.

Withers and Batten (1990) on the other hand, in their continuum of assessment types, use the terms normative, standards referenced, criterion referenced and descriptive, and make a key distinction between the two areas of comparative and non-comparative assessment. In their descriptions, standards referenced assessment clearly falls on the comparative side, so that grading of some sort is easily allowable. Criterion-referencing, in their analysis, can fall within both the comparative and non-comparative sections. In the comparative area, student performance is generalised and converted into grades; in the non-comparative area, it is reported directly without generalised grades through descriptive statements which prohibit direct comparisons.

What is clear is that competency based assessment must be based upon clearly specified and publicly stated outcomes, all of which have to be assessed. Madaus (1992) is clear that assessments must

... be geared into well defined and articulated curricula which need to precede the assessments - not arise out of them ...
and Clarke (1993) points out that the challenge is to
... communicate assessment information in ways that adequately reflect the richness and complexity of the performances which our contemporary assessment tasks now require ...
Competency based assessment must also involve the selective use of a wide range of evidence, generated through a wide range of learning activities. Moreover, any reporting and assessment arrangements must be clearly related to those learning outcomes and reflect this range of evidence. In so doing they will need also to take account of three issues, all of which are necessary but not sufficient conditions for the supply of information to selectors through competency based assessment, so that the information will be of value to them, and can fairly, reliably and adequately differentiate between individuals. Procedures set up to improve differentiation for selection processes need to take adequate account of all of these issues.


Selection which uses the outcomes derived from a competency based system demands a degree of differentiation in the assessment in order that individuals can be placed reliably within a rank order. This process is a consequence of the limits on available resources, which results in some applicants being successful and others not. There is, of course, a view that differentiation of this sort is incompatible with the outcomes of assessments which have been made against performance criteria. This contradiction has resulted in a good deal of discussion about the legitimacy of grading within such an assessment system (Peddie, 1992b; 1993a).

There are some important distinctions to be made within this discussion. Principally, they are between the legitimacy of grading which, for example, recognises a simple differentiation into, say, pass, merit and distinction grades (as in the General National Vocational Qualification in the UK) and the suitability of some system which results in the partitioning of candidates into many more categories. While there are some strong arguments in favour of recognising particularly meritorious performance with some suitable grading category, it is more difficult to argue for a multi-grade or mark scale which would enable individuals to be placed on at least an ordinal scale, that is, a scale upon which individuals are placed in rank order of merit. To do this we would need to be able to specify many levels of competence, and there is evidence (Murphy, 1986; Cresswell, 1987; Gipps, 1992; Wolf, 1993; Wolf, Burgess, Stott & Veasey, 1994) that the reliability of assessment is compromised when many levels have to be distinguished.

If we are to arrive at an ordinal scale we shall have to do so by stealth, with the possibility that it may be unattainable if it is to operate from a competency based system.

In passing, Byrne (1993) provides a useful summary of what she calls the 'pros and cons of grading'. Although she is addressing employers' needs in selection and in the conduct of training the list is a useful summary for our purposes, and points to some of the backwash effects which grading might have within the vocational education and training (VET) system.



Selectors, of course, appear to need the ordinal scale mentioned above. In the present circumstances this might be especially true if applicants are to be selected from VET diplomas into higher education alongside candidates from other backgrounds. If those candidates can be placed in a rank order then it is tempting to demand that VET candidates should be inserted into the same order, using some suitable algorithm. If it is possible to differentiate in the one case, shouldn't we be attempting to do so in the other?

The alternative would be an algorithm which would place VET candidates onto a parallel scale, on their own, and for a quota system to operate so that selection was carried out from this and from the other scale(s) on an equitable basis. In order for this to be seen to be just there would need to be points of alignment for the parallel scales so that the cut-off between selection and non-selection was reckoned to be at an equivalent standard on each. However, it is possible that there is little difference in principle between the amalgamation of two scales and their equating in this fashion, since the amalgamation would, by its very nature, need a scale-equating algorithm.
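The parallel-scale arrangement just described can be sketched in code. The following is a minimal illustration only: the quota proportion, the candidate labels and the assumption that the two scales are already aligned at an equivalent standard are all hypothetical, not features of any actual selection scheme.

```python
# Sketch of quota-based selection from two parallel, rank-ordered scales.
# The quota share and candidate identifiers are purely illustrative.

def select(vet_ranked, other_ranked, places, vet_share=0.3):
    """Fill a fixed number of places, reserving a quota for VET candidates.

    Both input lists are assumed to be in rank order of merit, and the
    cut-off on each scale is taken to be at an equivalent standard.
    """
    vet_places = round(places * vet_share)
    return vet_ranked[:vet_places] + other_ranked[:places - vet_places]

vet = ["V1", "V2", "V3", "V4"]           # VET candidates, ranked
other = ["A1", "A2", "A3", "A4", "A5"]   # other applicants, ranked
print(select(vet, other, places=5))
```

Note that the sketch simply assumes the equivalence of standards at the cut-off; the substantive difficulty discussed in the text, that of equating the two scales in the first place, is untouched by any such mechanism.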

If we are to use either parallel or joint scales we shall need information which generates a rank order which distinguishes candidates sufficiently for selection cut-off points to be decided upon. The selection purpose will not have been served adequately if we have, for example, only three or four scale points, with a large number of candidates grouped at each point. It will be necessary to adopt a strategy which moves assessment outcomes beyond the point where a candidate gains either a pass or not pass outcome on the qualification as a whole. We can regard this as a two-grade scheme in which candidates would either be seen to have satisfied the competency requirements (passed) or not to have satisfied at least one of them (not passed).[1]

In practice the decisions about passing and not passing will usually be made on a unit-by-unit basis. Units can vary in scope; the smallest useful unit in the context of the current discussion is a single learning outcome or performance criterion which can be used as a basis for the judgement of evidence produced by a student. In practice units are usually 'a coherent and explicit set of outcomes' (Further Education Unit, 1992) organised around a coherent structure for learning (which is sometimes described as a module of learning). When the student achieves the specified outcomes, he or she gains the credits which are explicitly attached to the unit. Credit, which is the currency of the system, may be at various levels. The pass on the whole qualification is an accumulation of credit, sometimes according to rules of combination of acceptable units.
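The credit-accumulation model can be sketched as a small routine. All unit names, credit values and the combination rule below are illustrative assumptions, not taken from any actual VET scheme.

```python
# Sketch of a credit-accumulation model: each unit carries a credit
# value, and the whole qualification is passed when the accumulated
# credit from passed units satisfies a simple rule of combination.
# Unit names, credit values and thresholds are hypothetical.

UNIT_CREDITS = {"communication": 2, "workshop_practice": 4, "materials": 3}
REQUIRED_CREDIT = 8                      # assumed pass threshold
MANDATORY_UNITS = {"workshop_practice"}  # assumed rule of combination

def qualification_passed(passed_units):
    """Return True if the passed units satisfy the combination rule."""
    if not MANDATORY_UNITS <= set(passed_units):
        return False
    credit = sum(UNIT_CREDITS[u] for u in passed_units)
    return credit >= REQUIRED_CREDIT

print(qualification_passed({"communication", "workshop_practice", "materials"}))  # True
print(qualification_passed({"communication", "materials"}))  # False: mandatory unit missing
```

The point of the sketch is that the whole-qualification outcome remains a single pass versus not pass decision, however rich the unit-level information that feeds it.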

If we are to be able to create a suitable ordinal scale for the whole qualification we may be able to do so from some accumulation of a large number of individual pass-not pass decisions made at the unit level. On complex units, covering large numbers of performance criteria, it may be possible to introduce some grades such as not pass, pass, merit and distinction. This does not violate the principles of the credit model described above (Further Education Unit, 1993) but may limit opportunities for credit transfer into and out of the award (Wilson, 1993). Further, we may wish to treat units in a variety of ways, because they do not all have the same status. Some combination of these approaches is also possible.

The alternative is that we look outside the qualification either completely or partly. It is possible that information could be generated which would stand instead of the pass-not pass information on the units or on the qualification as a whole and which could be used in place of the assessment outcome or in combination with it. This information could come from some further assessment, directly from a portfolio of evidence or from a record of achievement. If such alternative information sources were to be available, they would need to bear some close relationship to the qualification itself. In some cases they could form the basis of an ordinal scale or could contribute to one generated from the competency based assessment; otherwise they would be likely to involve a completely different process of judgment for selection from that used for the ordinal scale. It is also possible that the process would then be unique to TAFE candidates. We must therefore consider not only the suitability of the information but also the consequent complexity and acceptability of the selection processes which would have to be used.

At this stage it is helpful to begin to set down the possible approaches; this is done in Table 1. Here it is evident that the available strategies are not entirely distinct, and that they merge into one another in various ways. We discuss each separately in later sections of this paper, attempting to describe a way of working in each case as well as the advantages and disadvantages of adopting the approach. The emphasis will always be on selection but we wish to include, at each stage, some consideration of the effects of the chosen process on the competency based assessment for the TAFE Diploma.

Table 1: A summary of approaches to differentiation for selection

Type of Approach (with Assessment Possibilities)

Using sources of information from outside the learning outcomes:
- Using a portfolio or record of achievement
- Use of special assessments operated separately from the qualification assessment processes

Generating grades from ungraded units:
- Using combinations of vocational units
- Using core skill units
- Adding in unit tests or assignments

Grading the units:
- Applying overall criteria to each unit
- Using unit-specific criteria
- Adding unit tests

Grading the whole qualification:
- Applying overall grading criteria related to the competency base but applied overall
- Grading on consistency or speed of performance or on overall assignment or project


Using sources of information from outside the learning outcomes

This discussion deals with the use, for selection, of information sources which are not assessments of the learning outcomes themselves nor derivations from them. If any of these sources is to be used it will need to be seen to be valid in relation to the outcomes. It will also need to be generated either within the TAFE structure or in close relation to it. Since it is probably impossible to say, in advance, which candidates may wish to participate in the selection process, the information may need to be generated for everyone, if that is not already happening as part of the program. There are several possibilities.

Using a portfolio or record of achievement

Judgements about a candidate's eligibility for higher education could be made if he or she were to present, at the point of selection, a portfolio of evidence of achievement, generated as a response to the activities undertaken in their VET diploma. Portfolios are used quite widely in competency based assessment schemes. They provide
primary evidence for assessment ... derived from projects and assignments ... to show that [students] have covered all the outcomes from each unit. Besides fulfilling an important role in providing evidence for assessment and grading, this allows people ... to examine the quality of student's work (NCVQ, 1993).
A record of achievement may provide a summary of the evidence in a portfolio, in the form of an organised set of descriptions or a profile of the student's achievements. A summary report could also be generated, perhaps as an interpretation of what is in the portfolio, or to stand in its place. This would need to come from the teacher, perhaps in dialogue with the student. There are several attractions in seeking to select students using either the portfolio or a summary of the achievements represented by the evidence in it, but there are also some logistical difficulties, some problems over selection criteria and some difficulties in relation to selection amongst non-VET applicants. The generation of a profile, in relation to some descriptions or criteria closely related to the performance criteria at unit level, appears to get round some of these difficulties. However, it hardly seems worth the effort when the criteria could themselves be used. The use of a summary report would make the whole system more manageable for selectors, but it would be difficult to ensure sufficient uniformity from college to college or teacher to teacher for selectors to be able to use it other than as secondary or supporting information.

Use of special assessments operated separately from the qualification assessment processes

We can envisage selection processes which operate separately from the VET diploma, but which draw upon what has been learned in pursuing it. These would almost certainly involve the use of a selection test; it could be a supplementary assessment taken within the diploma (but only by those who wanted to enter the selection process), or be set by, or on behalf of, the selectors (that is, outside the operation of the VET system).[2] Candidates would not necessarily have to have finished the diploma, although they would need to draw on their learning in order to do the test.

An alternative (which is actually a sort of test) would be an extended assignment or project, to be undertaken for selection purposes. This could cover a wide range of knowledge and skills relevant to the selection purpose, and be assessed on the basis of a viva, written report, or by observation.

A special selection device of this kind has some clear attractions. The difficulties relate to the effect of this additional selection mechanism on the VET diploma.


Generating grades from ungraded units

If the view is taken that each unit should continue to be assessed on a pass-not pass basis, but that differentiation ought to be generated from within the competency based assessment scheme of the Associate Diploma, the only possibility is to allow different combinations of units to lead to different grades. These units might all be vocational (that is, related to the area in which the VET diploma is being taken), or may be of a special type (such as core skill units). Further possibilities may exist if some or all units incorporate particular assessments, such as terminal tests or assignments, which are marked in a conventional way.

Using combinations of units

It is only possible to generate a useful basis for selection if there are units in various categories. A simple example would be a VET diploma made up of a number of basic units, all of which must be passed in order to gain the minimum qualification. Grades beyond the minimum could be gained by taking additional units; each could have a credit value which would enable a candidate for selection to work towards a high place on the selection scale.
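A minimal sketch of this arrangement follows; the assumption that credit beyond the basic pass threshold maps onto grade bands, together with all unit names and numbers, is purely illustrative.

```python
# Hypothetical mapping from accumulated extra credit to an overall grade.
BASIC_UNITS = {"u1", "u2", "u3"}            # assumed minimum for a pass
EXTRA_CREDIT = {"u4": 1, "u5": 2, "u6": 2}  # assumed additional units

def overall_grade(passed_units):
    """Derive an overall grade from pass-not pass unit decisions alone."""
    if not BASIC_UNITS <= set(passed_units):
        return "not pass"
    extra = sum(EXTRA_CREDIT.get(u, 0) for u in passed_units)
    if extra >= 4:
        return "distinction"
    if extra >= 2:
        return "merit"
    return "pass"

print(overall_grade({"u1", "u2", "u3"}))                    # pass
print(overall_grade({"u1", "u2", "u3", "u5"}))              # merit
print(overall_grade({"u1", "u2", "u3", "u4", "u5", "u6"}))  # distinction
```

Even in this sketch the selection scale has only a handful of points, which illustrates the limitation discussed in the text: many candidates will be grouped at each point.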

An alternative could be a set of requirements from selectors which would demand particular units to be included; thus, selection for degree courses in engineering might favour units which dealt with relevant mathematical or scientific competencies. This approach has the advantage of being fully integrated into the conduct of the diploma, but it also has drawbacks.

Using core skill units

Some units may have a special importance which may make them particularly useful in combination so as to generate a graded outcome. Core skill or core competency units may fulfil this function; indeed there may be an argument for conducting grading on these units alone. Such units would include those covering competencies in areas such as communication, number, personal skills, information technology skills, language skills and the like. Candidates for selection may be required to demonstrate core skills at a higher level than the minimum required. Once again, the advantages stem from the complete integration of this approach into the conduct of the diploma: there should be no distortion to the learning processes arising from selection. But the same problem applies as with the use of combinations of vocational units: the scale for selection will not have many points upon it, and may not thus serve its purpose very well.

Adding in unit tests or assignments

There is a requirement in the General National Vocational Qualification (GNVQ) in the UK that, where these are in place, a candidate must pass a unit test which is designed to confirm his or her coverage of the range of knowledge and understanding which is specified as part of the unit, and over which the performance criteria (or competency statements) apply. On an experimental basis there has also been some use of compulsory assignments; these have a wider scope than coverage of knowledge and understanding.

These devices, which are part of the structure of GNVQ, offer some possibilities for grading. In their simplest form they can be hurdles: gaining a pass on the test or assignment is a pre-condition of gaining a pass on the unit (which is based on a much wider spectrum of evidence). Used in another way, the mark on the unit test or assignment may form the basis for a grading scale. Once again the advantages derive from the relative simplicity of the process and its integration within the structure of the qualification, but there are also disadvantages.

A special case of this approach would be the use of core skill tests, of the type of the Queensland Core Skills Test (Pitman, 1993). If core skills are otherwise not assessed within the VET diploma, there may be some justification for this; if they are, then the tests would probably be an unwelcome diversion.


Grading the units

There are some obvious attractions to generating grades on a unit-by-unit basis. For the student, in addition to the motivation of passing a unit, and then being able to move on, there is the added incentive of a high grade, and the accumulation of these towards a good result on the whole qualification. In order to operate this system there would need to be combination rules, leading from unit grades to an overall graded result. For the selector there is the possibility of very sophisticated information, in which the selection of candidates with particular patterns of units can be followed by selection amongst them. The unit grades also form a profile which may be used as a basis for selection.

The challenge lies in generating the basis for the grading. There is a limited range of possibilities.

Applying overall criteria to each unit

If there is a basis for, say, a merit or a distinction grade it could be argued that it should be the same for each unit. The basis would have to be a set of criteria which could be used irrespective of the content of the unit; we call these generic criteria. Not to have criteria for this purpose would run the risk that each assessor would use a different basis for grade decisions.

The choice of criteria will be important. They could relate to the 'quality' of the work which has been undertaken: it is to be of a standard which goes beyond that required for a pass. The qualities could be spelled out in terms of some underlying attributes such as the quality of communication. On the other hand they could relate to some skills which are held in high regard, such as planning, organising information, analysis or evaluation. If this is to be the basis then it may be necessary to provide sets of statements which contextualise the generic skills to the content of each unit.

The considerable advantage of either or both of these approaches is that the generic basis for grading each unit is also the basis for defining the meaning of the grades on the overall qualification.

There are some difficulties in establishing this approach. First, the generic criteria have to be relevant to every unit, although not necessarily to the same degree. This may have the effect of making them very bland and imprecise, of forcing the units into a uniform pattern which is at odds with the 'natural' structure of learning for the qualification, and of making some units more demanding than they would be by virtue of their vocational content, thus making the whole qualification over-demanding. Second, the creation of statements which contextualise the generic criteria for use in each unit may not be very easy, and will certainly make the unit specifications much more cumbersome.

Using unit-specific criteria

The alternative is to devise grading criteria which are specific to each unit. There would be no overall structure of generic criteria; the grading for each unit would reflect the characteristics of that unit. This approach has a number of attractions. The difficulties again relate to the complexity of what is produced for each unit. This is not a trivial matter, since students may have considerable difficulty in getting to grips with the structure and meaning of the specification for the qualification. The advantage of a clear statement about the meaning of the overall grade disappears with the loss of the generic criteria.

Adding in unit tests

A marginal development of this approach to grading is to incorporate results from unit tests (or other mandatory assessments) into the grading scheme for each unit. The basis would be the same as that discussed earlier for ungraded units but would require some rules of combination which would generate unit grades from test results plus grades produced by either of the two methods described above.

The advantage would be an increase in the number of scale points available but there would be a more complex structure for unit grading which would be even more difficult for users to understand. It is likely that the marginal increase in apparent discrimination is offset by the problems of making the system transparent to users.
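A rule of combination of the kind described above can be sketched as follows. The thresholds, the grade labels and the particular rule chosen (the unit grade is capped by the band into which the test mark falls) are assumptions made for illustration, not a description of any actual scheme.

```python
# Sketch of a rule of combination for a graded unit: a criterion-based
# grade (pass/merit/distinction) is combined with a unit test mark.
# All thresholds and the combination rule itself are hypothetical.

GRADE_ORDER = ["not pass", "pass", "merit", "distinction"]

def unit_grade(criterion_grade, test_mark,
               pass_mark=50, merit_mark=65, distinction_mark=80):
    """Combine a criterion grade with a test mark into one unit grade."""
    if test_mark < pass_mark:
        return "not pass"        # the test acts as a hurdle
    if test_mark >= distinction_mark:
        test_band = "distinction"
    elif test_mark >= merit_mark:
        test_band = "merit"
    else:
        test_band = "pass"
    # the unit grade is the lower of the two components
    return min(criterion_grade, test_band, key=GRADE_ORDER.index)

print(unit_grade("distinction", 70))  # merit: the test band caps the grade
print(unit_grade("merit", 45))        # not pass: the test hurdle is failed
```

The sketch makes the transparency problem concrete: even this simple two-component rule requires a user to hold thresholds, bands and a capping rule in mind in order to interpret a single unit grade.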


Grading the whole qualification

If generic criteria can be developed, and then applied to each unit in turn, they can also be applied to the whole portfolio of evidence submitted by a student. These generic criteria could be of two types; these are discussed in turn below.

Applying overall grading criteria related to the competency base but applied overall

Within the General National Vocational Qualification in the UK, the current grading (into pass, merit and distinction) is based on grading criteria which are separate from the performance criteria of the units and which relate to overall attributes, of which planning, information handling, and evaluation are typical. These attributes, incidentally, relate closely to the capacity for autonomous learning, the development of which is a central purpose in GNVQ and which is also reflected in the emphasis put on the development of core skills (NCVQ, 1993).

These attributes have to be present across a substantial portion of the evidence in a candidate's portfolio, irrespective of the units from which it came, and the grading judgements can only be made when the portfolio is complete, or nearly so. The units themselves can (and do) vary in content and approach, and each candidate need only satisfy some portions of each grading criterion in each unit. It is therefore not possible to say that a particular piece of work has earned a 'merit' or 'distinction', but it is possible to say that each has some specific attributes which meet some specific aspects of one or other set of grading criteria.

Other approaches of a similar type have been proposed. Peddie (1993a) suggests that creativity or originality could be used as a basis for developing criteria, and it is not difficult to imagine a range of constructs which could be used in this way. These could include criteria which are based on the scope of underpinning knowledge and understanding displayed in the portfolio, or criteria which deal with aspects of the 'quality' of work which has been completed. In all of these cases the criteria are, necessarily, couched in context-free language. As elsewhere in this discussion, the criteria will need to reflect competencies which are appropriate to university courses.

This approach has the advantages that the unit content and competency statements are not compromised, and that high grades are not awarded simply on the basis of quantity of work or on some unspecified concept of quality applied across the portfolio. A number of difficulties have, however, emerged in the UK, resulting in suggestions that grading should operate on a unit-by-unit basis, and be more closely related to concepts of 'quality of work'. Amongst these difficulties are that:

On the other hand, two advantages of grading criteria which are quite separate from unit criteria should not be under-estimated. First, they are likely to be clearer about what is meant, in an overall sense, by merit or distinction; it will be very difficult to get this overview from a collection of separate criteria. Second, there is no difficulty about comparability of grades: each candidate will have been graded on the same basis, irrespective of the units chosen.

Grading on consistency or speed of performance or on an overall assignment or project

Peddie (1993a) discusses the possibility that grading should relate to some overall achievements in relation to the units; he suggests that consistency of performance or speed of completion of the qualification could be used.
It is difficult to see how criteria for consistency could be framed which are not, in principle, similar to the overall grading criteria described above. The particular difficulty with consistency is that it cannot be judged prior to the completion of the qualification.

There can be no justification for using speed of completion when a qualification is not time-limited. This would be especially true where learners were part-time or were adult-returners.

More likely to be useful for selectors is grading which is separate from the overall statement of pass versus not pass, but which is based on the completion of an integrating project or assignment by those who have passed. This project or assignment would reflect competencies gained through the course as a whole, and there would need to be a set of criteria for each grade. Byrne (1993) reports that this procedure is being introduced by the Scottish Vocational Education Council (SCOTVEC) and discusses its advantages and disadvantages. On the positive side these are that:

On the other hand:


It will be obvious from the preceding pages that none of the options proposed is wholly satisfactory. All have their disadvantages as well as their advantages, and any future action will have to be based on weighing the balance between them. Indeed, we would argue that a single universal solution is not necessary, even if it were achievable, and that it should be possible to combine or link a number of the approaches which we have discussed. Some research studies may need to be commissioned in order to verify that preferred solutions do, indeed, operate at the required levels of reliability, that the effects on VET programs are acceptable, and that selectors are satisfied that the desired outcomes will be achieved.

Obviously, factors of expense, time, staff training and general manageability will loom large in the discussions which precede action, but we would consider it vitally important to ensure that the following two points are borne carefully in mind when coming to decisions:


References

Black, H. & Wolf, A. (1990). Knowledge and competence. Sheffield: Employment Department.

Byrne, J. (1993). The option of graded assessment in competency-based education and training. Paper presented at the National Assessment Research Forum, Sydney, April 1993.

Clarke, D. J. (1993). The assessment agenda. Paper presented at the 5th European Conference for Research on Learning and Instruction, Aix en Provence, September 1993.

Cresswell, M. J. (1987). Describing examination performance: Grade criteria in public examinations. Educational Studies, 13(3), 247-265.

Further Education Unit. (1992). A basis for credit? Developing a post-16 credit accumulation and transfer framework. London: Further Education Unit.

Further Education Unit. (1993). A basis for credit? Developing a post-16 credit accumulation and transfer framework: Feedback and developments. London: Further Education Unit.

Gipps, C. V. (1992). National Curriculum assessment: A research agenda. British Educational Research Journal, 18(3), 277-286.

Gonczi, A. (1993). Integrated approaches to competency based assessment. Paper presented at the National Assessment Research Forum, Sydney, April 1993.

Gonczi, A. (1994). Competency based assessment in the professions in Australia. Assessment in Education, 1(1), 27-44.

Hager, P., Gonczi, A. & Athanasou, J. (1994). General issues about assessment of competence. Assessment and Evaluation in Higher Education, 19(1), 3-16.

Madaus, G. F. (1992). A technological and historical consideration of equity issues associated with proposals to change the nation's testing policy. Paper prepared for the Symposium on Equity and Educational Testing and Assessment. Washington DC, March 1992.

McGaw, B. (1993). Competency-based assessment: Measurement issues. Paper presented at the National Assessment Research Forum, Sydney, April 1993.

Murphy, R. (1986). The emperor has no clothes: Grade-related criteria and the GCSE. In C. V. Gipps (Ed.), The GCSE: An uncommon exam (Bedford Way Papers No. 29). London: University of London.

NCVQ. (1993). GNVQ information note. London: National Council for Vocational Qualifications.

Nuttall, D. L. (1984). Alternative assessment: Only connect ... (Report for the seminar: New Approaches to Assessment, 13 June 1984). London: Secondary Examinations Council.

Peddie, R. A. (1992a). Beyond the norm? An introduction to standards-based assessment. Wellington: New Zealand Qualifications Authority.

Peddie, R. A. (1992b). Standards of excellence: The award of merit in competency-based assessment. Wellington: New Zealand Qualifications Authority.

Peddie, R. A. (1993a). Standards, levels, grades and 'merit': A critical analysis. Paper presented at the National Assessment Research Forum, Sydney, April 1993.

Peddie, R. A. (1993b). Achieving excellence: A second report on merit in competence-based assessment. Wellington: New Zealand Qualifications Authority.

Pitman, J. (1993). The Queensland Core Skills Test: In profile and in profiles. Paper presented at the 19th Conference of the International Association for Educational Assessment, Mauritius, June 1993.

Power, C. (1986). Criterion based assessment, grading and reporting at Year 12 level. Australian Journal of Education, 30(3), 266-284.

Robertson, D. (1994). Choosing to change (The report of the HEQC CAT Development Project). London: Higher Education Quality Council.

Wilson, P. (1993). Developing a post-16 CAT framework: The technical specifications. In Discussing credit. London: Further Education Unit.

Withers, G. & Batten, M. (1990). Defining types of assessment. In B. Low & G. Withers (Eds.), Developments in school and public assessment. Melbourne: Australian Council for Educational Research.

Wolf, A. (1993). Assessment issues and problems in a criterion-based system. London: Further Education Unit.

Wolf, A., Burgess, R., Stott, H. & Veasey, J. (1994). GNVQ assessment review project: Final report. London: University of London Institute of Education.

Wood, R. & Power, C. (1987). Aspects of the competence-performance distinction: Educational, psychological and measurement issues. Journal of Curriculum Studies, 19(5), 409-424.


The original version of this article was commissioned as part of a project undertaken by Graham Maxwell with funding by the Queensland Department of Employment, Vocational Education and Industrial Relations and appeared in the final report Getting Them In: Final Report of the Review of Selection Procedures for TAFE Associate Diplomas and Diplomas in Queensland. Copyright has been released by the funding agency to allow publication in this form.


  1. The latter judgement is an interim one; in most schemes of this kind there are opportunities to remedy the shortcoming on another occasion, gain the qualification via credit accumulation, and re-enter the selection process.

  2. In fact, we understand that such a test already exists; it is the Special Tertiary Admissions Test, but we have no information about it, and do not know whether its use in this context would be feasible or desirable.

  3. For example, in oral communication, there could be a developing scale of competence which depends on the audience to which the communication is addressed. There are several dimensions which could be used to define the hierarchy: the audience could be familiar or unfamiliar to the candidate, it could be small or large, or it could have no knowledge of the subject matter of the communication or be expert.

  4. Thus, in the General National Vocational Qualification (GNVQ) in the UK a student is required, at level 3 (or advanced level), to complete 12 vocational units (of which 8 are mandatory and 4 optional) and 3 core skill units (in each of communication, application of number and information technology) at level 3 or above.

Author details: John Wilmut is an educational consultant based at The Old Post Office, Bray Shop, Callington, Cornwall, England. After moving from engineering to science teaching, he specialised in educational assessment and examinations, first in the School of Education, University of Reading, and then as research officer at the Associated Examining Board. He also worked at the Open University and The University of Queensland. He has worked on the development and evaluation of the National Vocational Qualifications (NVQs) and the General National Vocational Qualifications (GNVQs) and on various assessment and examination projects. His work has included training and advisory programs for India, China, Malawi, Jordan, Namibia and South Africa, and he has written widely on assessment issues.

Henry Macintosh is an educational consultant based at Brook Lawn, Middleton Road, Camberley, Surrey, England. After teaching History in primary and secondary schools and at the Royal Military Academy, Sandhurst, he went into examination administration and became secretary to the Southern Regional Examinations Board in England. Since 1969 he has been the Advisor on Assessment and Accreditation for the Education Division of the Employment Department, recently merged with the Department for Education. He was treasurer and membership secretary of the International Association for Educational Assessment (IAEA) from 1984 to 1995 and has travelled extensively, visiting Australia on twelve occasions. He has written widely on assessment and curriculum.

Please cite as: Wilmut, J. and Macintosh, H. G. (1997). Possible options for differentiation in competency based assessment. Queensland Journal of Educational Research, 13(3), 46-70. http://education.curtin.edu.au/iier/qjer/qjer13/wilmut.html
