
Competency based assessment and tertiary selection: Background context and issues

Graham S. Maxwell
Graduate School of Education
The University of Queensland
Competency based assessment was introduced into Australia in the past decade as part of the training reform agenda. Some parts of the vocational education and training system have adhered to a strict interpretation of the national guidelines as implying that any form of grading is inconsistent with competency based assessment. In such cases, all trainees completing an award have the same profile of results: competent on all elements of competency. This creates a problem where selection decisions must be made among applicants with the same or equivalent competency based awards, since it is then necessary to find some basis on which applicants can be differentiated.


Competency based assessment has become an accepted part of the vocational education and training (VET) system in Australia, enshrined in the National Framework for the Recognition of Training (NFROT) agreed to by all the States and Territories. The origins of competency based assessment in Australia can be traced to similar movements elsewhere in the English speaking world (Bowden & Masters, 1993; Harris, Guthrie, Hobart & Lundberg, 1995; Wheeler, 1993; Wolf, 1995). Key features of this competency based approach to training are emphasis on 'knowing how' not just 'knowing what', assessment against defined levels of competency, and attainment of a qualification by demonstrating all the relevant competencies (Keating, 1995; Tovey, 1997).[1]

The introduction of competency based training and competency based assessment into Australia has been charted by Harris et al. (1995). Although there had been earlier research, interest in the benefits of competency based training and assessment really began in the mid-1980s with a report, Standards-based Trade Training: A Discussion Paper, by Nicholas Clark and Associates in 1986. The ideas were taken up by various bodies and agencies and led to an official statement in 1989, Improving Australia's Training System, by the Federal Minister for Employment, Education and Training, advocating the adoption of a competency based training system. Further encouragement came later that year from a report by the Employment and Skills Formation Council, Industry Training in Australia: The Need for Change. Competency based training was agreed as part of the national training reform agenda by the Federal, State and Territory Ministers responsible for vocational education and training at two special conferences in April 1989 and November 1990. Formal adoption of the training reform agenda occurred in 1992, followed by processes of implementation.[2]

A broad view of what competency based assessment might mean in Australia is characterised in the following definition of competency by the National Training Board[3] (1992, p. 29):

The concept of competency focuses on what is expected of an employee in the workplace rather than on the learning process; and embodies the ability to transfer and apply skills and knowledge to new situations and environments. This is a broad concept of competency in that all aspects of work performance, and not only narrow task skills, are included. It encompasses ...
However, this broad view has proved difficult to implement and a simpler view has been adopted, one involving a less wholistic approach and requiring the enumeration of the many units of competency making up any qualification. These units of competency are defined by elements of competency (identifying the assessable outcomes or actions within a unit), performance criteria (the characteristics by which performance on the elements, and on the unit as a whole, can be judged as demonstrating competence), a range of variables (defining the industrial context of the performance criteria) and an evidence guide (covering specific requirements such as underpinning knowledge and skills) (Wheeler, 1993). Attainment of the overall qualification (certificate or diploma) requires demonstration of competence on all the elements of competency. Although several analysts have argued that competence is an inherent characteristic which cannot be observed directly, that is, have drawn a distinction between competence and performance, in practice competence is assessed in terms of successful performance on tasks which represent the element of competency being assessed. Overall competence is therefore not viewed wholistically but as a collection of elements of competency.

Whether competence is viewed wholistically or atomistically makes no difference for selection decisions based on the qualification. Successful completion of the qualification requires demonstration of overall competence, that is, with all components of the qualification being recorded as 'competent'. This means that everyone who gains the qualification has the same profile of results and differentiation among them is impossible on the basis of that profile. The issue in selection, whether for employment or education, is how to differentiate among the applicants and whether such differentiation should be provided within the qualification itself.

IMPLEMENTATION OF COMPETENCY BASED ASSESSMENT IN AUSTRALIA

There appears to be some ambiguity about the nature and extent of implementation of competency based assessment throughout the VET sector in Australia. Despite the official rhetoric on implementation, Thomson, Mathers and Quirk (1996), in their survey of Australian practice, revealed wide variation in the implementation of competency based assessment, ranging from 30 percent to 80 percent in the TAFE sector and involving less than 10 percent in the non-TAFE sector. Similarly, it has been found that employers and industries have been slow to implement competency based assessment for on-the-job training (Misko & Saunders, 1995). It seems that the basic principles of competency based assessment are a long way from being fully implemented and that many of the assessment schemes currently adopted around the country bear only a distant resemblance to a competency based assessment system.

Thomson et al. (1996) also found considerable diversity of opinion and practice on the issue of differentiated levels of proficiency beyond the basic level of competence. Some consider that grading is anathema to and inconsistent with competency based assessment. Others prefer to adopt a pragmatic stance and to provide differentiated levels of proficiency where there is a market driven demand to do so.

Different attitudes and approaches to reporting differentiated levels of proficiency were found between States and Territories, between private providers and TAFE institutes, and between employers and trainers. Thomson et al. (1996) report that grading options are allowed in the Australian Capital Territory, New South Wales, South Australia, Victoria (to some extent) and Western Australia (where appropriate) but that Queensland and Tasmania consider grading to be inconsistent with nationally agreed principles. Private providers accept grading more readily than TAFE institutes and consider that it is one way in which they can give their best trainees an edge for employment and further study. Employers are not keen to grade students in on-the-job training and see little point in doing so, but they are keen to see training providers grade students in the off-the-job components of their qualifications. This is a rather confusing situation, one which, as Thomson et al. (1996) note, requires policy clarification.

SELECTION AND COMPETENCY BASED ASSESSMENT

The general issue concerning selection on the basis of a competency based qualification is that a thoroughgoing implementation of competency based assessment offers no differentiation among students who have successfully completed a course. As Thomson et al. (1996, p. 1) indicate: 'By definition, competency-based assessment does not include the concept of grading'. Tovey (1997, p. 12) also supports this view: 'Competency is a definition of satisfactory performance of an individual. It does not provide for standards which allow grading of competence'. To complete a course successfully under this form of competency based assessment, a student must demonstrate competence on each and every one of a list of competencies. Therefore, all students receiving the course award have an identical record, namely, competence on all of the competencies. For the purposes of selection based on this award, all students receiving it must be treated identically.

This would not present a problem if it were the case that completion of agreed prerequisites guaranteed a place in the relevant tertiary course. However, this is not always the case. Entry to tertiary courses is competitive and based on merit. This is so even where negotiated credit arrangements exist. Such credit is only relevant after the student has gained entry to the course.

This last point is not always appreciated and the subtleties of the wording of agreements and advertisements are not always noticed. For example, one recent newspaper advertisement for a tertiary course said 'satisfactory completion of this course makes the student eligible to apply for entry to certain university courses'. In a strict sense, this offers nothing since anyone is 'eligible to apply'. Whether they are 'eligible to enter' and 'offered a place' are other matters dependent on the eligibility requirements and the competition for places respectively.

There are two main questions to address. One is whether it is possible to provide some means of differentiation among students under a competency based assessment system without threatening the coherence of competency based assessment. Asking the question in this way assumes that the implementation of competency based assessment itself is not under challenge. The reason for this is pragmatic rather than theoretical; the policy decision to implement competency based assessment has already been taken and implementation is proceeding. In this situation, it appears necessary that any additional assessment policies should not contradict the basic philosophy and assumptions of competency based assessment.

The other question is whether, even if appropriate differentiation is possible, it is desirable, or whether the problem can be satisfactorily resolved in some other way. Resolving it in some other way would mean making use of other information not directly related to performance in the VET course. Three types of information might be obtainable: background characteristics, other educational performances, or special test results. All three are probably feasible, though each raises problems of equity or validity. Background characteristics might be considered in terms of the amount and type of previous relevant experience that can be documented, such as work, study or service activities; however, equitable mechanisms would have to be devised to process this information and to allow it to be compared and combined with other information. Other educational performances, such as results in other courses, can be processed to produce rankings of applicants; however, arbitrarily replacing one qualification with another raises the question of which qualification is the most relevant. The same can be said of special tests; while they can provide a pragmatic solution to the selection problem, they may not provide the most valid information.

Both of these questions are considered in the following articles in this volume. In the end, the conclusion is: yes, differentiation within a competency based system is possible without violating the principles of competency based assessment; and, yes, it is desirable to do this rather than to resort to other, less relevant, information in making selection decisions.

ELIGIBILITY VERSUS SELECTION

In any system of selection, it is important to distinguish eligibility decisions from selection decisions. Eligibility is established in terms of minimum requirements for admission and is usually intended to ensure that applicants have a reasonable chance of success in the course. On the other hand, selection involves choosing among the eligible applicants when there are fewer places available than applicants. This is not to say that students who are not selected would not be successful. Provided that the eligibility requirements have been appropriately determined, all eligible applicants should have a reasonable chance of success.

Eligibility answers the question: 'Which of the applicants could be expected to engage the course successfully?' Selection answers the question: 'What is the order of preference among the eligible applicants?'

Powles (1990) has pointed out that there is widespread confusion between eligibility and selection criteria. Beswick (1987) has emphasised the need for a differentiation of eligibility and selection decisions. Magean (1985) also, in the context of trade courses, supports the argument presented here for eligibility requirements to be based on analysis of baseline course expectations and for selection criteria to differentiate among eligible applicants in terms of their relative capability.

Eligibility needs to be determined by the demands of the course to which admission is sought. Preferably, these requirements should be expressed in terms of prerequisite qualities and capabilities. These qualities and capabilities should be sufficient for the student to engage the course. That is, they should provide the base for learning in the course, the foundation on which new learning can be built. They are not in themselves sufficient to ensure success in the course; success requires appropriate learning conditions and appropriate student effort. However, provided the course is structured and taught in a way which supports learning, a committed student should find the prerequisites sufficient for making satisfactory progress.

There are some situations where eligibility barriers might be lowered or removed entirely. These include situations where the demands of the course are relatively low, the course can be adapted to match student capabilities, the consequences of failure are not serious for the student, or wastage of resources does not matter.

Another situation where eligibility barriers might be lowered or removed is where competition is so great that only the strongest applicants can be selected, that is, where the selection cutoffs are so high that there is never any danger of those selected failing to satisfy any reasonable minimum qualifying standard. Some university courses have adopted this approach, though none have eliminated eligibility requirements entirely.

MERIT AS A BASIS FOR SELECTION

Selection becomes necessary when there are more applicants than places. In a democratic society, the basis for selection decisions, that is, for ranking eligible applicants, is considered to be 'merit'. Merit is a loosely used term which can have many meanings. It is usually considered to refer to appropriate qualities and capabilities which allow differentiation among applicants, typically using numerical scales or rank orderings. What is considered appropriate varies with the context and depends on particular assumptions, perceptions and values. In general, students with greater 'potential' are preferred over others. Potential can only be assessed indirectly through measures of achievement, effort, ability and (sometimes) personality, moderated perhaps by considerations of disadvantage.

Measures of achievement, especially of course results, recognise both the learning which has occurred and the effort which was involved. The use of course results places a premium on demonstrated capability in a curriculum of study. It recognises that, other things being equal, students with a better record of past performance are likely also to perform better in the future. Past performance is sometimes used as an indicator of ability and effort and in other cases as an indicator of strength of preparation for future studies. In both cases, higher achievement is valued because it indicates a more effective and more efficient student.

In some cases, relevant achievement may relate to specific knowledge and skills, as demonstrated in particular subjects or courses. In other cases, relevant achievement may be more general, that is, on any of a variety of subjects or courses. Specificity and generality must usually be balanced. The problem is that the more restrictive the relevant subjects or courses become, the more this restriction acts as an eligibility barrier as well as a selection criterion. Further, potential applicants will find it difficult to keep their options open. Keeping options open is a desirable strategy in the face of uncertainty about selection.

Other measures of achievement can be derived from work experience, whether in terms of its amount, diversity or quality. The extent to which this might substitute for or combine with other measures of achievement needs careful consideration. An alternative is to assess work experience in terms of the equivalence between what has been learned in the work situation and what has typically been learned in formal studies. This requires case by case assessment which allows recognition of prior learning (RPL).

Relevant qualities and capabilities might be assessed in other ways than through past achievement. Auditions, interviews, folios and tests are possible contenders. As with achievement measures, the assessment procedures need to be valid, reliable, equitable and manageable. That is, the procedures must be shown to provide a proper assessment of the defined qualities and capabilities, to provide measures with a high degree of consistency (that is, uninfluenced by the particular time, particular circumstances and particular assessors involved in each assessment), to be free of bias towards any individual or group and to satisfy various pragmatic requirements (especially feasibility, timeliness, affordability, openness to freedom of information and resilience to judicial review).

THE LIMITATIONS OF UNDIFFERENTIATED ASSESSMENT OF ACHIEVEMENT

Whether or not selection makes use of other measures besides achievement measures, past achievement is typically an important consideration. In some cases, other measures are difficult to obtain or do not satisfy requirements of validity, reliability, equity and manageability. Achievement measures, although not necessarily perfect, have the virtue of already satisfying various institutional accountability mechanisms in terms of their validity, reliability and equity and of being easily manageable because of their acceptance and availability.

The use of competency based assessments in eligibility requirements is neither contentious nor problematic. It becomes contentious and problematic only when eligible applicants must be differentiated for selection, in some cases with the requirement that they be rank ordered. Unless additional information allowing differentiation of students on the basis of their achievement is available, differentiation must be obtained through some of the other kinds of measures already discussed. In some cases these measures may be considered less appropriate than achievement measures.

Completion of the requirements for a qualification which involves competency based assessment is interpreted by some as equivalent to obtaining a 'bare pass' when compared with other similar qualifications using graded assessment. For example, in some state tertiary entrance systems completion of an award using competency based assessment attracts the same tertiary entrance rank as the minimum passing grade point average for a similar award. Others have argued that 'competence' implies a higher standard than 'bare pass'. The case for an appropriate equivalence needs to be substantiated. Setting the equivalence at a higher level does not resolve the issue of differentiation within the group where that may be necessary. Strictly, differentiation is unnecessary when applicants with this award do not apply en bloc for a particular tertiary course or where they do not fall en bloc at a quota boundary. However, it is impossible to predict such situations in advance.

A further issue is the perceived disadvantage of those students who would have been able, under a graded assessment system, to demonstrate capability well beyond that of a student who just managed to pass. Such students have no opportunity under the competency based assessment procedures to show their capability of going beyond the minimum performance level necessary for competence.

It should be noted that the term 'minimum' is sometimes interpreted as meaning 'low' but it is helpful to distinguish between 'minimum' and 'minimal'. A minimum can be set at quite a high level, that is, can be quite demanding rather than 'minimal'. Unfortunately, whether the standard is 'low' or 'high' depends on the demand characteristics of the task. For example, the requirement that four out of five questions be answered correctly for some competencies is a minimum standard for competence. But whether this is a high standard or a low 'minimal' standard depends on the complexity of the response called for by the questions.

Typically, too, if differentiation at a quota boundary becomes necessary, reference will be made to other qualifications in the applicant's profile. This disadvantages applicants without additional qualifications and forces discrimination at the margin to be made on qualifications which may be less recent and less valid.

THE FOLLOWING ARTICLES

The following two articles in this issue of the journal, the first by Peddie and the second by Wilmut and Macintosh, provide different perspectives on the problem of competency based assessment in relation to selection. Both support the use of various methods of differentiation of student proficiency within a competency based assessment system. Their suggestions need adaptation to the precise circumstances pertaining in Queensland, and more generally in Australia, but they provide useful insights into ways of providing additional information on student capabilities without breaching the rationale for competency based assessment.

In particular, it should be noted that Peddie assumes that it is possible to frame selection in terms of a sub-quota for a particular university course and to apply this sub-quota to a cohort of students in a particular VET diploma course. In fact, sub-quotas are not used in this way in Queensland, nor generally anywhere in Australia. Given the dynamic nature of the selection system, it is difficult to see how they could be. Students may use their VET qualifications as a springboard into any number of different university courses, according to new or old aspirations. This does not, however, invalidate the broad thrust of Peddie's argument, nor indeed the relevance of his recommendations for circumstances which may pertain elsewhere.

The ideas in this issue are not the last word on these issues. However, they make an important contribution to the debate. It is hoped that they will stimulate further discussion of the issues and encourage further development of the options. Theoretical underpinnings for incorporating merit assessment within competency based assessment have been provided. What is needed now is the development of practical methods of implementation.

REFERENCES

Beswick, D. (1987). Current issues in student selection. Journal of Tertiary Education Administration, 9 (1), 5-32.

Bowden, J. A. & Masters, G. N. (1993). Implications for higher education of a competency-based approach to education and training. Canberra: Australian Government Printing Service.

Harris, R., Guthrie, H., Hobart, B. & Lundberg, D. (1995). Competency-based training: Between a rock and a whirlpool. Melbourne: Macmillan Education Australia.

Keating, J. (1995). Australian training reform: Implications for schools. Carlton, Victoria: Curriculum Corporation.

Magean, P. (1985). Selection to pre-employment trades based courses (Working Paper No. 5). Adelaide: VET National Centre for Research and Development.

Misko, J. & Saunders, J. (1995). Needs of special workers. In W. C. Hall (Ed.), Key aspects of competency-based assessment. Leabrook, South Australia: National Centre for Vocational Education Research.

National Training Board (1992). National competency standards: Policy and guidelines (2nd ed.). Canberra: National Training Board.

Powles, M. (1990). Access and selection to high demand VET courses. Canberra: Australian Government Printing Service.

Thomson, P., Mathers, R. & Quirk, R. (1996). The grade debate. Leabrook, South Australia: National Centre for Vocational Education Research.

Tovey, M. D. (1997). Training in Australia: Design, delivery, evaluation, management. Sydney: Prentice Hall.

Wheeler, L. (1993). Reform of Australian Vocational Education and Training: A competency-based system. In C. Collins (Ed.), Competencies: The competencies debate in Australian education and training. Canberra: Australian College of Education.

Wolf, A. (1995). Competence-based assessment. Buckingham and Philadelphia: Open University Press.

ACKNOWLEDGEMENT

The original version of this article was produced as part of a project undertaken by Graham Maxwell with funding by the Queensland Department of Employment, Vocational Education and Industrial Relations and appeared in the final report Getting Them In: Final Report of the Review of Selection Procedures for TAFE Associate Diplomas and Diplomas in Queensland. Copyright has been released by the funding agency to allow publication in this form.

ENDNOTES

  1. It is considered by some that the term 'competency based assessment' has been misapplied and that it should be 'competence based assessment' as has been adopted by Wolf (1995). The distinction some would make between 'competency' and 'competence' is the same as that between 'criterion' and 'standard', that is, between a characteristic or dimension taken into consideration in the assessment and a performance level or achievement level satisfying a quality requirement on those dimensions. In these terms, 'competency' refers to a 'criterion' and 'competence' refers to a 'standard'. Unfortunately, there is a parallel confusion about whether to use the term 'criteria based assessment' or 'standards based assessment'. The term 'competency based assessment' has been retained here because it is more commonly used.

  2. It is not intended here to give a full history of this implementation. Further details are provided in Harris et al. (1995).

  3. The National Training Board has been replaced by the Standards and Curriculum Council (SCC) within the Australian National Training Authority (ANTA).

Please cite as: Maxwell, G. S. (1997). Competency based assessment and tertiary selection: Background context and issues. Queensland Journal of Educational Research, 13(3), 4-15. http://education.curtin.edu.au/iier/qjer/qjer13/maxwell1.html

